Sample records for affects visual perception

  1. Coherent modulation of stimulus colour can affect visually induced self-motion perception.

    PubMed

    Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji

    2010-01-01

    The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. Colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the type of colour modulation; both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that the observer's perception of illumination is critical for self-motion perception, and that rapid variation of perceived illumination may impair the reliability of visual information in determining self-motion.

  2. Perception and Attention for Visualization

    ERIC Educational Resources Information Center

    Haroz, Steve

    2013-01-01

    This work examines how a better understanding of visual perception and attention can impact visualization design. In a collection of studies, I explore how different levels of the visual system can measurably affect a variety of visualization metrics. The results show that expert preference, user performance, and even computational performance are…

  3. Non-conscious visual cues related to affect and action alter perception of effort and endurance performance

    PubMed Central

    Blanchfield, Anthony; Hardy, James; Marcora, Samuele

    2014-01-01

    The psychobiological model of endurance performance proposes that endurance performance is determined by a decision-making process based on perception of effort and potential motivation. Recent research has reported that effort-based decision-making during cognitive tasks can be altered by non-conscious visual cues relating to affect and action. The effects of these non-conscious visual cues on effort and performance during physical tasks are however unknown. We report two experiments investigating the effects of subliminal priming with visual cues related to affect and action on perception of effort and endurance performance. In Experiment 1 thirteen individuals were subliminally primed with happy or sad faces as they cycled to exhaustion in a counterbalanced and randomized crossover design. A paired t-test (happy vs. sad faces) revealed that individuals cycled significantly longer (178 s, p = 0.04) when subliminally primed with happy faces. A 2 × 5 (condition × iso-time) ANOVA also revealed a significant main effect of condition on rating of perceived exertion (RPE) during the time to exhaustion (TTE) test with lower RPE when subjects were subliminally primed with happy faces (p = 0.04). In Experiment 2, a single-subject randomization tests design found that subliminal priming with action words facilitated a significantly longer TTE (399 s, p = 0.04) in comparison to inaction words. Like Experiment 1, this greater TTE was accompanied by a significantly lower RPE (p = 0.03). These experiments are the first to show that subliminal visual cues relating to affect and action can alter perception of effort and endurance performance. Non-conscious visual cues may therefore influence the effort-based decision-making process that is proposed to determine endurance performance. Accordingly, the findings raise notable implications for individuals who may encounter such visual cues during endurance competitions, training, or health related exercise. PMID:25566014
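
    The headline comparison in Experiment 1 (happy vs. sad faces) rests on a paired t-test over per-rider differences in time to exhaustion. Below is a minimal sketch of that analysis with entirely made-up difference scores, since the abstract does not report per-participant data:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical per-rider differences in time to exhaustion (happy minus sad
# primes), in seconds. Illustrative values only, NOT the study's raw data.
diffs = [150, 210, 175, 160, 195, 185, 170, 140, 205, 190, 165, 200, 180]

n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))  # paired t statistic, df = n - 1
print(f"mean difference = {mean(diffs):.0f} s, t({n - 1}) = {t:.2f}")
```

    With 13 riders the test has 12 degrees of freedom, so the observed t is compared against a two-tailed critical value of about 2.18 at p = 0.05.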

  4. The effect of contextual sound cues on visual fidelity perception.

    PubMed

    Rojas, David; Cowan, Brent; Kapralos, Bill; Collins, Karen; Dubrowski, Adam

    2014-01-01

    Previous work has shown that sound can affect the perception of visual fidelity. Here we build upon this previous work by examining the effect of contextual sound cues (i.e., sounds that are related to the visuals) on visual fidelity perception. Results suggest that contextual sound cues do influence visual fidelity perception and, more specifically, our perception of visual fidelity increases with contextual sound cues. These results have implications for designers of multimodal virtual worlds and serious games that, with the appropriate use of contextual sounds, can reduce visual rendering requirements without a corresponding decrease in the perception of visual fidelity.

  5. Auditory environmental context affects visual distance perception.

    PubMed

    Etchemendy, Pablo E; Abregú, Ezequiel; Calcagno, Esteban R; Eguia, Manuel C; Vechiatti, Nilda; Iasi, Federico; Vergara, Ramiro O

    2017-08-03

    In this article, we show that visual distance perception (VDP) is influenced by the auditory environmental context through reverberation-related cues. We performed two VDP experiments in two dark rooms with extremely different reverberation times: an anechoic chamber and a reverberant room. Subjects assigned to the reverberant room perceived the targets farther than subjects assigned to the anechoic chamber. Also, we found a positive correlation between the maximum perceived distance and the auditorily perceived room size. We next performed a second experiment in which the same subjects of Experiment 1 were interchanged between rooms. We found that subjects preserved the responses from the previous experiment provided they were compatible with the present perception of the environment; if not, perceived distance was biased towards the auditorily perceived boundaries of the room. Results of both experiments show that the auditory environment can influence VDP, presumably through reverberation cues related to the perception of room size.

  6. The prevalence of visual hallucinations in non-affective psychosis, and the role of perception and attention.

    PubMed

    van Ommen, M M; van Beilen, M; Cornelissen, F W; Smid, H G O M; Knegtering, H; Aleman, A; van Laar, T

    2016-06-01

    Little is known about visual hallucinations (VH) in psychosis. We investigated the prevalence and the role of bottom-up and top-down processing in VH. The prevailing view is that VH are probably related to altered top-down processing, rather than to distorted bottom-up processing. Conversely, VH in Parkinson's disease are associated with impaired visual perception and attention, as proposed by the Perception and Attention Deficit (PAD) model. Auditory hallucinations (AH) in psychosis, however, are thought to be related to increased attention. Our retrospective database study included 1119 patients with non-affective psychosis and 586 controls. The Community Assessment of Psychic Experiences established the VH rate. Scores on visual perception tests [Degraded Facial Affect Recognition (DFAR), Benton Facial Recognition Task] and attention tests [Response Set-shifting Task, Continuous Performance Test-HQ (CPT-HQ)] were compared between 75 VH patients, 706 non-VH patients and 485 non-VH controls. The lifetime VH rate was 37%. The patient groups performed similarly on cognitive tasks; both groups showed worse perception (DFAR) than controls. Non-VH patients showed worse attention (CPT-HQ) than controls, whereas VH patients did not perform differently. We did not find significant VH-related impairments in bottom-up processing or direct top-down alterations. However, the results suggest a relatively spared attentional performance in VH patients, whereas face perception and processing speed were equally impaired in both patient groups relative to controls. This would match better with the increased attention hypothesis than with the PAD model. Our finding that VH frequently co-occur with AH may support an increased attention-induced 'hallucination proneness'.

  7. Visually suboptimal bananas: How ripeness affects consumer expectation and perception.

    PubMed

    Symmank, Claudia; Zahn, Susann; Rohm, Harald

    2018-01-01

    One reason for the significant amount of food that is wasted in developed countries is that consumers often expect visually suboptimal food to be less palatable. Using bananas as an example, the objective of this study was to determine how appearance affects consumer overall liking, the rating of sensory attributes, purchase intention, and the intended use of bananas. The ripeness degree (RD) of the samples was adjusted to RD 5 (control) and RD 7 (more ripened, visually suboptimal). After preliminary experiments, a total of 233 participants were asked to judge their satisfaction with the intensity of sensory attributes that referred to flavor, taste, and texture using just-about-right scales. Subjects who received peeled samples were asked after tasting, whereas subjects who received unpeeled bananas judged expectation and, after peeling and tasting, perception. Expected overall liking and purchase intention were significantly lower for RD 7 bananas. Purchase intention was still significantly different between RD 5 and RD 7 after tasting, whereas no difference in overall liking was observed. Significant differences between RD 5 and RD 7 were also observed when participants were asked about their intended use of the bananas. Concerning the sensory attributes, penalty analysis revealed that only the firmness of the RD 7 bananas was still not just-about-right after tasting. The importance that consumers attribute to the shelf-life of food had a pronounced impact on purchase intention for bananas with different ripeness degrees. In the case of suboptimal bananas, the results demonstrate a positive relationship between sensory perception and both overall liking and purchase intention. Convincing consumers that visually suboptimal food is still tasty is therefore highly relevant when recommending different ways of communicating about such food. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. How does parents' visual perception of their child's weight status affect their feeding style?

    PubMed

    Yilmaz, Resul; Erkorkmaz, Ünal; Ozcetin, Mustafa; Karaaslan, Erhan

    2013-01-01

    Eating style is one of the prominent factors that determine energy intake. One of the influencing factors that determine parental feeding style is parental perception of the weight status of the child. The aim of this study is to evaluate the relationship between maternal visual perception of their children's weight status and their feeding style. A cross-sectional survey was completed by the mothers of 380 preschool children aged 5 to 7 years (mean age 6.14 years). Visual perception scores were measured with a sketch, and maternal feeding style was measured with the validated "Parental Feeding Style Questionnaire". The parental feeding dimensions "emotional feeding" and "encouragement to eat" subscale scores were low in overweight children according to the visual perception classification. "Emotional feeding" and "permissive control" subscale scores were statistically different between children classified as correctly perceived and those in the incorrectly low perceived group due to maternal misperception. Various feeding styles were related to maternal visual perception. The best approach to preventing obesity and underweight may be to focus on achieving correct parental perception of the weight status of their children, thus improving parental skills and leading them to implement proper feeding styles. Copyright © AULA MEDICA EDICIONES 2013. Published by AULA MEDICA. All rights reserved.

  9. Re-entrant Projections Modulate Visual Cortex in Affective Perception: Evidence From Granger Causality Analysis

    PubMed Central

    Keil, Andreas; Sabatinelli, Dean; Ding, Mingzhou; Lang, Peter J.; Ihssen, Niklas; Heim, Sabine

    2013-01-01

    Re-entrant modulation of visual cortex has been suggested as a critical process for enhancing perception of emotionally arousing visual stimuli. This study explores how the time information inherent in large-scale electrocortical measures can be used to examine the functional relationships among the structures involved in emotional perception. Granger causality analysis was conducted on steady-state visual evoked potentials elicited by emotionally arousing pictures flickering at a rate of 10 Hz. This procedure allows one to examine the direction of neural connections. Participants viewed pictures that varied in emotional content, depicting people in neutral contexts, erotica, or interpersonal attack scenes. Results demonstrated increased coupling between higher-order and visual cortical areas when viewing emotionally arousing content. Specifically, intraparietal to inferotemporal and precuneus to calcarine connections were stronger for emotionally arousing picture content. Thus, we provide evidence for re-entrant signal flow during emotional perception, which originates from higher tiers and enters lower tiers of visual cortex. PMID:18095279
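
    The Granger-causality logic used here (past activity in one signal improves prediction of another beyond that signal's own past) can be sketched as a lag-1 F test on synthetic time series. The data and the single-lag restriction below are illustrative assumptions, not the study's EEG pipeline:

```python
import numpy as np

def granger_f(x, y, lag=1):
    # F statistic for "x Granger-causes y": compare an autoregressive model
    # of y against the same model augmented with lagged x.
    Y = y[lag:]
    restricted = np.column_stack([np.ones(len(Y)), y[:-lag]])
    full = np.column_stack([restricted, x[:-lag]])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(restricted), rss(full)
    return (rss_r - rss_f) / (rss_f / (len(Y) - full.shape[1]))

rng = np.random.default_rng(0)
x = rng.standard_normal(300)
y = np.zeros(300)
for t in range(1, 300):  # y is driven by lagged x, so x should Granger-cause y
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.standard_normal()
print(f"F x->y: {granger_f(x, y):.1f}, F y->x: {granger_f(y, x):.1f}")
```

    The asymmetry of the two F values is what licenses a directional claim such as "intraparietal to inferotemporal".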

  10. Visual perception of ADHD children with sensory processing disorder.

    PubMed

    Jung, Hyerim; Woo, Young Jae; Kang, Je Wook; Choi, Yeon Woo; Kim, Kyeong Mi

    2014-04-01

    The aim of the present study was to investigate the visual perception difference between ADHD children with and without sensory processing disorder, and the relationship between sensory processing and visual perception in children with ADHD. Participants were 47 outpatients, aged 6-8 years, diagnosed with ADHD. After excluding those who met exclusion criteria, 38 subjects were clustered into two groups, ADHD children with and without sensory processing disorder (SPD), using the SSP reported by their parents; subjects then completed the K-DTVP-2. Spearman correlation analysis was run to determine the relationship between sensory processing and visual perception, and the Mann-Whitney U test was conducted to compare the K-DTVP-2 scores of the two groups. The ADHD children with SPD performed inferiorly to ADHD children without SPD on the 3 quotients of the K-DTVP-2. The GVP score of the K-DTVP-2 was related to the Movement Sensitivity section (r=0.368*) and the Low Energy/Weak section of the SSP (r=0.369*). The results of the present study suggest that among children with ADHD, visual perception is lower in those children with co-morbid SPD. Also, visual perception may be related to sensory processing, especially in the reactions of vestibular and proprioceptive senses. Regarding academic performance, it is necessary to consider how sensory processing issues affect visual perception in children with ADHD.
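
    The two statistics named above (Spearman correlation and the Mann-Whitney U test) are both rank-based. Here is a minimal sketch of the Spearman step, with invented SSP and K-DTVP-2 scores standing in for the clinical data:

```python
def spearman_rho(x, y):
    # Spearman's rho via the classic rank-difference formula (assumes no ties).
    n = len(x)
    rank = lambda v: [sorted(v).index(e) + 1 for e in v]
    d2 = sum((a - b) ** 2 for a, b in zip(rank(x), rank(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical SSP section scores and K-DTVP-2 GVP scores (illustrative only).
ssp = [18, 22, 25, 30, 27, 21, 34, 29]
gvp = [88, 92, 101, 110, 98, 90, 120, 105]
print(round(spearman_rho(ssp, gvp), 3))
```

    Real section scores often contain ties, in which case average ranks (or a library routine such as scipy.stats.spearmanr) should be used instead.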

  11. A Comparative Study on the Visual Perceptions of Children with Attention Deficit Hyperactivity Disorder

    NASA Astrophysics Data System (ADS)

    Ahmetoglu, Emine; Aral, Neriman; Butun Ayhan, Aynur

    This study was conducted in order to (a) compare the visual perceptions of seven-year-old children diagnosed with attention deficit hyperactivity disorder with those of normally developing children of the same age and development level and (b) determine whether the visual perceptions of children with attention deficit hyperactivity disorder vary with respect to gender, having received preschool education and parents' educational level. A total of 60 children, 30 with attention deficit hyperactivity disorder and 30 with normal development, were assigned to the study. Data about children with attention deficit hyperactivity disorder and their families were collected by using a General Information Form and the visual perception of children was examined through the Frostig Developmental Test of Visual Perception. The Mann-Whitney U-test and Kruskal-Wallis variance analysis were used to determine whether there was a difference between the visual perceptions of children with normal development and those diagnosed with attention deficit hyperactivity disorder and to discover whether the variables of gender, preschool education and parents' educational status affected the visual perceptions of children with attention deficit hyperactivity disorder. The results showed that there was a statistically meaningful difference between the visual perceptions of the two groups and that the visual perceptions of children with attention deficit hyperactivity disorder were affected meaningfully by gender, preschool education and parents' educational status.

  12. Adaptive design of visual perception experiments

    NASA Astrophysics Data System (ADS)

    O'Connor, John D.; Hixson, Jonathan; Thomas, James M., Jr.; Peterson, Matthew S.; Parasuraman, Raja

    2010-04-01

    Meticulous experimental design may not always prevent confounds from affecting experimental data acquired during visual perception experiments. Although experimental controls reduce the potential effects of foreseen sources of interference, interaction, or noise, they are not always adequate for preventing the confounding effects of unforeseen forces. Visual perception experimentation is vulnerable to unforeseen confounds because of the nature of the associated cognitive processes involved in the decision task. Some confounds are beyond the control of experimentation, such as what a participant does immediately prior to experimental participation, or the participant's attitude or emotional state. Other confounds may occur through ignorance of practical control methods on the part of the experiment's designer. The authors conducted experiments related to experimental fatigue and initially achieved significant results that were, upon re-examination, attributable to a lack of adequate controls. Re-examination of the original results and the processes and events that led to them yielded a second experimental design with more experimental controls and significantly different results. The authors propose that designers of visual perception experiments can benefit from planning to use a test-fix-test or adaptive experimental design cycle, so that unforeseen confounds in the initial design can be remedied.

  13. Visual Imagery without Visual Perception?

    ERIC Educational Resources Information Center

    Bertolo, Helder

    2005-01-01

    The question regarding visual imagery and visual perception remains an open issue. Many studies have tried to understand if the two processes share the same mechanisms or if they are independent, using different neural substrates. Most research has been directed towards the need of activation of primary visual areas during imagery. Here we review…

  14. Affective and physiological correlates of the perception of unimodal and bimodal emotional stimuli.

    PubMed

    Rosa, Pedro J; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Gamito, Pedro

    2017-08-01

    Despite the multisensory nature of perception, previous research on emotions has focused on unimodal emotional cues with visual stimuli. To the best of our knowledge, there is no evidence on the extent to which incongruent emotional cues from visual and auditory sensory channels affect pupil size. Our aims were to investigate the effects of perceiving audiovisual emotional information on physiological and affective responses, and to determine the impact of mismatched cues in emotional perception on these physiological indices. Pupil size, electrodermal activity and affective subjective responses were recorded while 30 participants were exposed to visual and auditory stimuli with varied emotional content in three different experimental conditions: pictures and sounds presented alone (unimodal), emotionally matched audio-visual stimuli (bimodal congruent) and emotionally mismatched audio-visual stimuli (bimodal incongruent). The data revealed no effect of emotional incongruence on physiological and affective responses. On the other hand, pupil size covaried with skin conductance response (SCR), but the subjective experience was partially dissociated from autonomic responses. Emotional stimuli are able to trigger physiological responses regardless of valence, sensory modality or level of emotional congruence.

  15. [Visual perception and its disorders].

    PubMed

    Ruf-Bächtiger, L

    1989-11-21

    It is the brain and not the eye that decides what is perceived. In spite of this fact, quite a lot is known about the functioning of the eye and the first sections of the optic tract, but little about the actual process of perception. Examination of visual perception and its malfunctions therefore relies on certain hypotheses. Proceeding from the model of functional brain systems, various functional domains of visual perception can be distinguished. Among the more important of these domains are: digit span, visual discrimination and figure-ground discrimination. Evaluation of these functional domains allows us to understand those children with disorders of visual perception better and to develop more effective treatment methods.

  16. Interoceptive signals impact visual processing: Cardiac modulation of visual body perception.

    PubMed

    Ronchi, Roberta; Bernasconi, Fosco; Pfeiffer, Christian; Bello-Ruiz, Javier; Kaliuzhna, Mariia; Blanke, Olaf

    2017-09-01

    Multisensory perception research has largely focused on exteroceptive signals, but recent evidence has revealed the integration of interoceptive signals with exteroceptive information. Such research revealed that heartbeat signals affect sensory (e.g., visual) processing: however, it is unknown how they impact the perception of body images. Here we linked our participants' heartbeat to visual stimuli and investigated the spatio-temporal brain dynamics of cardio-visual stimulation on the processing of human body images. We recorded visual evoked potentials with 64-channel electroencephalography while showing a body or a scrambled-body (control) that appeared at the frequency of the on-line recorded participants' heartbeat or not (not-synchronous, control). Extending earlier studies, we found a body-independent effect, with cardiac signals enhancing visual processing during two time periods (77-130 ms and 145-246 ms). Within the second (later) time-window we detected a second effect characterised by enhanced activity in parietal, temporo-occipital, inferior frontal, and right basal ganglia-insula regions, but only when non-scrambled body images were flashed synchronously with the heartbeat (208-224 ms). In conclusion, our results highlight the role of interoceptive information for the visual processing of human body pictures within a network integrating cardio-visual signals of relevance for perceptual and cognitive aspects of visual body processing. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Visual motion perception predicts driving hazard perception ability.

    PubMed

    Lacherez, Philippe; Au, Sandra; Wood, Joanne M

    2014-02-01

    Our aim was to examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity, and whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields, and two tests of motion perception: sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving and has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
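
    The core claim (motion thresholds predict hazard response times) is a regression. Below is a toy ordinary-least-squares fit, with fabricated threshold and HPT values in place of the participants' data:

```python
from statistics import mean

def ols(x, y):
    # Simple least-squares slope and intercept for a single predictor.
    mx, my = mean(x), mean(y)
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Hypothetical data: log10 Gabor drift-rate thresholds (cps) vs. HPT response time (s).
thr = [-2.0, -1.7, -1.3, -1.0, -0.8, -0.6, -0.5]
rt  = [ 2.1,  2.3,  2.6,  3.0,  3.2,  3.5,  3.6]
slope, intercept = ols(thr, rt)
print(f"RT = {intercept:.2f} + {slope:.2f} * log-threshold")
```

    The positive slope mimics the reported direction of the effect (poorer motion sensitivity, slower hazard responses); the reported partial effect would additionally require the acuity, contrast sensitivity and field covariates in the model.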

  18. Visual and auditory perception in preschool children at risk for dyslexia.

    PubMed

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read, thus, these findings support the theory of temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Audiovisual associations alter the perception of low-level visual motion

    PubMed Central

    Kafaligonul, Hulusi; Oluk, Can

    2015-01-01

    Motion perception is a pervasive feature of vision and is affected both by the immediate pattern of sensory input and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level attention-based motion system and that early-level visual motion processing has some potential role. PMID:25873869

  20. Illusion in reality: visual perception in displays

    NASA Astrophysics Data System (ADS)

    Kaufman, Lloyd; Kaufman, James H.

    2001-06-01

    Research into visual perception ultimately affects display design. Advances in display technology affect, in turn, our study of perception. Although this statement is too general to provoke controversy, this paper presents a real-life example that may prompt display engineers to make greater use of basic knowledge of visual perception, and encourage those who study perception to track more closely leading-edge display technology. Our real-life example deals with an ancient problem, the moon illusion: why does the horizon moon appear so large while the elevated moon looks so small? This was a puzzle for many centuries. Physical explanations, such as refraction by the atmosphere, are incorrect. The difference in apparent size may be classified as a misperception, so the answer must lie in the general principles of visual perception. The factors underlying the moon illusion must be the same factors as those that enable us to perceive the sizes of ordinary objects in visual space. Progress toward solving the problem has been irregular, since methods for actually measuring the illusion under a wide range of conditions were lacking. An advance in display technology made possible a serious and methodologically controlled study of the illusion. This technology was the first heads-up display. In this paper we describe how the heads-up display concept made it possible to test several competing theories of the moon illusion, and how it led to an explanation that stood for nearly 40 years. We also consider the criticisms of that explanation and how the optics of the heads-up display also played a role in providing data for the critics. Finally, we describe our own advance on the original methodology. This advance was motivated by previously unrelated principles of space perception. We used a stereoscopic heads-up display to test alternative hypotheses about the illusion and to discriminate between two classes of mutually contradictory theories. At its core, the

  21. The effect of phasic auditory alerting on visual perception.

    PubMed

    Petersen, Anders; Petersen, Annemarie Hilkjær; Bundesen, Claus; Vangkilde, Signe; Habekost, Thomas

    2017-08-01

    Phasic alertness refers to a short-lived change in the preparatory state of the cognitive system following an alerting signal. In the present study, we examined the effect of phasic auditory alerting on distinct perceptual processes, unconfounded by motor components. We combined an alerting/no-alerting design with a pure accuracy-based single-letter recognition task. Computational modeling based on Bundesen's Theory of Visual Attention was used to examine the effect of phasic alertness on visual processing speed and threshold of conscious perception. Results show that phasic auditory alertness affects visual perception by increasing the visual processing speed and lowering the threshold of conscious perception (Experiment 1). By manipulating the intensity of the alerting cue, we further observed a positive relationship between alerting intensity and processing speed, which was not seen for the threshold of conscious perception (Experiment 2). This was replicated in a third experiment, in which pupil size was measured as a physiological marker of alertness. Results revealed that the increase in processing speed was accompanied by an increase in pupil size, substantiating the link between alertness and processing speed (Experiment 3). The implications of these results are discussed in relation to a newly developed mathematical model of the relationship between levels of alertness and the speed with which humans process visual information. Copyright © 2017 Elsevier B.V. All rights reserved.
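
    In Bundesen's Theory of Visual Attention, single-letter report accuracy is often modelled as exponential accrual at rate v once exposure duration exceeds a perceptual threshold t0. The sketch below uses invented parameter values (the study's fitted estimates are not given in the abstract):

```python
from math import exp

def tva_accuracy(exposure_ms, v, t0):
    # TVA-style single-letter report probability: accrual at rate v (items/s)
    # begins once the exposure duration exceeds the threshold t0 (ms).
    if exposure_ms <= t0:
        return 0.0
    return 1 - exp(-v * (exposure_ms - t0) / 1000)

# Hypothetical parameters: alerting raises v and lowers t0 (illustrative only).
for label, v, t0 in [("alert", 40, 20), ("no-alert", 25, 35)]:
    print(label, [round(tva_accuracy(t, v, t0), 2) for t in (20, 50, 100, 200)])
```

    Raising v steepens the accuracy curve while lowering t0 shifts its onset earlier, which is exactly the two-parameter dissociation the experiments probe.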

  2. Refractive errors affect the vividness of visual mental images.

    PubMed

    Palermo, Liana; Nori, Raffaella; Piccardi, Laura; Zeri, Fabrizio; Babino, Antonio; Giusberti, Fiorella; Guariglia, Cecilia

    2013-01-01

    The hypothesis that visual perception and mental imagery are equivalent has never been explored in individuals with vision defects not preventing the visual perception of the world, such as refractive errors. Refractive error (i.e., myopia, hyperopia or astigmatism) is a condition in which the refracting system of the eye fails to focus objects sharply on the retina. As a consequence, refractive errors cause blurred vision. We subdivided 84 individuals according to their spherical equivalent refraction into Emmetropes (control individuals without refractive errors) and Ametropes (individuals with refractive errors). Participants performed a vividness task and completed a questionnaire that explored their cognitive style of thinking before their vision was checked by an ophthalmologist. Although results showed that Ametropes had less vivid mental images than Emmetropes, this did not affect the development of their cognitive style of thinking; in fact, Ametropes were able to use both verbal and visual strategies to acquire and retrieve information. Present data are consistent with the hypothesis of equivalence between imagery and perception.

  3. Bodily action penetrates affective perception

    PubMed Central

    Rigutti, Sara; Gerbino, Walter

    2016-01-01

    Fantoni & Gerbino (2014) showed that subtle postural shifts associated with reaching can have a strong hedonic impact and affect how actors experience facial expressions of emotion. Using a novel Motor Action Mood Induction Procedure (MAMIP), they found consistent congruency effects in participants who performed a facial emotion identification task after a sequence of visually-guided reaches: a face perceived as neutral in a baseline condition appeared slightly happy after comfortable actions and slightly angry after uncomfortable actions. However, skeptics about the penetrability of perception (Zeimbekis & Raftopoulos, 2015) would consider such evidence insufficient to demonstrate that observer’s internal states induced by action comfort/discomfort affect perception in a top-down fashion. The action-modulated mood might have produced a back-end memory effect capable of affecting post-perceptual and decision processing, but not front-end perception. Here, we present evidence that performing a facial emotion detection (not identification) task after MAMIP exhibits systematic mood-congruent sensitivity changes, rather than response bias changes attributable to cognitive set shifts; i.e., we show that observer’s internal states induced by bodily action can modulate affective perception. The detection threshold for happiness was lower after fifty comfortable reaches than after fifty uncomfortable ones, while the detection threshold for anger was lower after fifty uncomfortable reaches than after fifty comfortable ones. Action valence induced an overall sensitivity improvement in detecting subtle variations of congruent facial expressions (happiness after positive comfortable actions, anger after negative uncomfortable actions), in the absence of significant response bias shifts. Notably, both comfortable and uncomfortable reaches impacted sensitivity in an approximately symmetric way relative to a baseline inaction condition. All of these findings constitute compelling evidence of a genuine top-down effect on

  4. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    PubMed Central

    Wilson, Amanda H.; Paré, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method We presented vowel–consonant–vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent conditions (Experiment 1; N = 66). In Experiment 2 (N = 20), participants performed a visual-only speech perception task and in Experiment 3 (N = 20) an audiovisual task while having their gaze behavior monitored using eye-tracking equipment. Results In the visual-only condition, increasing image resolution led to monotonic increases in performance, and proficient speechreaders were more affected by the removal of high spatial information than were poor speechreaders. The McGurk effect also increased with increasing visual resolution, although it was less affected by the removal of high-frequency information. Observers tended to fixate on the mouth more in visual-only perception, but gaze toward the mouth did not correlate with accuracy of silent speechreading or the magnitude of the McGurk effect. Conclusions The results suggest that individual differences in silent speechreading and the McGurk effect are not related. This conclusion is supported by differential influences of high-resolution visual information on the 2 tasks and differences in the pattern of gaze. PMID:27537379

  5. Visual imagery without visual perception: lessons from blind subjects

    NASA Astrophysics Data System (ADS)

    Bértolo, Helder

    2014-08-01

    The question regarding visual imagery and visual perception remains an open issue. Many studies have tried to understand whether the two processes share the same mechanisms or whether they are independent, using different neural substrates. Most research has been directed towards the need for activation of primary visual areas during imagery. Here we review some of the works providing evidence for both claims. It seems that studying visual imagery in blind subjects can be used as a way of answering some of those questions, namely whether it is possible to have visual imagery without visual perception. We present results from the work of our group using visual activation in dreams and its relation with the EEG's spectral components, showing that congenitally blind subjects have visual content in their dreams and are able to draw it; furthermore, their Visual Activation Index is negatively correlated with EEG alpha power. This study supports the hypothesis that it is possible to have visual imagery without visual experience.

  6. Emotion and Perception: The Role of Affective Information

    PubMed Central

    Zadra, Jonathan R.; Clore, Gerald L.

    2011-01-01

    Visual perception and emotion are traditionally considered separate domains of study. In this article, however, we review research showing them to be less separable than usually assumed. In fact, emotions routinely affect how and what we see. Fear, for example, can affect low-level visual processes, sad moods can alter susceptibility to visual illusions, and goal-directed desires can change the apparent size of goal-relevant objects. In addition, features of the physical environment, including the apparent steepness of a hill and the distance to the ground from a balcony, can be affected by emotional states. We propose that emotions provide embodied information about the costs and benefits of anticipated action, information that can be used automatically and immediately, circumventing the need to cogitate on the possible consequences of potential actions. Emotions thus provide a strong motivating influence on how the environment is perceived. PMID:22039565

  7. Refractive Errors Affect the Vividness of Visual Mental Images

    PubMed Central

    Palermo, Liana; Nori, Raffaella; Piccardi, Laura; Zeri, Fabrizio; Babino, Antonio; Giusberti, Fiorella; Guariglia, Cecilia

    2013-01-01

    The hypothesis that visual perception and mental imagery are equivalent has never been explored in individuals with vision defects not preventing the visual perception of the world, such as refractive errors. Refractive error (i.e., myopia, hyperopia or astigmatism) is a condition in which the refracting system of the eye fails to focus objects sharply on the retina. As a consequence, refractive errors cause blurred vision. We subdivided 84 individuals according to their spherical equivalent refraction into Emmetropes (control individuals without refractive errors) and Ametropes (individuals with refractive errors). Participants performed a vividness task and completed a questionnaire that explored their cognitive style of thinking before their vision was checked by an ophthalmologist. Although results showed that Ametropes had less vivid mental images than Emmetropes, this did not affect the development of their cognitive style of thinking; in fact, Ametropes were able to use both verbal and visual strategies to acquire and retrieve information. Present data are consistent with the hypothesis of equivalence between imagery and perception. PMID:23755186

  8. Separate visual representations for perception and for visually guided behavior

    NASA Technical Reports Server (NTRS)

    Bridgeman, Bruce

    1989-01-01

    Converging evidence from several sources indicates that two distinct representations of visual space mediate perception and visually guided behavior, respectively. The two maps of visual space follow different rules; spatial values in either one can be biased without affecting the other. Ordinarily the two maps give equivalent responses because both are veridically in register with the world; special techniques are required to pull them apart. One such technique is saccadic suppression: small target displacements during saccadic eye movements are not perceived, though the displacements can change eye movements or pointing to the target. A second way to separate cognitive and motor-oriented maps is with induced motion: a slowly moving frame will make a fixed target appear to drift in the opposite direction, while motor behavior toward the target is unchanged. The same result occurs with stroboscopic induced motion, where the frame jumps abruptly and the target seems to jump in the opposite direction. A third method of separating cognitive and motor maps, requiring no motion of target, background or eye, is the Roelofs effect: a target surrounded by an off-center rectangular frame will appear to be off-center in the direction opposite the frame. Again the effect influences perception, but in half of the subjects it does not influence pointing to the target. This experiment also reveals more characteristics of the maps and their interactions with one another: the motor map apparently has little or no memory, and must be fed from the biased cognitive map if an enforced delay occurs between stimulus presentation and motor response. In designing spatial displays, the results mean that what you see isn't necessarily what you get. Displays must be designed with either perception or visually guided behavior in mind.

  9. Does the Visual Appeal of Instructional Media Affect Learners' Motivation toward Learning?

    ERIC Educational Resources Information Center

    Tomita, Kei

    2018-01-01

    While authors like Mayer (2009) suggest that designers should avoid using visuals for the purpose of attracting learners' interests, some scholars suggest that visuals could influence learners' emotions. In this study the author investigated whether the perception of the visual appeal of instructional handouts affects learners' self-reported…

  10. Smelling directions: Olfaction modulates ambiguous visual motion perception

    PubMed Central

    Kuang, Shenbing; Zhang, Tao

    2014-01-01

    Smells are often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance in the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known whether olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perception. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162

  11. Saccadic Corollary Discharge Underlies Stable Visual Perception

    PubMed Central

    Berman, Rebecca A.; Joiner, Wilsaan M.; Wurtz, Robert H.

    2016-01-01

    Saccadic eye movements direct the high-resolution foveae of our retinas toward objects of interest. With each saccade, the image jumps on the retina, causing a discontinuity in visual input. Our visual perception, however, remains stable. Philosophers and scientists over centuries have proposed that visual stability depends upon an internal neuronal signal that is a copy of the neuronal signal driving the eye movement, now referred to as a corollary discharge (CD) or efference copy. In the Old World monkey, such a CD circuit for saccades has been identified extending from the superior colliculus through MD thalamus to frontal cortex, but there is little evidence that this circuit actually contributes to visual perception. We tested the influence of this CD circuit on visual perception by first training macaque monkeys to report their perceived eye direction, and then reversibly inactivating the CD as it passes through the thalamus. We found that the monkey's perception changed; during CD inactivation, there was a difference between where the monkey perceived its eyes to be directed and where they were actually directed. Perception and saccade were decoupled. We established that the perceived eye direction at the end of the saccade was not derived from proprioceptive input from eye muscles, and was not altered by contextual visual information. We conclude that the CD provides internal information contributing to the brain's creation of perceived visual stability. More specifically, the CD might provide the internal saccade vector used to unite separate retinal images into a stable visual scene. SIGNIFICANCE STATEMENT Visual stability is one of the most remarkable aspects of human vision. The eyes move rapidly several times per second, displacing the retinal image each time. The brain compensates for this disruption, keeping our visual perception stable. A major hypothesis explaining this stability invokes a signal within the brain, a corollary discharge, that informs

  12. Varieties of cognitive penetration in visual perception.

    PubMed

    Vetter, Petra; Newen, Albert

    2014-07-01

    Is our perceptual experience a veridical representation of the world or is it a product of our beliefs and past experiences? Cognitive penetration describes the influence of higher level cognitive factors on perceptual experience and has been a debated topic in philosophy of mind and cognitive science. Here, we focus on visual perception, particularly early vision, and how it is affected by contextual expectations and memorized cognitive contents. We argue for cognitive penetration based on recent empirical evidence demonstrating contextual and top-down influences on early visual processes. On the basis of a perceptual model, we propose different types of cognitive penetration depending on the processing level on which the penetration happens and depending on where the penetrating influence comes from. Our proposal has two consequences: (1) the traditional controversy on whether cognitive penetration occurs or not is ill posed, and (2) a clear-cut perception-cognition boundary cannot be maintained. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    PubMed Central

    Alm, Magnus; Behne, Dawn

    2015-01-01

    Gender and age have been found to affect adults’ audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20–30 years) and middle-aged adults (50–60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. Contrastingly, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females’ general AV perceptual strategy. Although young females’ speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females’ AV perceptual strategy toward more visually dominated responses. PMID:26236274

  14. Contributions of visual and embodied expertise to body perception.

    PubMed

    Reed, Catherine L; Nyberg, Andrew A; Grubb, Jefferson D

    2012-01-01

    Recent research has demonstrated that our perception of the human body differs from that of inanimate objects. This study investigated whether the visual perception of the human body differs from that of other animate bodies and, if so, whether that difference could be attributed to visual experience and/or embodied experience. To dissociate differential effects of these two types of expertise, inversion effects (recognition of inverted stimuli is slower and less accurate than recognition of upright stimuli) were compared for two types of bodies in postures that varied in typicality: humans in human postures (human-typical), humans in dog postures (human-atypical), dogs in dog postures (dog-typical), and dogs in human postures (dog-atypical). Inversion disrupts global configural processing. Relative changes in the size and presence of inversion effects reflect changes in visual processing. Both visual and embodiment expertise predict larger inversion effects for human over dog postures because we see humans more and we have experience producing human postures. However, our design that crosses body type and typicality leads to distinct predictions for visual and embodied experience. Visual expertise predicts an interaction between typicality and orientation: greater inversion effects should be found for typical over atypical postures regardless of body type. Alternatively, embodiment expertise predicts a body, typicality, and orientation interaction: larger inversion effects should be found for all human postures but only for atypical dog postures because humans can map their bodily experience onto these postures. Accuracy data supported embodiment expertise with the three-way interaction. However, response-time data supported contributions of visual expertise with larger inversion effects for typical over atypical postures. Thus, both types of expertise affect the visual perception of bodies.

  15. Perceptions of the Visually Impaired toward Pursuing Geography Courses and Majors in Higher Education

    ERIC Educational Resources Information Center

    Murr, Christopher D.; Blanchard, R. Denise

    2011-01-01

    Advances in classroom technology have lowered barriers for the visually impaired to study geography, yet few participate. Employing stereotype threat theory, we examined whether beliefs held by the visually impaired affect perceptions toward completing courses and majors in visually oriented disciplines. A test group received a low-level threat…

  16. Reward priming eliminates color-driven affect in perception.

    PubMed

    Hu, Kesong

    2018-01-03

    Brain and behavior evidence suggests that colors have distinct affective properties. Here, we investigated how reward influences color-driven affect in perception. In Experiment 1, we assessed competition between blue and red patches during a temporal-order judgment (TOJ) across a range of stimulus onset asynchronies (SOAs). During value reinforcement, reward was linked to either blue (version 1) or red (version 2). The same stimuli then served as test stimuli in the following unrewarded, unspeeded TOJ task. Our analysis showed that blue patches were consistently seen as occurring first, even when they objectively appeared second at short SOAs. This accelerated perception of blue over red was disrupted by prior primes related to reward (vs. neutral) but not by perceptual (blue vs. red) priming. Experiment 2 replicated the findings of Experiment 1 while uncoupling action and stimulus values. These results are consistent with the blue-approach and red-avoidance motivation hypothesis and highlight the active nature of the association between reward priming and color processing. Together, the present study implies a link between reward and color affect and contributes to the understanding of how reward influences color affect in visual processing.
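    Temporal-order judgments across SOAs in such designs are typically summarized with a psychometric function, where a shift of the point of subjective simultaneity (PSS) captures effects like the accelerated perception of one color. A minimal sketch using a logistic form; the function name, parameter values, and the logistic choice are illustrative, not taken from the study:

```python
import math

def p_blue_first(soa_ms, pss_ms=0.0, slope_ms=15.0):
    """Logistic psychometric function for a two-alternative TOJ.
    soa_ms > 0 means the blue patch physically appeared first; pss_ms is
    the SOA at which 'blue first' and 'red first' reports are equally
    likely; slope_ms sets how sharply judgments change around the PSS."""
    return 1.0 / (1.0 + math.exp(-(soa_ms - pss_ms) / slope_ms))

# A negative PSS models prior entry for blue: at SOA = 0 (objective
# simultaneity) blue is still reported first more than half the time.
p_neutral = p_blue_first(0.0, pss_ms=0.0)
p_shifted = p_blue_first(0.0, pss_ms=-10.0)
```

Fitting such a curve per condition and comparing the fitted PSS values is one standard way to quantify the perceptual acceleration the abstract describes.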

  17. Visual adaptation and face perception.

    PubMed

    Webster, Michael A; MacLeod, Donald I A

    2011-06-12

    The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces.

  18. Turbidity in oil-in-water-emulsions - Key factors and visual perception.

    PubMed

    Linke, C; Drusch, S

    2016-11-01

    The aim of the present study is to systematically describe the factors affecting turbidity in beverage emulsions and to gain a better understanding of the visual perception of turbidity. The sensory evaluation of the human visual perception of turbidity showed that humans are most sensitive to turbidity differences between two samples in the range between 1000 and 1500 NTU (ratio) (nephelometric turbidity units). At very high turbidity, differences of more than 2000 NTU (ratio) were needed for two samples to be perceived as significantly different. Particle size was the most important factor affecting turbidity: a maximum turbidity occurred at a mean volume-surface diameter of 0.2 μm for the oil droplets. Additional parameters were the refractive index, the composition of the aqueous phase and the presence of excess emulsifier. At a concentration typical of a beverage emulsion, a change in the refractive index of the oil phase may alter turbidity by up to 30%. With this knowledge of the visual perception of turbidity and its determining factors, turbidity can be tailored in product development according to customer requirements and in quality control to define acceptable variations in optical appearance. Copyright © 2016. Published by Elsevier Ltd.
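    The droplet-size statistic used in this line of work, the volume-surface (Sauter) mean diameter d32, has a standard definition that can be computed directly from a droplet-size distribution. A minimal sketch; the function name and the example size classes are illustrative:

```python
def sauter_mean_diameter(diameters_um, counts):
    """Volume-surface (Sauter) mean diameter of a droplet population:
    d32 = sum(n_i * d_i**3) / sum(n_i * d_i**2).
    It weights the distribution toward droplets carrying most of the
    volume per unit surface area, the size measure the study associates
    with maximum turbidity (around 0.2 um)."""
    num = sum(n * d ** 3 for d, n in zip(diameters_um, counts))
    den = sum(n * d ** 2 for d, n in zip(diameters_um, counts))
    return num / den

# For a monodisperse emulsion, d32 equals the droplet diameter itself;
# for a mixed population it lies between the smallest and largest class.
d32 = sauter_mean_diameter([0.1, 0.2, 0.4], [500, 300, 50])
```

Because d32 is volume-weighted, a few large droplets can pull it well above the number-average diameter, which is why it is the relevant summary statistic when relating droplet size to light scattering and turbidity.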

  19. Cortical visual prostheses: from microstimulation to functional percept

    NASA Astrophysics Data System (ADS)

    Najarpour Foroushani, Armin; Pack, Christopher C.; Sawan, Mohamad

    2018-04-01

    Cortical visual prostheses are intended to restore vision by targeted electrical stimulation of the visual cortex. The perception of spots of light, called phosphenes, resulting from microstimulation of the visual pathway suggests the possibility of creating meaningful percepts made of phosphenes. However, to date, electrical stimulation of V1 has not resulted in the perception of phosphenated images that go beyond punctate spots of light. In this review, we summarize the clinical and experimental progress that has been made in generating phosphenes and modulating their associated perceptual characteristics in human and macaque primary visual cortex (V1). We focus specifically on the effects of different microstimulation parameters on perception, and we analyse key challenges facing the generation of meaningful artificial percepts. Finally, we propose solutions to these challenges based on the application of supervised learning of population codes for spatial stimulation of visual cortex.

  20. Reduced sensitivity for visual textures affects judgments of shape-from-shading and step-climbing behaviour in older adults.

    PubMed

    Schofield, Andrew J; Curzon-Jones, Benjamin; Hollands, Mark A

    2017-02-01

    Falls on stairs are a major hazard for older adults. Visual decline in normal ageing can affect step-climbing ability, altering gait and reducing toe clearance. Here we show that a loss of fine-grained visual information associated with age can affect the perception of surface undulations in patterned surfaces. We go on to show that such cues affect the limb trajectories of young adults, but due to their lack of sensitivity, not that of older adults. Interestingly neither the perceived height of a step nor conscious awareness is altered by our visual manipulation, but stepping behaviour is, suggesting that the influence of shape perception on stepping behaviour is via the unconscious, action-centred, dorsal visual pathway.

  1. Rubber Hand Illusion Affects Joint Angle Perception

    PubMed Central

    Butz, Martin V.; Kutter, Esther F.; Lorenz, Corinna

    2014-01-01

    The Rubber Hand Illusion (RHI) is a well-established experimental paradigm. It has been shown that the RHI can affect hand location estimates, arm and hand motion towards goals, the subjective visual appearance of the own hand, and the feeling of body ownership. Several studies also indicate that the peri-hand space is partially remapped around the rubber hand. Nonetheless, the question remains if and to what extent the RHI can affect the perception of other body parts. In this study we ask if the RHI can alter the perception of the elbow joint. Participants had to adjust an angular representation on a screen according to their proprioceptive perception of their own elbow joint angle. The results show that the RHI does indeed alter the elbow joint estimation, increasing the agreement with the position and orientation of the artificial hand. Thus, the results show that the brain does not only adjust the perception of the hand in body-relative space, but it also modifies the perception of other body parts. In conclusion, we propose that the brain continuously strives to maintain a consistent internal body image and that this image can be influenced by the available sensory information sources, which are mediated and mapped onto each other by means of a postural, kinematic body model. PMID:24671172

  2. The role of human ventral visual cortex in motion perception

    PubMed Central

    Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene

    2013-01-01

    Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030

  3. The Affective Bases of Risk Perception: Negative Feelings and Stress Mediate the Relationship between Mental Imagery and Risk Perception.

    PubMed

    Sobkow, Agata; Traczyk, Jakub; Zaleskiewicz, Tomasz

    2016-01-01

    Recent research has documented that affect plays a crucial role in risk perception. When no information about numerical risk estimates is available (e.g., probability of loss or magnitude of consequences), people may rely on positive and negative affect toward perceived risk. However, determinants of affective reactions to risks are poorly understood. In a series of three experiments, we addressed the question of whether and to what degree mental imagery eliciting negative affect and stress influences risk perception. In each experiment, participants were instructed to visualize consequences of risk taking and to rate riskiness. In Experiment 1, participants who imagined negative risk consequences reported more negative affect and perceived risk as higher compared to the control condition. In Experiment 2, we found that this effect was driven by affect elicited by mental imagery rather than its vividness and intensity. In this study, imagining positive risk consequences led to lower perceived risk than visualizing negative risk consequences. Finally, we tested the hypothesis that negative affect related to higher perceived risk was caused by negative feelings of stress. In Experiment 3, we introduced risk-irrelevant stress to show that participants in the stress condition rated perceived risk as higher in comparison to the control condition. This experiment showed that higher ratings of perceived risk were influenced by psychological stress. Taken together, our results demonstrate that affect-laden mental imagery dramatically changes risk perception through negative affect (i.e., psychological stress).

  4. The Affective Bases of Risk Perception: Negative Feelings and Stress Mediate the Relationship between Mental Imagery and Risk Perception

    PubMed Central

    Sobkow, Agata; Traczyk, Jakub; Zaleskiewicz, Tomasz

    2016-01-01

    Recent research has documented that affect plays a crucial role in risk perception. When no information about numerical risk estimates is available (e.g., probability of loss or magnitude of consequences), people may rely on positive and negative affect toward perceived risk. However, determinants of affective reactions to risks are poorly understood. In a series of three experiments, we addressed the question of whether and to what degree mental imagery eliciting negative affect and stress influences risk perception. In each experiment, participants were instructed to visualize consequences of risk taking and to rate riskiness. In Experiment 1, participants who imagined negative risk consequences reported more negative affect and perceived risk as higher compared to the control condition. In Experiment 2, we found that this effect was driven by affect elicited by mental imagery rather than its vividness and intensity. In this study, imagining positive risk consequences led to lower perceived risk than visualizing negative risk consequences. Finally, we tested the hypothesis that negative affect related to higher perceived risk was caused by negative feelings of stress. In Experiment 3, we introduced risk-irrelevant stress to show that participants in the stress condition rated perceived risk as higher in comparison to the control condition. This experiment showed that higher ratings of perceived risk were influenced by psychological stress. Taken together, our results demonstrate that affect-laden mental imagery dramatically changes risk perception through negative affect (i.e., psychological stress). PMID:27445901

  5. Subliminal perception of complex visual stimuli.

    PubMed

    Ionescu, Mihai Radu

    2016-01-01

Rationale: Unconscious perception of various sensory modalities is an active subject of research, though its function and effect on behavior are uncertain. Objective: The present study assessed whether unconscious visual perception can occur with more complex visual stimuli than previously utilized. Methods and Results: Videos containing slideshows of indifferent complex images with interspersed frames of interest of various durations were presented to 24 healthy volunteers. Perception of the stimulus was evaluated with a forced-choice questionnaire, while awareness was quantified by self-assessment on a modified awareness scale annexed to each question, with four categories of awareness. At a stimulus duration of 16.66 ms, conscious awareness was not possible and answers regarding the stimulus were random. At 50 ms, nonrandom answers were coupled with no self-reported awareness, suggesting unconscious perception of the stimulus. At longer stimulus durations, significantly correct answers were coupled with some degree of conscious awareness. Discussion: At 50 ms, unconscious perception is possible even with complex visual stimuli. Further studies are recommended, focusing on stimulus durations between 16.66 and 50 ms.
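
    The durations above map directly onto display refresh frames: 16.66 ms is one frame of a standard 60 Hz monitor, which is why it is the shortest presentation such a setup allows, and 50 ms is three frames. A minimal sketch of that arithmetic (the 60 Hz refresh rate and helper function are illustrative assumptions, not details from the study):

```python
# One frame at 60 Hz lasts 1000/60 ≈ 16.67 ms; three frames ≈ 50 ms.
REFRESH_HZ = 60
FRAME_MS = 1000 / REFRESH_HZ

def duration_ms(n_frames: int) -> float:
    """Presentation time, in milliseconds, of n_frames at REFRESH_HZ."""
    return n_frames * FRAME_MS

one_frame = duration_ms(1)     # shortest possible stimulus, ≈ 16.67 ms
three_frames = duration_ms(3)  # the 50 ms condition
```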

  6. Toward Model Building for Visual Aesthetic Perception

    PubMed Central

    Lughofer, Edwin; Zeng, Xianyi

    2017-01-01

Several models of visual aesthetic perception have been proposed in recent years. Such models have drawn on investigations into the neural underpinnings of visual aesthetics, utilizing neurophysiological and brain imaging techniques including functional magnetic resonance imaging, magnetoencephalography, and electroencephalography. The neural mechanisms underlying the aesthetic perception of the visual arts have been explained from the perspectives of neuropsychology, brain and cognitive science, informatics, and statistics. Although corresponding models have been constructed, the majority contain elements that are difficult to simulate or quantify using simple mathematical functions. In this review, we discuss the hypotheses, conceptions, and structures of six typical models of human aesthetic appreciation in the visual domain: the neuropsychological, information processing, mirror, and quartet models, and two hierarchical feed-forward layered models. Additionally, the neural foundation of aesthetic perception, appreciation, or judgement for each model is summarized. The development of a unified framework for the neurobiological mechanisms underlying the aesthetic perception of visual art, and the validation of this framework via mathematical simulation, is an interesting challenge in neuroaesthetics research. This review aims to provide information regarding the most promising proposals for bridging the gap between visual information processing and the brain activity involved in aesthetic appreciation. PMID:29270194

  7. Visual adaptation and face perception

    PubMed Central

    Webster, Michael A.; MacLeod, Donald I. A.

    2011-01-01

    The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555

  8. Visual Motion Perception and Visual Attentive Processes.

    DTIC Science & Technology

    1988-04-01

88-0551. Visual Motion Perception and Visual Attentive Processes. George Sperling, New York University. Grant AFOSR 85-0364... Cited works include: Sperling, HIPS: A Unix-based image processing system. Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347. (HIPS is the Human Information Processing Laboratory's Image Processing System.) 1985: van Santen, Jan P. H., and George Sperling, Elaborated Reichardt detectors. Journal of the Optical

  9. Do you see what I see? The difference between dog and human visual perception may affect the outcome of experiments.

    PubMed

    Pongrácz, Péter; Ujvári, Vera; Faragó, Tamás; Miklósi, Ádám; Péter, András

    2017-07-01

The visual sense of dogs differs in many respects from that of humans. Unfortunately, authors rarely take dog-human differences in visual perception explicitly into consideration when designing their experiments. With an image manipulation program we altered stationary images according to present knowledge about dog vision. Besides the effect of dogs' dichromatic vision, the software also simulates their lower visual acuity and brightness discrimination. Fifty adult humans were tested with pictures showing a female experimenter pointing, gazing, or glancing to the left or right side. Half of the pictures were shown after being altered to a setting that approximated dog vision. Participants had difficulty determining the direction of glancing when the pictures were in dog-vision mode. Glances in the dog-vision setting were followed less correctly and with slower response times than other cues. Our results are the first to show the visual performance of humans under circumstances that model how dogs' weaker vision would affect their responses in an ethological experiment. We urge researchers to take into consideration the differences between the perceptual abilities of dogs and humans by developing visual stimuli that are better suited to dogs' visual capabilities. Copyright © 2017 Elsevier B.V. All rights reserved.
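
    The kind of image manipulation described can be roughly illustrated as follows. This is not the authors' program: the red-green channel merge (approximating dichromacy) and the block-averaging blur factor (approximating lower acuity) are assumptions for demonstration only.

```python
import numpy as np

def dog_vision(img: np.ndarray, acuity_factor: int = 4) -> np.ndarray:
    """Crude 'dog view' of an RGB image given as a float array (H, W, 3) in [0, 1]."""
    out = img.copy()
    # Dichromacy: collapse the red-green distinction into a single signal.
    rg = out[..., :2].mean(axis=-1)
    out[..., 0] = rg
    out[..., 1] = rg
    # Lower acuity: block-average and re-expand (a crude blur).
    h, w = out.shape[:2]
    f = acuity_factor
    small = out[:h // f * f, :w // f * f]
    small = small.reshape(h // f, f, w // f, f, 3).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, f, axis=0), f, axis=1)
```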

  10. Visual motion integration for perception and pursuit

    NASA Technical Reports Server (NTRS)

    Stone, L. S.; Beutter, B. R.; Lorenceau, J.

    2000-01-01

    To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.

  11. Visual perception and imagery: a new molecular hypothesis.

    PubMed

    Bókkon, I

    2009-05-01

Here, we put forward a redox molecular hypothesis about the natural biophysical substrate of visual perception and visual imagery. This hypothesis is based on the redox and bioluminescent processes of neuronal cells in retinotopically organized cytochrome oxidase-rich visual areas. Our hypothesis is in line with the functional roles of reactive oxygen and nitrogen species in living cells, which are not part of a haphazard process but rather a strictly regulated mechanism used in signaling pathways. We point out that there is a direct relationship between neuronal activity and the biophoton emission process in the brain. Electrical and biochemical processes in the brain represent sensory information from the external world. During encoding or retrieval of information, electrical signals of neurons can be converted into synchronized biophoton signals by bioluminescent radical and non-radical processes. Therefore, information in the brain appears not only as an electrical (chemical) signal but also as a regulated biophoton (weak optical) signal inside neurons. During visual perception, the topological distribution of photon stimuli on the retina is represented by electrical neuronal activity in retinotopically organized visual areas. These retinotopic electrical signals in visual neurons can be converted into synchronized biophoton signals by radical and non-radical processes in retinotopically organized mitochondria-rich areas. As a result, regulated bioluminescent biophotons can create intrinsic pictures (depictive representations) in retinotopically organized cytochrome oxidase-rich visual areas during visual imagery and visual perception. Long-term visual memory is interpreted as epigenetic information regulated by free radicals and redox processes.
This hypothesis does not claim to solve the secret of consciousness, but proposes that the evolution of higher levels of complexity made the intrinsic picture representation of the external visual world possible by regulated

  12. Differential temporal dynamics during visual imagery and perception.

    PubMed

Dijkstra, Nadine; Mostert, Pim; de Lange, Floris P.; Bosch, Sander; van Gerven, Marcel A. J.

    2018-05-29

    Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. Firstly, we show that, compared to perception, imagery decoding becomes significant later and representations at the start of imagery already overlap with later time points. This suggests that during imagery, the entire visual representation is activated at once or that there are large differences in the timing of imagery between trials. Secondly, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 gets reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights for our understanding of the neural mechanisms of visual imagery. © 2018, Dijkstra et al.

  13. Visual Perception of Force: Comment on White (2012)

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.

    2012-01-01

    White (2012) proposed that kinematic features in a visual percept are matched to stored representations containing information regarding forces (based on prior haptic experience) and that information in the matched, stored representations regarding forces is then incorporated into visual perception. Although some elements of White's (2012) account…

  14. Perception, Cognition, and Effectiveness of Visualizations with Applications in Science and Engineering

    NASA Astrophysics Data System (ADS)

    Borkin, Michelle A.

Visualization is a powerful tool for data exploration and analysis. With data ever-increasing in quantity and becoming integrated into our daily lives, having effective visualizations is necessary. But how does one design an effective visualization? To answer this question, we need to understand how humans perceive, process, and understand visualizations. Through visualization evaluation studies we can gain deeper insight into the basic perception and cognition theory of visualizations, both through domain-specific case studies and generalized laboratory experiments. This dissertation presents the results of four evaluation studies, each of which contributes new knowledge to the theory of perception and cognition of visualizations. The results of these studies include a deeper, clearer understanding of how color, data representation dimensionality, spatial layout, and visual complexity affect a visualization's effectiveness, as well as how visualization types and visual attributes affect the memorability of a visualization. We first present the results of two domain-specific case study evaluations. The first study is in the field of biomedicine, in which we developed a new heart disease diagnostic tool and conducted a study to evaluate the effectiveness of 2D versus 3D data representations as well as color maps. In the second study, we developed a new visualization tool for filesystem provenance data with applications in computer science and the sciences more broadly. We additionally developed a new time-based hierarchical node grouping method. We then conducted a study to evaluate the effectiveness of the new tool with its radial layout versus the conventional node-link diagram, and the new node grouping method. Finally, we discuss the results of two generalized studies designed to understand what makes a visualization memorable. 
In the first evaluation we focused on visualization memorability and conducted an online study using Amazon's Mechanical Turk with

  15. Learning what to expect (in visual perception)

    PubMed Central

    Seriès, Peggy; Seitz, Aaron R.

    2013-01-01

Expectations are known to greatly affect our experience of the world. A growing theory in computational neuroscience is that perception can be successfully described using Bayesian inference models and that the brain is “Bayes-optimal” under some constraints. In this context, expectations are particularly interesting, because they can be viewed as prior beliefs in the statistical inference process. A number of questions remain unsolved, however, for example: How fast do priors change over time? Are there limits in the complexity of the priors that can be learned? How do an individual’s priors compare to the true scene statistics? Can we unlearn priors that are thought to correspond to natural scene statistics? Where and what are the neural substrates of priors? Focusing on the perception of visual motion, we here review recent studies from our laboratories and others addressing these issues. We discuss how these data on motion perception fit within the broader literature on perceptual Bayesian priors, perceptual expectations, and statistical and perceptual learning and review the possible neural basis of priors. PMID:24187536
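
    One concrete instance of the Bayesian account sketched above is the "slow speed" prior for visual motion: combining a Gaussian prior centered on zero speed with a noisy measurement predicts that less reliable input is perceived as slower. A minimal sketch with illustrative numbers (none taken from the article):

```python
def posterior_speed(measured: float, sigma_likelihood: float,
                    sigma_prior: float) -> float:
    """Posterior mean speed for a Gaussian prior centered on zero.

    The posterior mean is a reliability-weighted compromise between
    the measurement and the prior mean (zero speed).
    """
    w = sigma_prior**2 / (sigma_prior**2 + sigma_likelihood**2)
    return w * measured

# A noisier measurement (e.g. at low contrast) is pulled harder toward
# the slow prior, so the same physical speed appears slower.
reliable = posterior_speed(10.0, 1.0, 2.0)  # weight 0.8 -> 8.0
noisy = posterior_speed(10.0, 4.0, 2.0)     # weight 0.2 -> 2.0
```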

  16. Visual Arts Teaching in Kindergarten through 3rd-Grade Classrooms in the UAE: Teacher Profiles, Perceptions, and Practices

    ERIC Educational Resources Information Center

    Buldu, Mehmet; Shaban, Mohamed S.

    2010-01-01

    This study portrayed a picture of kindergarten through 3rd-grade teachers who teach visual arts, their perceptions of the value of visual arts, their visual arts teaching practices, visual arts experiences provided to young learners in school, and major factors and/or influences that affect their teaching of visual arts. The sample for this study…

  17. Integration of visual and motion cues for simulator requirements and ride quality investigation. [computerized simulation of aircraft landing, visual perception of aircraft pilots

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1975-01-01

Preliminary tests and evaluations of pilot performance during landing (flight paths) using computer-generated images (video tapes) are presented. Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical study abstracts on human response to gravity variations without visual cues, the effects of acceleration stimuli on the semicircular canals, neurons affecting eye movements, and vestibular tests.

  18. Statistical regularities in art: Relations with visual coding and perception.

    PubMed

    Graham, Daniel J; Redies, Christoph

    2010-07-21

    Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study. Copyright 2010 Elsevier Ltd. All rights reserved.
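
    As one small example of the statistics these studies measure, a classic image regularity shared by natural scenes and many artworks is the slope of the radially averaged log amplitude spectrum (natural scenes typically show slopes near -1). The sketch below computes that statistic; the synthetic white-noise image merely stands in for an artwork, and nothing here comes from the article itself.

```python
import numpy as np

def spectral_slope(img: np.ndarray) -> float:
    """Least-squares slope of log amplitude vs. log spatial frequency."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    # Radially average the amplitude spectrum.
    radial = np.bincount(r.ravel(), amp.ravel()) / np.bincount(r.ravel())
    f = np.arange(1, min(h, w) // 2)  # skip the DC component
    return float(np.polyfit(np.log(f), np.log(radial[f]), 1)[0])

# White noise has a flat spectrum, so its slope is near zero;
# natural images and many artworks fall off much more steeply.
slope = spectral_slope(np.random.default_rng(0).standard_normal((64, 64)))
```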

  19. Auditory emotional cues enhance visual perception.

    PubMed

    Zeelenberg, René; Bocanegra, Bruno R

    2010-04-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually we replicated the emotion-induced impairment found in other studies. Our results suggest emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.

  20. Do Visual Illusions Probe the Visual Brain?: Illusions in Action without a Dorsal Visual Stream

    ERIC Educational Resources Information Center

    Coello, Yann; Danckert, James; Blangero, Annabelle; Rossetti, Yves

    2007-01-01

    Visual illusions have been shown to affect perceptual judgements more so than motor behaviour, which was interpreted as evidence for a functional division of labour within the visual system. The dominant perception-action theory argues that perception involves a holistic processing of visual objects or scenes, performed within the ventral,…

  1. Does visual attention drive the dynamics of bistable perception?

    PubMed

    Dieter, Kevin C; Brascamp, Jan; Tadin, Duje; Blake, Randolph

    2016-10-01

How does attention interact with incoming sensory information to determine what we perceive? One domain in which this question has received serious consideration is that of bistable perception: a captivating class of phenomena that involves fluctuating visual experience in the face of physically unchanging sensory input. Here, some investigations have yielded support for the idea that attention alone determines what is seen, while others have implicated entirely attention-independent processes in driving alternations during bistable perception. We review the body of literature addressing this divide and conclude that in fact both sides are correct, depending on the form of bistable perception being considered. Converging evidence suggests that visual attention is required for alternations in the type of bistable perception called binocular rivalry, while alternations during other types of bistable perception appear to continue without requiring attention. We discuss some implications of this differential effect of attention for our understanding of the mechanisms underlying bistable perception, and examine how these mechanisms operate during our everyday visual experiences.

  2. Visual and Auditory Components in the Perception of Asynchronous Audiovisual Speech

    PubMed Central

    Alcalá-Quintana, Rocío

    2015-01-01

Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8/16 observers, and was found to be nonidentifiable, which renders its parameter estimates uninterpretable. The independent-channels model captured asymmetric data, was rejected for only 1/16 observers, and identified how sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal. PMID:27551361
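
    The independent-channels idea can be sketched as a toy simulation: each modality has its own arrival latency, and the observer reports "synchronous" when the noisy arrival-time difference falls inside a decision window. All parameter values below are illustrative assumptions, not fitted estimates from the study.

```python
import random

def judged_synchronous(soa_ms: float, audio_lat: float = 60.0,
                       visual_lat: float = 90.0, noise_sd: float = 20.0,
                       window: float = 100.0, rng=random) -> bool:
    """One trial: compare the noisy arrival difference to a decision window."""
    arrival_diff = soa_ms + visual_lat - audio_lat + rng.gauss(0.0, noise_sd)
    return abs(arrival_diff) <= window

# Sweeping the audiovisual offset traces out a psychometric function;
# at zero offset most trials here fall inside the window.
rng = random.Random(0)
rate_at_zero = sum(judged_synchronous(0.0, rng=rng) for _ in range(1000)) / 1000
```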

  3. Disentangling visual imagery and perception of real-world objects

    PubMed Central

    Lee, Sue-Hyun; Kravitz, Dwight J.; Baker, Chris I.

    2011-01-01

    During mental imagery, visual representations can be evoked in the absence of “bottom-up” sensory input. Prior studies have reported similar neural substrates for imagery and perception, but studies of brain-damaged patients have revealed a double dissociation with some patients showing preserved imagery in spite of impaired perception and others vice versa. Here, we used fMRI and multi-voxel pattern analysis to investigate the specificity, distribution, and similarity of information for individual seen and imagined objects to try and resolve this apparent contradiction. In an event-related design, participants either viewed or imagined individual named object images on which they had been trained prior to the scan. We found that the identity of both seen and imagined objects could be decoded from the pattern of activity throughout the ventral visual processing stream. Further, there was enough correspondence between imagery and perception to allow discrimination of individual imagined objects based on the response during perception. However, the distribution of object information across visual areas was strikingly different during imagery and perception. While there was an obvious posterior-anterior gradient along the ventral visual stream for seen objects, there was an opposite gradient for imagined objects. Moreover, the structure of representations (i.e. the pattern of similarity between responses to all objects) was more similar during imagery than perception in all regions along the visual stream. These results suggest that while imagery and perception have similar neural substrates, they involve different network dynamics, resolving the tension between previous imaging and neuropsychological studies. PMID:22040738

  4. The development of visual speech perception in Mandarin Chinese-speaking children.

    PubMed

    Chen, Liang; Lei, Jianghua

    2017-01-01

    The present study aimed to investigate the development of visual speech perception in Chinese-speaking children. Children aged 7, 13 and 16 were asked to visually identify both consonant and vowel sounds in Chinese as quickly and accurately as possible. Results revealed (1) an increase in accuracy of visual speech perception between ages 7 and 13 after which the accuracy rate either stagnates or drops; and (2) a U-shaped development pattern in speed of perception with peak performance in 13-year olds. Results also showed that across all age groups, the overall levels of accuracy rose, whereas the response times fell for simplex finals, complex finals and initials. These findings suggest that (1) visual speech perception in Chinese is a developmental process that is acquired over time and is still fine-tuned well into late adolescence; (2) factors other than cross-linguistic differences in phonological complexity and degrees of reliance on visual information are involved in development of visual speech perception.

  5. Perception of Visual Speed While Moving

    ERIC Educational Resources Information Center

    Durgin, Frank H.; Gigone, Krista; Scott, Rebecca

    2005-01-01

    During self-motion, the world normally appears stationary. In part, this may be due to reductions in visual motion signals during self-motion. In 8 experiments, the authors used magnitude estimation to characterize changes in visual speed perception as a result of biomechanical self-motion alone (treadmill walking), physical translation alone…

  6. Exploring the Link between Visual Perception, Visual-Motor Integration, and Reading in Normal Developing and Impaired Children using DTVP-2.

    PubMed

    Bellocchi, Stéphanie; Muneaux, Mathilde; Huau, Andréa; Lévêque, Yohana; Jover, Marianne; Ducrot, Stéphanie

    2017-08-01

Reading is known to be primarily a linguistic task. However, to successfully decode written words, children also need to develop good visual-perception skills. Furthermore, motor skills are implicated in letter recognition and reading acquisition. Three studies have been designed to determine the link between reading, visual perception, and visual-motor integration using the Developmental Test of Visual Perception version 2 (DTVP-2). Study 1 tests how visual perception and visual-motor integration in kindergarten predict reading outcomes in Grade 1 in typically developing children. Study 2 is aimed at finding out whether these skills can be seen as clinical markers in dyslexic children (DD). Study 3 determines whether visual-motor integration and motor-reduced visual perception can distinguish DD children according to whether or not they exhibit developmental coordination disorder (DCD). Results showed that phonological awareness and visual-motor integration predicted reading outcomes one year later. DTVP-2 demonstrated similarities and differences in visual-motor integration and motor-reduced visual perception between children with DD, DCD, and both of these deficits. DTVP-2 is a suitable tool to investigate links between visual perception, visual-motor integration, and reading, and to differentiate cognitive profiles of children with developmental disabilities (i.e., DD, DCD, and comorbid children). Copyright © 2017 John Wiley & Sons, Ltd.

  7. [Perception of physiological visual illusions by individuals with schizophrenia].

    PubMed

    Ciszewski, Słowomir; Wichowicz, Hubert Michał; Żuk, Krzysztof

    2015-01-01

Visual perception by individuals with schizophrenia has not been extensively researched. The focus of this review is the perception of physiological visual illusions by patients with schizophrenia, where differences in perception have been reported in a small number of studies. The increased or decreased susceptibility of these patients to various illusions seems to be unconnected to the location of origin in the visual apparatus, as is also the case for illusions in other modalities. The susceptibility of patients with schizophrenia to haptic illusions has not yet been investigated, although the need for such investigation is clear. The emerging picture is that some individuals with schizophrenia are "resistant" to some of the illusions and are able to assess visual phenomena more "rationally", yet certain illusions (e.g., the Müller-Lyer illusion) are perceived more intensely. Disturbances in the perception of visual illusions have neither been classified as possible diagnostic indicators of a dangerous mental condition, nor included in the endophenotype of schizophrenia. Although the relevant data are sparse, the ability to replicate the results is limited, and the research model lacks a "gold standard", some preliminary conclusions may be drawn. There are indications that disturbances in visual perception are connected to the extent of disorganization, poor initial social functioning, poor prognosis, and the types of schizophrenia described as neurodevelopmental. Patients with schizophrenia usually fail to perceive those illusions that require volitional controlled attention, and show a lack of sensitivity to the contrast between shape and background.

  8. Does visual attention drive the dynamics of bistable perception?

    PubMed Central

    Dieter, Kevin C.; Brascamp, Jan; Tadin, Duje; Blake, Randolph

    2016-01-01

    How does attention interact with incoming sensory information to determine what we perceive? One domain in which this question has received serious consideration is that of bistable perception: a captivating class of phenomena that involves fluctuating visual experience in the face of physically unchanging sensory input. Here, some investigations have yielded support for the idea that attention alone determines what is seen, while others have implicated entirely attention-independent processes in driving alternations during bistable perception. We review the body of literature addressing this divide and conclude that in fact both sides are correct – depending on the form of bistable perception being considered. Converging evidence suggests that visual attention is required for alternations in the type of bistable perception called binocular rivalry, while alternations during other types of bistable perception appear to continue without requiring attention. We discuss some implications of this differential effect of attention for our understanding of the mechanisms underlying bistable perception, and examine how these mechanisms operate during our everyday visual experiences. PMID:27230785

  9. Audio-visual affective expression recognition

    NASA Astrophysics Data System (ADS)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

    Automatic affective expression recognition has attracted more and more attention of researchers from different disciplines, which will significantly contribute to a new paradigm for human computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance the research in the affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables human to assess affective states robustly and flexibly. In order to understand the richness and subtleness of human emotion behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  10. Adaptive optics without altering visual perception.

    PubMed

    Koenig, D E; Hart, N W; Hofer, H J

    2014-04-01

Adaptive optics combined with visual psychophysics creates the potential to study the relationship between visual function and the retina at the cellular scale. This potential is hampered, however, by visual interference from the wavefront-sensing beacon used during correction. For example, we have previously shown that even a dim, visible beacon can alter stimulus perception (Hofer et al., 2012). Here we describe a simple strategy employing a longer-wavelength (980 nm) beacon that, in conjunction with appropriate restrictions on timing and placement, allowed us to perform psychophysics in the dark-adapted state without altering visual perception. The method was verified by comparing detection and color appearance of foveally presented small-spot stimuli with and without the wavefront beacon present in 5 subjects. As an important caution, we found that significant perceptual interference can occur even with a subliminal beacon when additional measures are not taken to limit exposure. Consequently, the lack of perceptual interference should be verified for a given system, not assumed based on the invisibility of the beacon. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Short-term visual deprivation reduces interference effects of task-irrelevant facial expressions on affective prosody judgments

    PubMed Central

    Fengler, Ineke; Nava, Elena; Röder, Brigitte

    2015-01-01

Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at a 4-week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination emerged merely as a consequence of exposure to auditory stimulation during the 3 h in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks (i.e., tactile tasks) unrelated to the later-tested abilities. The two visually deprived groups showed enhanced affective prosodic discrimination in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained at follow-up. By contrast, no changes were observed in affective facial expression discrimination or in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, an effect that may persist over longer durations. PMID:25954166

  12. Reading Disability and Visual Perception in Families: New Findings.

    ERIC Educational Resources Information Center

    Oxford, Rebecca L.

A variety of visual perception difficulties frequently correlate with reading disabilities. A study was made to investigate the relationship between visual perception and reading disability in families, and to explore the genetic aspects of that relationship. One hundred twenty-five reading-disabled students, ages 7.5 to 12 years, were matched with…

  13. High visual resolution matters in audiovisual speech perception, but only for some.

    PubMed

    Alsius, Agnès; Wayne, Rachel V; Paré, Martin; Munhall, Kevin G

    2016-07-01

    The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.

  14. [Visual perception abilities in children with reading disabilities].

    PubMed

    Werpup-Stüwe, Lina; Petermann, Franz

    2015-05-01

Visual perceptual abilities are increasingly neglected in research on reading disabilities. This study measured the visual perceptual abilities of children with reading disabilities. The visual perceptual abilities of 35 children with specific reading disorder and 30 controls were compared using the German version of the Developmental Test of Visual Perception – Adolescent and Adult (DTVP-A). 11% of the children with specific reading disorder showed clinically relevant impairment on the DTVP-A, and the perceptual abilities of the two groups differed significantly. The group differences disappeared after controlling for general IQ or the Perceptual Reasoning Index, but remained after controlling for the Verbal Comprehension, Working Memory, and Processing Speed Indices. The number of children with reading difficulties who also suffer from visual perceptual disorders has thus been underestimated. For this reason, visual perceptual abilities should always be tested when making a reading disorder diagnosis, and IQ-test profiles of children with both reading and visual perceptual disorders should be interpreted carefully.

  15. Eye movements and attention in reading, scene perception, and visual search.

    PubMed

    Rayner, Keith

    2009-08-01

    Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

  16. Human Occipital and Parietal GABA Selectively Influence Visual Perception of Orientation and Size.

    PubMed

    Song, Chen; Sandberg, Kristian; Andersen, Lau Møller; Blicher, Jakob Udby; Rees, Geraint

    2017-09-13

    GABA is the primary inhibitory neurotransmitter in human brain. The level of GABA varies substantially across individuals, and this variability is associated with interindividual differences in visual perception. However, it remains unclear whether the association between GABA level and visual perception reflects a general influence of visual inhibition or whether the GABA levels of different cortical regions selectively influence perception of different visual features. To address this, we studied how the GABA levels of parietal and occipital cortices related to interindividual differences in size, orientation, and brightness perception. We used visual contextual illusion as a perceptual assay since the illusion dissociates perceptual content from stimulus content and the magnitude of the illusion reflects the effect of visual inhibition. Across individuals, we observed selective correlations between the level of GABA and the magnitude of contextual illusion. Specifically, parietal GABA level correlated with size illusion magnitude but not with orientation or brightness illusion magnitude; in contrast, occipital GABA level correlated with orientation illusion magnitude but not with size or brightness illusion magnitude. Our findings reveal a region- and feature-dependent influence of GABA level on human visual perception. Parietal and occipital cortices contain, respectively, topographic maps of size and orientation preference in which neural responses to stimulus sizes and stimulus orientations are modulated by intraregional lateral connections. We propose that these lateral connections may underlie the selective influence of GABA on visual perception. SIGNIFICANCE STATEMENT GABA, the primary inhibitory neurotransmitter in human visual system, varies substantially across individuals. This interindividual variability in GABA level is linked to interindividual differences in many aspects of visual perception. However, the widespread influence of GABA raises the

  17. Endogenous modulation of human visual cortex activity improves perception at twilight.

    PubMed

    Cordani, Lorenzo; Tagliazucchi, Enzo; Vetter, Céline; Hassemer, Christian; Roenneberg, Till; Stehle, Jörg H; Kell, Christian A

    2018-04-10

    Perception, particularly in the visual domain, is drastically influenced by rhythmic changes in ambient lighting conditions. Anticipation of daylight changes by the circadian system is critical for survival. However, the neural bases of time-of-day-dependent modulation in human perception are not yet understood. We used fMRI to study brain dynamics during resting-state and close-to-threshold visual perception repeatedly at six times of the day. Here we report that resting-state signal variance drops endogenously at times coinciding with dawn and dusk, notably in sensory cortices only. In parallel, perception-related signal variance in visual cortices decreases and correlates negatively with detection performance, identifying an anticipatory mechanism that compensates for the deteriorated visual signal quality at dawn and dusk. Generally, our findings imply that decreases in spontaneous neural activity improve close-to-threshold perception.

  18. Visual affective classification by combining visual and text features.

    PubMed

    Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming

    2017-01-01

Affective analysis of images in social networks has drawn much attention, and the texts surrounding images have proven to provide valuable semantic information about image content that can hardly be captured by low-level visual features. In this paper, we propose a novel approach to the visual affective classification (VAC) task. The approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features that effectively capture emotional semantics from the short texts associated with images, based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos set, and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task.

  19. Visual affective classification by combining visual and text features

    PubMed Central

    Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming

    2017-01-01

Affective analysis of images in social networks has drawn much attention, and the texts surrounding images have proven to provide valuable semantic information about image content that can hardly be captured by low-level visual features. In this paper, we propose a novel approach to the visual affective classification (VAC) task. The approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features that effectively capture emotional semantics from the short texts associated with images, based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos set, and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task. PMID:28850566
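The fusion scheme this record names, Dempster-Shafer evidence theory, combines belief masses from independent sources while renormalizing away conflicting mass. A generic sketch of Dempster's combination rule follows; the "pos"/"neg" affect classes and the mass values are illustrative, not the paper's actual classifier outputs:

```python
def dempster_combine(m1, m2):
    """Combine two basic mass assignments over the same frame of
    discernment via Dempster's rule. Masses are dicts mapping
    frozenset hypotheses to belief mass; mass falling on the empty
    intersection (conflict) is renormalized away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict
    if k <= 0.0:
        raise ValueError("total conflict: sources fully disagree")
    return {h: m / k for h, m in combined.items()}

POS, NEG = frozenset({"pos"}), frozenset({"neg"})
BOTH = POS | NEG  # mass on BOTH expresses a source's uncertainty
visual = {POS: 0.6, NEG: 0.2, BOTH: 0.2}  # hypothetical visual-classifier masses
text   = {POS: 0.5, NEG: 0.3, BOTH: 0.2}  # hypothetical text-classifier masses
fused = dempster_combine(visual, text)    # POS mass rises above either source
```

Because both sources lean toward "pos", the fused mass on POS (about 0.72) exceeds either source's individual belief, which is the behavior a D-S fusion layer exploits for classification.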

  20. Visual contribution to the multistable perception of speech.

    PubMed

    Sato, Marc; Basirat, Anahita; Schwartz, Jean-Luc

    2007-11-01

    The multistable perception of speech, or verbal transformation effect, refers to perceptual changes experienced while listening to a speech form that is repeated rapidly and continuously. In order to test whether visual information from the speaker's articulatory gestures may modify the emergence and stability of verbal auditory percepts, subjects were instructed to report any perceptual changes during unimodal, audiovisual, and incongruent audiovisual presentations of distinct repeated syllables. In a first experiment, the perceptual stability of reported auditory percepts was significantly modulated by the modality of presentation. In a second experiment, when audiovisual stimuli consisting of a stable audio track dubbed with a video track that alternated between congruent and incongruent stimuli were presented, a strong correlation between the timing of perceptual transitions and the timing of video switches was found. Finally, a third experiment showed that the vocal tract opening onset event provided by the visual input could play the role of a bootstrap mechanism in the search for transformations. Altogether, these results demonstrate the capacity of visual information to control the multistable perception of speech in its phonetic content and temporal course. The verbal transformation effect thus provides a useful experimental paradigm to explore audiovisual interactions in speech perception.

  1. Visual processing and social cognition in schizophrenia: relationships among eye movements, biological motion perception, and empathy.

    PubMed

    Matsumoto, Yukiko; Takahashi, Hideyuki; Murai, Toshiya; Takahashi, Hidehiko

    2015-01-01

Schizophrenia patients have impairments at several levels of cognition including visual attention (eye movements), perception, and social cognition. However, it remains unclear how lower-level cognitive deficits influence higher-level cognition. To elucidate the hierarchical path linking deficient cognitions, we focused on biological motion perception, which involves both the early stage of visual perception (attention) and higher social cognition, and is impaired in schizophrenia. Seventeen schizophrenia patients and 18 healthy controls participated in the study. Using point-light walker stimuli, we examined eye movements during biological motion perception in schizophrenia. We assessed relationships among eye movements, biological motion perception, and empathy. In the biological motion detection task, schizophrenia patients showed lower accuracy and fixated longer than healthy controls. As opposed to controls, patients with longer fixation durations and fewer fixations demonstrated higher accuracy. Additionally, in the patient group, the correlations between accuracy and the affective empathy index and between the eye movement index and the affective empathy index were significant. The altered gaze patterns in patients indicate that top-down attention compensates for impaired bottom-up attention. Furthermore, aberrant eye movements might lead to deficits in biological motion perception and thereby link to social cognitive impairments. The current findings merit further investigation for understanding the mechanisms of social cognitive training and its development. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  2. Attitudes towards and perceptions of visual loss and its causes among Hong Kong Chinese adults.

    PubMed

    Lau, Joseph Tak Fai; Lee, Vincent; Fan, Dorothy; Lau, Mason; Michon, John

    2004-06-01

    As part of a study of visual function among Hong Kong Chinese adults, their attitudes and perceptions related to visual loss were examined. These included fear of visual loss, negative functional impacts of visual loss, the relationship between ageing and visual loss and help-seeking behaviours related to visual loss. Demographic factors associated with these variables were also studied. The study population were people aged 40 and above randomly selected from the Shatin district of Hong Kong. The participants underwent eye examinations that included visual acuity, intraocular pressure measurement, visual field, slit-lamp biomicroscopy and ophthalmoscopy. The primary cause of visual disability was recorded. The participants were also asked about their attitudes and perceptions regarding visual loss using a structured questionnaire. The prevalence of bilateral visual disability was 2.2% among adults aged 40 or above and 6.4% among adults aged 60 or above. Nearly 36% of the participants selected blindness as the most feared disabling medical condition, which was substantially higher than conditions such as dementia, loss of limbs, deafness or aphasia. Inability to take care of oneself (21.0%), inconvenience related to mobility (20.2%) and inability to work (14.8%) were the three most commonly mentioned 'worst impact' effects of visual loss. Fully 68% of the participants believed that loss of vision is related to ageing. A majority of participants would seek help and advice from family members in case of visual loss. Visual function is perceived to be very important by Hong Kong Chinese adults. The fear of visual loss is widespread and particularly affects self-care and functional abilities. Visual loss is commonly seen as related to ageing. Attitudes and perceptions in this population may be modified by educational and outreach efforts in order to take advantage of preventive measures.

  3. Adaptive optics without altering visual perception

    PubMed Central

Koenig, D. E.; Hart, N. W.; Hofer, H. J.

    2014-01-01

Adaptive optics combined with visual psychophysics creates the potential to study the relationship between visual function and the retina at the cellular scale. This potential is hampered, however, by visual interference from the wavefront-sensing beacon used during correction. For example, we have previously shown that even a dim, visible beacon can alter stimulus perception (Hofer, H. J., Blaschke, J., Patolia, J., & Koenig, D. E. (2012). Fixation light hue bias revisited: Implications for using adaptive optics to study color vision. Vision Research, 56, 49-56). Here we describe a simple strategy employing a longer-wavelength (980 nm) beacon that, in conjunction with appropriate restrictions on timing and placement, allowed us to perform psychophysics in the dark-adapted state without altering visual perception. The method was verified by comparing detection and color appearance of foveally presented small-spot stimuli with and without the wavefront beacon present in 5 subjects. As an important caution, we found that significant perceptual interference can occur even with a subliminal beacon when additional measures are not taken to limit exposure. Consequently, the lack of perceptual interference should be verified for a given system, not assumed based on the invisibility of the beacon. PMID:24607992

  4. Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception

    PubMed Central

    Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.

    2011-01-01

Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one, and participants' performance is consistent with an optimal model in which environmental, within-category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344

  5. Auditory, visual, and auditory-visual perceptions of emotions by young children with hearing loss versus children with normal hearing.

    PubMed

    Most, Tova; Michaelis, Hilit

    2012-08-01

This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children aged 4.0-6.6 years with prelingual sensorineural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.

  6. Arousal Rules: An Empirical Investigation into the Aesthetic Experience of Cross-Modal Perception with Emotional Visual Music

    PubMed Central

    Lee, Irene Eunyoung; Latchoumane, Charles-Francois V.; Jeong, Jaeseung

    2017-01-01

Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible mechanisms of affective evaluation in relation to the audio-visual integration condition between modalities (namely congruent, complementary, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (−) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted in subjects' emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects. PMID:28421007

  7. Arousal Rules: An Empirical Investigation into the Aesthetic Experience of Cross-Modal Perception with Emotional Visual Music.

    PubMed

    Lee, Irene Eunyoung; Latchoumane, Charles-Francois V; Jeong, Jaeseung

    2017-01-01

Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible mechanisms of affective evaluation in relation to the audio-visual integration condition between modalities (namely congruent, complementary, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (-) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted in subjects' emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects.
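The frontal alpha power asymmetry index mentioned in this record as a valence proxy is conventionally computed as the log difference between right- and left-frontal alpha-band (roughly 8-13 Hz) power. A minimal sketch follows; the synthetic signals, sampling rate, and band edges are illustrative, not the study's recording parameters:

```python
import numpy as np

def alpha_asymmetry(left, right, fs, band=(8.0, 13.0)):
    """Frontal alpha asymmetry: ln(right alpha power) - ln(left alpha power).
    Under the common interpretation, higher values (relatively less
    left-frontal alpha, i.e. more left activation) index more positive,
    approach-related valence."""
    def band_power(x):
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2          # periodogram (unnormalized)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].mean()
    return np.log(band_power(right)) - np.log(band_power(left))

# Synthetic check: a stronger 10 Hz alpha rhythm on the left electrode
# yields a negative asymmetry index.
fs = 256
t = np.arange(0, 4, 1 / fs)
left = 2.0 * np.sin(2 * np.pi * 10 * t)    # higher-amplitude left alpha
right = 1.0 * np.sin(2 * np.pi * 10 * t)
faa = alpha_asymmetry(left, right, fs)     # negative, about -ln(4)
```

In practice the index is computed on artifact-cleaned EEG with a windowed PSD estimate (e.g. Welch's method) rather than a raw periodogram; the log-ratio form above is the same.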

  8. Vividness of Visual Imagery Depends on the Neural Overlap with Perception in Visual Areas.

    PubMed

    Dijkstra, Nadine; Bosch, Sander E; van Gerven, Marcel A J

    2017-02-01

    Research into the neural correlates of individual differences in imagery vividness point to an important role of the early visual cortex. However, there is also great fluctuation of vividness within individuals, such that only looking at differences between people necessarily obscures the picture. In this study, we show that variation in moment-to-moment experienced vividness of visual imagery, within human subjects, depends on the activity of a large network of brain areas, including frontal, parietal, and visual areas. Furthermore, using a novel multivariate analysis technique, we show that the neural overlap between imagery and perception in the entire visual system correlates with experienced imagery vividness. This shows that the neural basis of imagery vividness is much more complicated than studies of individual differences seemed to suggest. Visual imagery is the ability to visualize objects that are not in our direct line of sight: something that is important for memory, spatial reasoning, and many other tasks. It is known that the better people are at visual imagery, the better they can perform these tasks. However, the neural correlates of moment-to-moment variation in visual imagery remain unclear. In this study, we show that the more the neural response during imagery is similar to the neural response during perception, the more vivid or perception-like the imagery experience is. Copyright © 2017 the authors 0270-6474/17/371367-07$15.00/0.

  9. Improving visual perception through neurofeedback

    PubMed Central

    Scharnowski, Frank; Hutton, Chloe; Josephs, Oliver; Weiskopf, Nikolaus; Rees, Geraint

    2012-01-01

    Perception depends on the interplay of ongoing spontaneous activity and stimulus-evoked activity in sensory cortices. This raises the possibility that training ongoing spontaneous activity alone might be sufficient for enhancing perceptual sensitivity. To test this, we trained human participants to control ongoing spontaneous activity in circumscribed regions of retinotopic visual cortex using real-time functional MRI based neurofeedback. After training, we tested participants using a new and previously untrained visual detection task that was presented at the visual field location corresponding to the trained region of visual cortex. Perceptual sensitivity was significantly enhanced only when participants who had previously learned control over ongoing activity were now exercising control, and only for that region of visual cortex. Our new approach allows us to non-invasively and non-pharmacologically manipulate regionally specific brain activity, and thus provide ‘brain training’ to deliver particular perceptual enhancements. PMID:23223302

  10. Attentional Episodes in Visual Perception

    ERIC Educational Resources Information Center

    Wyble, Brad; Potter, Mary C.; Bowman, Howard; Nieuwenstein, Mark

    2011-01-01

    Is one's temporal perception of the world truly as seamless as it appears? This article presents a computationally motivated theory suggesting that visual attention samples information from temporal episodes (episodic simultaneous type/serial token model; Wyble, Bowman, & Nieuwenstein, 2009). Breaks between these episodes are punctuated by periods…

  11. Visual Perception Based Rate Control Algorithm for HEVC

    NASA Astrophysics Data System (ADS)

    Feng, Zeqi; Liu, PengYu; Jia, Kebin

    2018-01-01

    For HEVC, rate control is an indispensable video coding technology for alleviating the contradiction between video quality and limited encoding resources during video communication. However, the HEVC rate control benchmark algorithm ignores subjective visual perception: for key focus regions, LCU bit allocation is not ideal and subjective quality is unsatisfactory. In this paper, a visual perception based rate control algorithm for HEVC is proposed. First, the LCU-level bit allocation weight is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, at no cost in bitrate accuracy, compared with HEVC (HM15.0). The proposed algorithm is devoted to improving subjective video quality across various video applications.
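    The abstract gives no formulas, so the following is only a rough sketch of the kind of perception-weighted LCU bit allocation it describes; the linear luminance/motion blend, the `alpha` parameter, and the saliency inputs are illustrative assumptions, not the paper's actual model.

    ```python
    # Illustrative sketch only: distribute a frame's bit budget across LCUs in
    # proportion to a perceptual weight combining luminance and motion saliency.
    # The combination rule and alpha are placeholder assumptions.

    def perceptual_weights(luma_saliency, motion_saliency, alpha=0.5):
        """Blend per-LCU luminance and motion saliency, then normalize to sum to 1."""
        raw = [alpha * l + (1 - alpha) * m
               for l, m in zip(luma_saliency, motion_saliency)]
        total = sum(raw)
        return [r / total for r in raw]

    def allocate_bits(frame_budget, weights):
        """Split the frame-level bit budget across LCUs by perceptual weight."""
        return [frame_budget * w for w in weights]

    # Hypothetical saliency values for four LCUs; the most salient LCU gets more bits.
    luma = [0.2, 0.8, 0.5, 0.5]
    motion = [0.1, 0.9, 0.4, 0.6]
    w = perceptual_weights(luma, motion)
    bits = allocate_bits(10_000, w)  # e.g., a 10 kbit frame budget
    ```

    In the paper, λ and QP would then be adjusted per LCU from such a weight; that mapping is not reproduced here.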

  12. Optical phonetics and visual perception of lexical and phrasal stress in English.

    PubMed

    Scarborough, Rebecca; Keating, Patricia; Mattys, Sven L; Cho, Taehong; Alwan, Abeer

    2009-01-01

    In a study of optical cues to the visual perception of stress, three American English talkers spoke words that differed in lexical stress and sentences that differed in phrasal stress, while video and movements of the face were recorded. The production of stressed and unstressed syllables from these utterances was analyzed along many measures of facial movement, which were generally larger and faster in the stressed condition. In a visual perception experiment, 16 perceivers identified the location of stress in forced-choice judgments of video clips of these utterances (without audio). Phrasal stress was better perceived than lexical stress. The relation of the visual intelligibility of the prosody of these utterances to the optical characteristics of their production was analyzed to determine which cues are associated with successful visual perception. While most optical measures were correlated with perception performance, chin measures, especially Chin Opening Displacement, contributed the most to correct perception independently of the other measures. Thus, our results indicate that the information for visual stress perception is mainly associated with mouth opening movements.

  13. Video quality assessment method motivated by human visual perception

    NASA Astrophysics Data System (ADS)

    He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng

    2016-11-01

    Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive fields of V1 neurons for motion perception in the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper that comprises a motion perception quality index and a spatial index. More specifically, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from a difference-of-Gaussian filter bank, which produces the motion perception quality index, and a gradient similarity measure is used to evaluate the spatial distortion of the video sequence to obtain the spatial quality index. Experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that a random forests regression model trained on the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
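    The abstract does not specify the exact gradient similarity measure, so here is a minimal SSIM-style sketch of what a gradient-similarity spatial index typically looks like; the finite-difference gradient, the stabilizing constant `c`, and the toy inputs are assumptions.

    ```python
    # Illustrative sketch (not the paper's exact measure): compare the gradient
    # maps of a reference and a distorted frame with an SSIM-style ratio.

    def gradient_mag(img):
        """Per-pixel gradient magnitude from clamped central differences."""
        h, w = len(img), len(img[0])
        g = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                gx = img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]
                gy = img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]
                g[y][x] = (gx * gx + gy * gy) ** 0.5
        return g

    def gradient_similarity(ref, dist, c=1e-3):
        """Mean of (2ab + c) / (a² + b² + c) over the two gradient maps."""
        gr, gd = gradient_mag(ref), gradient_mag(dist)
        vals = [(2 * a * b + c) / (a * a + b * b + c)
                for ra, da in zip(gr, gd) for a, b in zip(ra, da)]
        return sum(vals) / len(vals)

    # Tiny hypothetical luminance patches: identical frames score 1.0,
    # a flattened (blurred-out) frame scores lower.
    ref = [[0, 10, 0], [10, 0, 10], [0, 10, 0]]
    flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
    same = gradient_similarity(ref, ref)
    worse = gradient_similarity(ref, flat)
    ```

    The ratio equals 1 wherever the two gradient magnitudes agree and falls toward 0 as they diverge, which is why edge-destroying distortions lower the index.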

  14. Visual body perception in anorexia nervosa.

    PubMed

    Urgesi, Cosimo; Fornasari, Livia; Perini, Laura; Canalaz, Francesca; Cremaschi, Silvana; Faleschini, Laura; Balestrieri, Matteo; Fabbro, Franco; Aglioti, Salvatore Maria; Brambilla, Paolo

    2012-05-01

    Disturbance of body perception is a central aspect of anorexia nervosa (AN) and several neuroimaging studies have documented structural and functional alterations of occipito-temporal cortices involved in visual body processing. However, it is unclear whether these perceptual deficits involve more basic aspects of others' body perception. A consecutive sample of 15 adolescent patients with AN were compared with a group of 15 age- and gender-matched controls in delayed matching-to-sample tasks requiring visual discrimination of the form or the action of others' bodies. Patients showed better visual discrimination performance than controls in detail-based processing of body forms but not of body actions, and this advantage positively correlated with their increased tendency to convert a signal of punishment into a signal of reinforcement (higher persistence scores). The paradoxical advantage of patients with AN in detail-based body processing may be associated with their tendency to routinely scrutinize body parts as a consequence of their obsessive worries about body appearance. Copyright © 2012 Wiley Periodicals, Inc.

  15. Affect of the unconscious: Visually suppressed angry faces modulate our decisions

    PubMed Central

    Almeida, Jorge; Pajtas, Petra E.; Mahon, Bradford Z.; Nakayama, Ken; Caramazza, Alfonso

    2016-01-01

    Emotional and affective processing imposes itself over cognitive processes and modulates our perception of the surrounding environment. In two experiments, we addressed the issue of whether nonconscious processing of affect can take place even under deep states of unawareness, such as those induced by interocular suppression techniques, and can elicit an affective response that can influence our understanding of the surrounding environment. In Experiment 1, participants judged the likeability of an unfamiliar item—a Chinese character—that was preceded by a face expressing a particular emotion (either happy or angry). The face was rendered invisible through an interocular suppression technique (continuous flash suppression; CFS). In Experiment 2, backward masking (BM), a less robust masking technique, was used to render the facial expressions invisible. We found that despite equivalent phenomenological suppression of the visual primes under CFS and BM, different patterns of affective processing were obtained with the two masking techniques. Under BM, nonconscious affective priming was obtained for both happy and angry invisible facial expressions. However, under CFS, nonconscious affective priming was obtained only for angry facial expressions. We discuss an interpretation of this dissociation between affective processing and visual masking techniques in terms of distinct routes from the retina to the amygdala. PMID:23224765

  16. Affect of the unconscious: visually suppressed angry faces modulate our decisions.

    PubMed

    Almeida, Jorge; Pajtas, Petra E; Mahon, Bradford Z; Nakayama, Ken; Caramazza, Alfonso

    2013-03-01

    Emotional and affective processing imposes itself over cognitive processes and modulates our perception of the surrounding environment. In two experiments, we addressed the issue of whether nonconscious processing of affect can take place even under deep states of unawareness, such as those induced by interocular suppression techniques, and can elicit an affective response that can influence our understanding of the surrounding environment. In Experiment 1, participants judged the likeability of an unfamiliar item (a Chinese character) that was preceded by a face expressing a particular emotion (either happy or angry). The face was rendered invisible through an interocular suppression technique (continuous flash suppression; CFS). In Experiment 2, backward masking (BM), a less robust masking technique, was used to render the facial expressions invisible. We found that despite equivalent phenomenological suppression of the visual primes under CFS and BM, different patterns of affective processing were obtained with the two masking techniques. Under BM, nonconscious affective priming was obtained for both happy and angry invisible facial expressions. However, under CFS, nonconscious affective priming was obtained only for angry facial expressions. We discuss an interpretation of this dissociation between affective processing and visual masking techniques in terms of distinct routes from the retina to the amygdala.

  17. Effects of color combination and ambient illumination on visual perception time with TFT-LCD.

    PubMed

    Lin, Chin-Chiuan; Huang, Kuo-Chen

    2009-10-01

    An empirical study examined the effects of color combination and ambient illumination on visual perception time with a TFT-LCD. The effect of color combination was broken down into two subfactors: luminance contrast ratio and chromaticity contrast. Analysis indicated that luminance contrast ratio and ambient illumination had significant, though small, effects on visual perception. Visual perception time was better at a high luminance contrast ratio than at a low one. Visual perception time under normal ambient illumination was better than at other ambient illumination levels, although the stimulus color had a confounding effect on visual perception time. In general, visual perception time was better for the primary colors than for the middle-point colors. Based on the results, a normal ambient illumination level and a high luminance contrast ratio seem to be the optimal choices for the design of workplaces with TFT-LCD video display terminals.

  18. [Peculiarities of visual perception of dentition and smile aesthetic parameters].

    PubMed

    Riakhovskiĭ, A N; Usanova, E V

    2007-01-01

    The studies determined the limits within which displacement of the dentition central line from the facial midline, and changes in the tilt angle of the smile line, become noticeable to visual perception. They also determined how much visual perception of dentition aesthetic parameters differs among doctors with different levels of experience, dental technicians, and patients.

  19. Auditory-visual fusion in speech perception in children with cochlear implants

    PubMed Central

    Schorr, Efrat A.; Fox, Nathan A.; van Wassenhove, Virginie; Knudsen, Eric I.

    2005-01-01

    Speech, for most of us, is a bimodal percept whenever we both hear the voice and see the lip movements of a speaker. Children who are born deaf never have this bimodal experience. We tested children who had been deaf from birth and who subsequently received cochlear implants for their ability to fuse the auditory information provided by their implants with visual information about lip movements for speech perception. For most of the children with implants (92%), perception was dominated by vision when visual and auditory speech information conflicted. For some, bimodal fusion was strong and consistent, demonstrating a remarkable plasticity in their ability to form auditory-visual associations despite the atypical stimulation provided by implants. The likelihood of consistent auditory-visual fusion declined with age at implant beyond 2.5 years, suggesting a sensitive period for bimodal integration in speech perception. PMID:16339316

  20. Living Large: Affect Amplification in Visual Perception Predicts Emotional Reactivity to Events in Daily Life

    PubMed Central

    Palder, Spencer L.; Ode, Scott; Liu, Tianwei; Robinson, Michael D.

    2012-01-01

    It was hypothesized that affect-amplifying individuals would be more reactive to affective events in daily life. Affect amplification was quantified in terms of overestimating the font size of positive and negative, relative to neutral, words in a basic perception task. Subsequently, the same (N = 70) individuals completed a daily diary protocol in which they reported on levels of daily stressors, provocations, and social support as well as six emotion-related outcomes for 14 consecutive days. Individual differences in affect amplification moderated reactivity to daily affective events in all such analyses. For example, daily stressor levels predicted cognitive failures at high, but not low, levels of affect amplification. Affect amplification, then, appears to have widespread utility in understanding individual differences in emotional reactivity. PMID:22989107

  1. Visual enhancing of tactile perception in the posterior parietal cortex.

    PubMed

    Ro, Tony; Wallace, Ruth; Hagedorn, Judith; Farnè, Alessandro; Pienkos, Elizabeth

    2004-01-01

    The visual modality typically dominates over our other senses. Here we show that after inducing an extreme conflict in the left hand between vision of touch (present) and the feeling of touch (absent), sensitivity to touch increases for several minutes after the conflict. Transcranial magnetic stimulation of the posterior parietal cortex after this conflict not only eliminated the enduring visual enhancement of touch, but also impaired normal tactile perception. This latter finding demonstrates a direct role of the parietal lobe in modulating tactile perception as a result of the conflict between these senses. These results provide evidence for visual-to-tactile perceptual modulation and demonstrate effects of illusory vision of touch on touch perception through a long-lasting modulatory process in the posterior parietal cortex.

  2. Odors Bias Time Perception in Visual and Auditory Modalities

    PubMed Central

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for the visual and auditory modalities. The analysis of the interaction effect showed that participants produced durations longer than the actual duration in the short-interval condition, but shorter durations in the long-interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, constrained by sensory modality, the valence of the emotional events, and target duration. Biases in time perception could be accounted for by a framework of

  3. Audibility and visual biasing in speech perception

    NASA Astrophysics Data System (ADS)

    Clement, Bart Richard

    Although speech perception has been considered a predominantly auditory phenomenon, large benefits from vision in degraded acoustic conditions suggest integration of audition and vision. More direct evidence of this comes from studies of audiovisual disparity that demonstrate vision can bias and even dominate perception (McGurk & MacDonald, 1976). It has been observed that hearing-impaired listeners demonstrate more visual biasing than normally hearing listeners (Walden et al., 1990). It is argued here that stimulus audibility must be equated across groups before true differences can be established. In the present investigation, effects of visual biasing on perception were examined as audibility was degraded for 12 young normally hearing listeners. Biasing was determined by quantifying the degree to which listener identification functions for a single synthetic auditory /ba-da-ga/ continuum changed across two conditions: (1) an auditory-only listening condition; and (2) an auditory-visual condition in which every item of the continuum was synchronized with visual articulations of the consonant-vowel (CV) tokens /ba/ and /ga/, as spoken by each of two talkers. Audibility was altered by presenting the conditions in quiet and in noise at each of three signal-to-noise (S/N) ratios. For the visual-/ba/ context, large effects of audibility were found. As audibility decreased, visual biasing increased. A large talker effect also was found, with one talker eliciting more biasing than the other. An independent lipreading measure demonstrated that this talker was more visually intelligible than the other. For the visual-/ga/ context, audibility and talker effects were less robust, possibly obscured by strong listener effects, which were characterized by marked differences in perceptual processing patterns among participants. Some demonstrated substantial biasing whereas others demonstrated little, indicating a strong reliance on audition even in severely degraded acoustic

  4. Flow, affect and visual creativity.

    PubMed

    Cseh, Genevieve M; Phillips, Louise H; Pearson, David G

    2015-01-01

    Flow (being in the zone) is purported to have positive consequences in terms of affect and performance; however, there is no empirical evidence about these links in visual creativity. Positive affect often--but inconsistently--facilitates creativity, and both may be linked to experiencing flow. This study aimed to determine relationships between these variables within visual creativity. Participants performed the creative mental synthesis task to simulate the creative process. Affect change (pre- vs. post-task) and flow were measured via questionnaires. The creativity of synthesis drawings was rated objectively and subjectively by judges. Findings empirically demonstrate that flow is related to affect improvement during visual creativity. Affect change was linked to productivity and self-rated creativity, but no other objective or subjective performance measures. Flow was unrelated to all external performance measures but was highly correlated with self-rated creativity; flow may therefore motivate perseverance towards eventual excellence rather than provide direct cognitive enhancement.

  5. Ambiguities and conventions in the perception of visual art.

    PubMed

    Mamassian, Pascal

    2008-09-01

    Visual perception is ambiguous, and the visual arts play with these ambiguities. While perceptual ambiguities are resolved with prior constraints, artistic ambiguities are resolved by conventions. Is there a relationship between priors and conventions? This review surveys recent work related to these ambiguities in composition, spatial scale, illumination and color, three-dimensional layout, shape, and movement. While most conventions seem to have their roots in perceptual constraints, those conventions that differ from priors may help us appreciate how visual arts differ from everyday perception.

  6. Auditory, visual, and auditory-visual perception of emotions by individuals with cochlear implants, hearing aids, and normal hearing.

    PubMed

    Most, Tova; Aviner, Chen

    2009-01-01

    This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust. The emotional content was placed upon the same neutral sentence. The stimuli were presented in auditory, visual, and combined auditory-visual modes. The results revealed better auditory identification by the participants with NH in comparison to all groups of participants with hearing loss (HL). No differences were found among the groups with HL in each of the 3 modes. Although auditory-visual perception was better than visual-only perception for the participants with NH, no such differentiation was found among the participants with HL. The results question the efficiency of some currently used CIs in providing the acoustic cues required to identify the speaker's emotional state.

  7. Public health nurse perceptions of Omaha System data visualization.

    PubMed

    Lee, Seonah; Kim, Era; Monsen, Karen A

    2015-10-01

    Electronic health records (EHRs) provide many benefits related to the storage, deployment, and retrieval of large amounts of patient data. However, EHRs have not fully met the need to reuse data for decision making on follow-up care plans. Visualization offers new ways to present health data, especially in EHRs. Well-designed data visualization allows clinicians to communicate information efficiently and effectively, contributing to improved interpretation of clinical data and better patient care monitoring and decision making. Public health nurse (PHN) perceptions of Omaha System data visualization prototypes for use in EHRs have not been evaluated. This study aimed to visualize PHN-generated Omaha System data and to assess PHN perceptions of the visual validity, helpfulness, usefulness, and importance of the visualizations, including interactive functionality. A time-oriented visualization for problems and outcomes and a matrix visualization for problems and interventions were developed using PHN-generated Omaha System data to help PHNs consume data and plan care at the point of care. Eleven PHNs evaluated the prototype visualizations. Overall, PHN responses to the visualizations were positive, and feedback for improvement was provided. This study demonstrated the potential for using visualization techniques within EHRs to summarize Omaha System patient data for clinicians. Further research is needed to improve and refine these visualizations and assess the potential to incorporate visualizations within clinical EHRs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Visual Perception of Elevation

    DTIC Science & Technology

    1992-01-20

    [Report documentation page garbled by OCR. Recoverable details: keywords include spatial localization, pitch, roll, eye level, visual localization, VPR, VPV, perception; contact Dr. John Tangney, AFOSR, (202) 767-5021; Schermerhorn Hall, New York, NY 10027; Interim Report for Period 1 January 1991 - 31 December 1991; Unclassified.]

  9. Relationships between Fine-Motor, Visual-Motor, and Visual Perception Scores and Handwriting Legibility and Speed

    ERIC Educational Resources Information Center

    Klein, Sheryl; Guiltner, Val; Sollereder, Patti; Cui, Ying

    2011-01-01

    Occupational therapists assess fine motor, visual motor, visual perception, and visual skill development, but knowledge of the relationships between scores on sensorimotor performance measures and handwriting legibility and speed is limited. Ninety-nine students in grades three to six with learning and/or behavior problems completed the Upper-Limb…

  10. Affective state influences perception by affecting decision parameters underlying bias and sensitivity.

    PubMed

    Lynn, Spencer K; Zhang, Xuan; Barrett, Lisa Feldman

    2012-08-01

    Studies of the effect of affect on perception often show consistent directional effects of a person's affective state on perception. Unpleasant emotions have been associated with a "locally focused" style of stimulus evaluation, and positive emotions with a "globally focused" style. Typically, however, studies of affect and perception have not been conducted under the conditions of perceptual uncertainty and behavioral risk inherent to perceptual judgments outside the laboratory. We investigated the influence of perceivers' experienced affect (valence and arousal) on the utility of social threat perception by combining signal detection theory and behavioral economics. We compared 3 perceptual decision environments that systematically differed with respect to factors that underlie uncertainty and risk: the base rate of threat, the costs of incorrectly identifying threat, and the perceptual similarity of threats and nonthreats. We found that no single affective state yielded the best performance on the threat perception task across the 3 environments. Unpleasant valence promoted calibration of response bias to base rate and costs, high arousal promoted calibration of perceptual sensitivity to perceptual similarity, and low arousal was associated with an optimal adjustment of bias to sensitivity. However, the strength of these associations was conditional upon the difficulty of attaining optimal bias and high sensitivity, such that the effect of the perceiver's affective state on perception differed with the cause and/or level of uncertainty and risk.
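    The signal-detection quantities this study combines with payoffs can be sketched with textbook formulas; the following is a minimal illustration (not the authors' code), and the hit/false-alarm rates and payoff values are hypothetical.

    ```python
    # Textbook signal-detection sketch: sensitivity d', response criterion c,
    # and the likelihood-ratio bias that maximizes expected value given a
    # threat base rate and payoffs (Green & Swets-style formulas).
    from statistics import NormalDist

    z = NormalDist().inv_cdf  # probit (inverse standard-normal CDF)

    def dprime(hit_rate, fa_rate):
        """Perceptual sensitivity: z(H) - z(FA)."""
        return z(hit_rate) - z(fa_rate)

    def criterion(hit_rate, fa_rate):
        """Response criterion c: negative when biased toward 'threat'."""
        return -0.5 * (z(hit_rate) + z(fa_rate))

    def optimal_beta(base_rate, v_correct_reject, v_hit, cost_fa, cost_miss):
        """Likelihood-ratio criterion that maximizes expected value."""
        return ((1 - base_rate) / base_rate) * \
               ((v_correct_reject + cost_fa) / (v_hit + cost_miss))

    d = dprime(0.8, 0.2)      # symmetric performance
    c = criterion(0.8, 0.2)   # symmetric rates imply no bias (c = 0)
    beta = optimal_beta(0.5, 1, 1, 1, 1)  # neutral payoffs, 50% base rate
    ```

    Under neutral payoffs and a 50% base rate the optimal likelihood-ratio criterion is 1 (no bias); rarer or costlier threats shift it, which is the sense in which bias must be "calibrated to base rate and costs."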

  11. Top-down control of visual perception: attention in natural vision.

    PubMed

    Rolls, Edmund T

    2008-01-01

    Top-down perceptual influences can bias (or pre-empt) perception. In natural scenes, the receptive fields of neurons in the inferior temporal visual cortex (IT) shrink to become close to the size of objects. This facilitates the read-out of information from the ventral visual system, because the information is primarily about the object at the fovea. Top-down attentional influences are much less evident in natural scenes than when objects are shown against blank backgrounds, though they are still present. It is suggested that the reduced receptive-field size in natural scenes and the effects of top-down attention contribute to change blindness. The receptive fields of IT neurons in complex scenes, though including the fovea, are frequently asymmetric around the fovea, and it is proposed that this is the solution IT uses to represent multiple objects and their relative spatial positions in a scene. Networks that implement probabilistic decision-making are described, and it is suggested that, when such networks in perceptual systems take decisions (or 'test hypotheses'), they influence lower-level networks to bias visual perception. Finally, it is shown that similar processes extend to systems involved in the processing of emotion-provoking sensory stimuli, in that word-level cognitive states provide top-down biasing that reaches as far down as the orbitofrontal cortex, where, at the first stage of affective representations, olfactory, taste, flavour, and touch processing is biased (or pre-empted) in humans.

  12. Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.

    PubMed

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L

    2017-05-01

    Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.

  13. Dynamic Stimuli And Active Processing In Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Haber, Ralph N.

    1990-03-01

    Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted with newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered dynamic: humans process the systematic changes in patterned light that occur when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies about what the visual environment contains. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.

  14. Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.

    PubMed

    Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi

    2017-07-01

    Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion for static visual stimuli (sound-induced visual motion: SIVM): a visual stimulus blinking at a fixed location is perceived as moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activity in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Visual perception and regulatory conflict: motivation and physiology influence distance perception.

    PubMed

    Cole, Shana; Balcetis, Emily; Zhang, Sam

    2013-02-01

    Regulatory conflict can emerge when people experience a strong motivation to act on goals but a conflicting inclination to withhold action because the physical resources available, or physiological potentials, are low. This study demonstrated that distance perception is biased in ways that theory suggests assist in managing this conflict. Participants estimated the distance to a target location. Individual differences in physiological potential, measured via waist-to-hip ratio, interacted with manipulated motivational states to predict visual perception. Among people low in physiological potential and likely to experience regulatory conflict, the environment appeared easier to traverse when motivation was strong compared with weak. Among people high in potential and less likely to experience conflict, perception was not predicted by motivational strength. The role of motivated distance perception in self-regulation is discussed. © 2013 APA, all rights reserved.

  16. Affective State Influences Perception by Affecting Decision Parameters Underlying Bias and Sensitivity

    PubMed Central

    Lynn, Spencer K.; Zhang, Xuan; Barrett, Lisa Feldman

    2012-01-01

    Studies of the effect of affect on perception often show consistent directional effects of a person’s affective state on perception. Unpleasant emotions have been associated with a “locally focused” style of stimulus evaluation, and positive emotions with a “globally focused” style. Typically, however, studies of affect and perception have not been conducted under the conditions of perceptual uncertainty and behavioral risk inherent to perceptual judgments outside the laboratory. We investigated the influence of perceivers’ experienced affect (valence and arousal) on the utility of social threat perception by combining signal detection theory and behavioral economics. We created three perceptual decision environments that systematically differed with respect to factors that underlie uncertainty and risk: the base rate of threat, the costs of incorrectly identifying threat, and the perceptual similarity of threats and non-threats. We found that no single affective state yielded the best performance on the threat perception task across the three environments. Unpleasant valence promoted calibration of response bias to base rate and costs, high arousal promoted calibration of perceptual sensitivity to perceptual similarity, and low arousal was associated with an optimal adjustment of bias to sensitivity. However, the strength of these associations was conditional upon the difficulty of attaining optimal bias and high sensitivity, such that the effect of the perceiver’s affective state on perception differed with the cause and/or level of uncertainty and risk. PMID:22251054
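    The analysis above separates perceptual sensitivity (d′) from response bias (criterion). A minimal sketch of the standard signal-detection estimators for those two quantities, computed from raw trial counts (the log-linear correction and all names here are illustrative, not taken from the paper):

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Estimate sensitivity (d') and response bias (criterion c)
    from the four outcome counts of a yes/no detection task."""
    # Log-linear correction keeps z-scores finite at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # probit transform
    d_prime = z(hit_rate) - z(fa_rate)             # separation of distributions
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # negative = liberal bias
    return d_prime, criterion
```

    Under this convention, symmetric performance (equal hit and correct-rejection counts) yields a criterion near zero, while inflating false alarms at a fixed hit count drives the criterion negative, i.e., a more liberal observer.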

  17. Neuron analysis of visual perception

    NASA Technical Reports Server (NTRS)

    Chow, K. L.

    1980-01-01

    The receptive fields of single cells in the visual system of the cat and squirrel monkey were studied, investigating the vestibular input affecting the cells and the cells' responses during the visual discrimination learning process. The receptive field characteristics of the rabbit visual system, its normal development, its abnormal development following visual deprivation, and the structural and functional re-organization of the visual system following neonatal and prenatal surgery were also studied. The results of each individual part of each investigation are detailed.

  18. Modeling visual clutter perception using proto-object segmentation

    PubMed Central

    Yu, Chen-Ping; Samaras, Dimitris; Zelinsky, Gregory J.

    2014-01-01

    We introduce the proto-object model of visual clutter perception. This unsupervised model segments an image into superpixels, then merges neighboring superpixels that share a common color cluster to obtain proto-objects—defined here as spatially extended regions of coherent features. Clutter is estimated by simply counting the number of proto-objects. We tested this model using 90 images of realistic scenes that were ranked by observers from least to most cluttered. Comparing this behaviorally obtained ranking to a ranking based on the model clutter estimates, we found a significant correlation between the two (Spearman's ρ = 0.814, p < 0.001). We also found that the proto-object model was highly robust to changes in its parameters and was generalizable to unseen images. We compared the proto-object model to six other models of clutter perception and demonstrated that it outperformed each, in some cases dramatically. Importantly, we also showed that the proto-object model was a better predictor of clutter perception than an actual count of the number of objects in the scenes, suggesting that the set size of a scene may be better described by proto-objects than objects. We conclude that the success of the proto-object model is due in part to its use of an intermediate level of visual representation—one between features and objects—and that this is evidence for the potential importance of a proto-object representation in many common visual percepts and tasks. PMID:24904121
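    The model's validation hinges on Spearman's rank correlation between the behavioral clutter ranking and the model's ranking. A minimal sketch of that statistic, computed via tie-averaged ranks (illustrative code, not the authors' implementation):

```python
import math

def tied_ranks(values):
    """1-based ranks, averaging over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of equal values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = tied_ranks(x), tied_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)
```

    A perfectly monotone agreement between the two rankings returns ρ = 1; the ρ = 0.814 reported above thus indicates strong but imperfect monotone agreement across the 90 scenes.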

  19. Expectation affects verbal judgments but not reaches to visually perceived egocentric distances.

    PubMed

    Pagano, Christopher C; Isenhower, Robert W

    2008-04-01

    Two response measures for reporting visually perceived egocentric distances-verbal judgments and blind manual reaches-were compared using a within-trial methodology. The expected range of possible target distances was manipulated by instructing the subjects that the targets would be between .50 and 1.00 of their maximum arm reach in one session and between .25 and .90 in another session. The actual range of target distances was always .50-.90. Verbal responses varied as a function of the range of expected distances, whereas simultaneous reaches remained unaffected. These results suggest that verbal responses are subject to a cognitive influence that does not affect actions. It is suggested that action responses are indicative of absolute perception, whereas cognitive responses may reflect only relative perception. The results also indicate that the dependent variable utilized for the study of depth perception will influence the obtained results.

  20. Analyzing the Reading Skills and Visual Perception Levels of First Grade Students

    ERIC Educational Resources Information Center

    Çayir, Aybala

    2017-01-01

    The purpose of this study was to analyze primary school first grade students' reading levels and their correlation with visual perception skills. For this purpose, students' reading speed, reading comprehension and reading errors were determined using The Informal Reading Inventory. Students' visual perception levels were also analyzed using…

  1. PERCEPT Indoor Navigation System for the Blind and Visually Impaired: Architecture and Experimentation

    PubMed Central

    Ganz, Aura; Schafer, James; Gandhi, Siddhesh; Puleo, Elaine; Wilson, Carole; Robertson, Meg

    2012-01-01

    We introduce the PERCEPT system, an indoor navigation system for the blind and visually impaired. PERCEPT will improve the quality of life and health of the visually impaired community by enabling independent living. Using PERCEPT, blind users will have independent access to public health facilities such as clinics, hospitals, and wellness centers. Access to healthcare facilities is crucial for this population due to the multiple health conditions that they face, such as diabetes and its complications. PERCEPT trials with 24 blind and visually impaired users in a multistory building show the system's effectiveness in providing appropriate navigation instructions to these users. The uniqueness of our system is that it is affordable and that its design follows orientation and mobility principles. We hope that PERCEPT will become a standard deployed in all indoor public spaces, especially in healthcare and wellness facilities. PMID:23316225

  2. Developmental visual perception deficits with no indications of prosopagnosia in a child with abnormal eye movements.

    PubMed

    Gilaie-Dotan, Sharon; Doron, Ravid

    2017-06-01

    Visual categories are associated with eccentricity biases in high-order visual cortex: faces and reading with foveally-biased regions, common objects and space with mid- and peripherally-biased regions. As face perception and reading are among the most challenging human visual skills, and are often regarded as the peak achievements of a distributed neural network supporting common object perception, it is unclear why objects, which also rely on foveal vision to be processed, are associated with a mid-peripheral rather than a foveal bias. Here, we studied BN, a 9-year-old boy who has normal basic-level vision and abnormal (limited) oculomotor pursuit and saccades, and who shows developmental object and contour integration deficits but no indication of prosopagnosia. Although we cannot infer causation from the data presented here, we suggest that normal pursuit and saccades could be critical for the development of contour integration and object perception. While faces, and perhaps reading, when fixated upon, take up a small portion of the central visual field and require only small eye movements to be properly processed, common objects typically prevail in the mid-peripheral visual field and rely on longer-distance voluntary eye movements, such as saccades, to be brought to fixation. While retinal information feeds into early visual cortex in an eccentricity-ordered manner, we hypothesize that propagation of non-foveal information to mid- and high-order visual cortex critically relies on circuitry involving eye movements. Limited or atypical eye movements, as in the case of BN, may hinder normal information flow to mid-eccentricity-biased high-order visual cortex, adversely affecting its development and consequently inducing visual perceptual deficits predominantly for categories associated with these regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Micro-Valences: Perceiving Affective Valence in Everyday Objects

    PubMed Central

    Lebrecht, Sophie; Bar, Moshe; Barrett, Lisa Feldman; Tarr, Michael J.

    2012-01-01

    Perceiving the affective valence of objects influences how we think about and react to the world around us. Conversely, the speed and quality with which we visually recognize objects in a scene can vary dramatically depending on that scene’s affective content. Although typical visual scenes contain mostly “everyday” objects, affect perception in visual objects has been studied using somewhat atypical stimuli with strong affective valences (e.g., guns or roses). Here we explore whether affective valence must be strong or overt to exert an effect on our visual perception. We conclude that everyday objects carry subtle affective valences – “micro-valences” – which are intrinsic to their perceptual representation. PMID:22529828

  4. Colour Terms Affect Detection of Colour and Colour-Associated Objects Suppressed from Visual Awareness

    PubMed Central

    Forder, Lewis; Taylor, Olivia; Mankin, Helen; Scott, Ryan B.; Franklin, Anna

    2016-01-01

    The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this paradigm and further investigated whether colour terms likewise influence the detection of colours or colour-associated object images suppressed from visual awareness by continuous flash suppression. This method presents rapidly changing visual noise to one eye while the target stimulus is presented to the other. It has been shown to delay conscious perception of a target for up to several minutes. In Experiment 1 we presented greyscale photos of objects. They were either preceded by a congruent object label, an incongruent label, or white noise. Detection sensitivity (d’) and hit rates were significantly poorer for suppressed objects preceded by an incongruent label compared to a congruent label or noise. In Experiment 2, targets were coloured discs preceded by a colour term. Detection sensitivity was significantly worse for suppressed colour patches preceded by an incongruent colour term as compared to a congruent term or white noise. In Experiment 3 targets were suppressed greyscale object images preceded by an auditory presentation of a colour term. On congruent trials the colour term matched the object’s stereotypical colour and on incongruent trials the colour term mismatched. Detection sensitivity was significantly poorer on incongruent trials than congruent trials. Overall, these findings suggest that colour terms affect awareness of coloured stimuli and colour-associated objects, and provide new evidence for language-perception interaction in the brain. PMID:27023274

  5. Colour Terms Affect Detection of Colour and Colour-Associated Objects Suppressed from Visual Awareness.

    PubMed

    Forder, Lewis; Taylor, Olivia; Mankin, Helen; Scott, Ryan B; Franklin, Anna

    2016-01-01

    The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this paradigm and further investigated whether colour terms likewise influence the detection of colours or colour-associated object images suppressed from visual awareness by continuous flash suppression. This method presents rapidly changing visual noise to one eye while the target stimulus is presented to the other. It has been shown to delay conscious perception of a target for up to several minutes. In Experiment 1 we presented greyscale photos of objects. They were either preceded by a congruent object label, an incongruent label, or white noise. Detection sensitivity (d') and hit rates were significantly poorer for suppressed objects preceded by an incongruent label compared to a congruent label or noise. In Experiment 2, targets were coloured discs preceded by a colour term. Detection sensitivity was significantly worse for suppressed colour patches preceded by an incongruent colour term as compared to a congruent term or white noise. In Experiment 3 targets were suppressed greyscale object images preceded by an auditory presentation of a colour term. On congruent trials the colour term matched the object's stereotypical colour and on incongruent trials the colour term mismatched. Detection sensitivity was significantly poorer on incongruent trials than congruent trials. Overall, these findings suggest that colour terms affect awareness of coloured stimuli and colour-associated objects, and provide new evidence for language-perception interaction in the brain.

  6. A Critical Review of the "Motor-Free Visual Perception Test-Fourth Edition" (MVPT-4)

    ERIC Educational Resources Information Center

    Brown, Ted; Peres, Lisa

    2018-01-01

    The "Motor-Free Visual Perception Test-fourth edition" (MVPT-4) is a revised version of the "Motor-Free Visual Perception Test-third edition." The MVPT-4 is used to assess the visual-perceptual ability of individuals aged 4.0 through 80+ years via a series of visual-perceptual tasks that do not require a motor response. Test…

  7. Visual attention to and perception of undamaged and damaged versions of natural and colored female hair.

    PubMed

    Fink, Bernhard; Neuser, Frauke; Deloux, Gwenelle; Röder, Susanne; Matts, Paul J

    2013-03-01

    Female hair color is thought to influence physical attractiveness, and although there is some evidence for this assertion, research has not yet addressed whether and how physical damage affects the perception of female hair color. Here we investigate whether people are sensitive (in terms of visual attention and age, health, and attractiveness perception) to subtle differences in images of natural and colored hair before and after physical damaging. We tracked the eye-gaze of 50 men and 50 women aged 31-50 years whilst they viewed randomized pairs of images of 20 natural and 20 colored hair tresses, each pair displaying the same tress before and after controlled cuticle damage. The hair images were then rated for perceived health, attractiveness, and age. Undamaged versions of natural and colored hair were perceived as significantly younger, healthier, and more attractive than corresponding damaged versions. Visual attention to images of undamaged colored hair was significantly higher compared with their damaged counterparts, while in natural hair, the opposite pattern was found. We argue that the divergence in visual attention to undamaged colored female hair and damaged natural female hair, and the associated ratings, is due to differences in social perception, and we discuss the source of the apparent visual difference between undamaged and damaged hair. © 2013 Wiley Periodicals, Inc.

  8. Gestalt perception modulates early visual processing.

    PubMed

    Herrmann, C S; Bosch, V

    2001-04-17

    We examined whether early visual processing reflects perceptual properties of a stimulus in addition to physical features. We recorded event-related potentials (ERPs) of 13 subjects in a visual classification task. We used four different stimuli which were all composed of four identical elements. One of the stimuli constituted an illusory Kanizsa square; another was composed of the same number of collinear line segments, but the elements did not form a Gestalt. In addition, a target and a control stimulus were used which were arranged differently. These stimuli allow us to differentiate the processing of collinear line elements (stimulus features) and illusory figures (perceptual properties). The visual N170 in response to the illusory figure was significantly larger than that in response to the other collinear stimulus. This is taken to indicate that the visual N170 reflects cognitive processes of Gestalt perception in addition to attentional processes and physical stimulus properties.

  9. Audio-Visual Speech Perception Is Special

    ERIC Educational Resources Information Center

    Tuomainen, J.; Andersen, T.S.; Tiippana, K.; Sams, M.

    2005-01-01

    In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and…

  10. Perception of the dynamic visual vertical during sinusoidal linear motion.

    PubMed

    Pomante, A; Selen, L P J; Medendorp, W P

    2017-10-01

    The vestibular system provides information for spatial orientation. However, this information is ambiguous: because the otoliths sense the gravitoinertial force, they cannot distinguish gravitational and inertial components. As a consequence, prolonged linear acceleration of the head can be interpreted as tilt, referred to as the somatogravic effect. Previous modeling work suggests that the brain disambiguates the otolith signal according to the rules of Bayesian inference, combining noisy canal cues with the a priori assumption that prolonged linear accelerations are unlikely. Within this modeling framework the noise of the vestibular signals affects the dynamic characteristics of the tilt percept during linear whole-body motion. To test this prediction, we devised a novel paradigm to psychometrically characterize the dynamic visual vertical-as a proxy for the tilt percept-during passive sinusoidal linear motion along the interaural axis (0.33 Hz motion frequency, 1.75 m/s² peak acceleration, 80 cm displacement). While subjects (n = 10) kept fixation on a central body-fixed light, a line was briefly flashed (5 ms) at different phases of the motion, the orientation of which had to be judged relative to gravity. Consistent with the model's prediction, subjects showed a phase-dependent modulation of the dynamic visual vertical, with a subject-specific phase shift with respect to the imposed acceleration signal. The magnitude of this modulation was smaller than predicted, suggesting a contribution of nonvestibular signals to the dynamic visual vertical. Despite their dampening effect, our findings may point to a link between the noise components in the vestibular system and the characteristics of dynamic visual vertical. NEW & NOTEWORTHY A fundamental question in neuroscience is how the brain processes vestibular signals to infer the orientation of the body and objects in space. We show that, under sinusoidal linear motion, systematic error patterns appear in the
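    The somatogravic ambiguity described above has a simple geometric core: because the otoliths sense the gravitoinertial sum, a sustained lateral acceleration a is indistinguishable from a head tilt of arctan(a/g). A minimal numeric sketch of that relation (the function name and g value are illustrative, not from the paper):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def somatogravic_tilt_deg(lateral_acceleration):
    """Tilt angle (degrees) whose gravity component would mimic the
    gravitoinertial force produced by the given lateral acceleration."""
    return math.degrees(math.atan2(lateral_acceleration, G))
```

    At the study's 1.75 m/s² peak acceleration this works out to roughly a 10° equivalent tilt, the scale of visual-vertical modulation one would expect if the acceleration were fully attributed to tilt.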

  11. Perception of the Auditory-Visual Illusion in Speech Perception by Children with Phonological Disorders

    ERIC Educational Resources Information Center

    Dodd, Barbara; McIntosh, Beth; Erdener, Dogu; Burnham, Denis

    2008-01-01

    An example of the auditory-visual illusion in speech perception, first described by McGurk and MacDonald, is the perception of [ta] when listeners hear [pa] in synchrony with the lip movements for [ka]. One account of the illusion is that lip-read and heard speech are combined in an articulatory code since people who mispronounce words respond…

  12. Experience, Context, and the Visual Perception of Human Movement

    ERIC Educational Resources Information Center

    Jacobs, Alissa; Pinto, Jeannine; Shiffrar, Maggie

    2004-01-01

    Why are human observers particularly sensitive to human movement? Seven experiments examined the roles of visual experience and motor processes in human movement perception by comparing visual sensitivities to point-light displays of familiar, unusual, and impossible gaits across gait-speed and identity discrimination tasks. In both tasks, visual…

  13. Visual perception of fatigued lifting actions.

    PubMed

    Fischer, Steven L; Albert, Wayne J; McGarry, Tim

    2012-12-01

    Fatigue-related changes in lifting kinematics may expose workers to undue injury risks. Early detection of accumulating fatigue offers the prospect of intervention strategies to mitigate such fatigue-related risks. In a first step towards this objective, this study investigated whether fatigue detection was accessible to visual perception and, if so, what key visual information was required for successful fatigue discrimination. Eighteen participants were tasked with identifying fatigued lifts when viewing 24 trials presented using both video and point-light representations. Each trial comprised a pair of lifting actions containing a fresh and a fatigued lift from the same individual, presented in counter-balanced sequence. Confidence intervals demonstrated that the frequency of correct responses for both sexes exceeded chance expectations (50%) for both video (68%±12%) and point-light representations (67%±10%), demonstrating that fatigued lifting kinematics are open to visual perception. There were no significant differences between sexes or viewing conditions, the latter result indicating that kinematic dynamics provide sufficient information for successful fatigue discrimination. Moreover, results from a single-viewer investigation showed fatigue detection (75%) from point-light information describing only the kinematics of the box lifted. These preliminary findings may have important workplace applications if fatigue discrimination rates can be improved upon through future research. Copyright © 2012 Elsevier B.V. All rights reserved.
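    The comparison against chance above rests on a confidence interval for a binomial proportion. A minimal sketch of that check, using the normal-approximation 95% interval (the paper's exact interval method is not stated here, and the pooled trial count in the example is hypothetical):

```python
import math

def proportion_ci(successes, trials, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / trials
    half_width = z * math.sqrt(p * (1 - p) / trials)
    return p - half_width, p + half_width

def exceeds_chance(successes, trials, chance=0.5):
    """True if the whole interval sits above the chance level."""
    lower, _ = proportion_ci(successes, trials)
    return lower > chance
```

    For example, 68% correct over a hypothetical 432 pooled judgments gives an interval of roughly (0.64, 0.72), clearing the 50% chance level; the same rate over far fewer trials would not.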

  14. A Dynamic Systems Theory Model of Visual Perception Development

    ERIC Educational Resources Information Center

    Coté, Carol A.

    2015-01-01

    This article presents a model for understanding the development of visual perception from a dynamic systems theory perspective. It contrasts to a hierarchical or reductionist model that is often found in the occupational therapy literature. In this proposed model vision and ocular motor abilities are not foundational to perception, they are seen…

  15. Anxiety affects the amplitudes of red and green color-elicited flash visual evoked potentials in humans.

    PubMed

    Hosono, Yuki; Kitaoka, Kazuyoshi; Urushihara, Ryo; Séi, Hiroyoshi; Kinouchi, Yohsuke

    2014-01-01

    It has been reported that negative emotional changes and conditions affect the visual faculties of humans at the neural level. However, the effects of emotion specifically on color perception, as assessed with evoked potentials, are unknown. In the present study, we investigated whether different anxiety levels affect color information processing for each of 3 wavelengths by using flash visual evoked potentials (FVEPs) and the State-Trait Anxiety Inventory. Significant positive correlations were observed between FVEP amplitudes and state or trait anxiety scores for the long (sensed as red) and middle (sensed as green) wavelengths. In contrast, short-wavelength-evoked FVEPs were not correlated with anxiety level. Our results suggest that negative emotional conditions may affect color sense processing in humans.

  16. Neural pathways for visual speech perception

    PubMed Central

    Bernstein, Lynne E.; Liebenthal, Einat

    2014-01-01

    This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611

  17. Working memory can enhance unconscious visual perception.

    PubMed

    Pan, Yi; Cheng, Qiu-Ping; Luo, Qian-Ying

    2012-06-01

    We demonstrate that unconscious processing of a stimulus property can be enhanced when there is a match between the contents of working memory and the stimulus presented in the visual field. Participants first held a cue (a colored circle) in working memory and then searched for a brief masked target shape presented simultaneously with a distractor shape. When participants reported having no awareness of the target shape at all, search performance was more accurate in the valid condition, where the target matched the cue in color, than in the neutral condition, where the target mismatched the cue. This effect cannot be attributed to bottom-up perceptual priming from the presentation of a memory cue, because unconscious perception was not enhanced when the cue was merely perceptually identified but not actively held in working memory. These findings suggest that reentrant feedback from the contents of working memory modulates unconscious visual perception.

  18. Prefrontal cortex modulates posterior alpha oscillations during top-down guided visual perception

    PubMed Central

    Helfrich, Randolph F.; Huang, Melody; Wilson, Guy; Knight, Robert T.

    2017-01-01

    Conscious visual perception is proposed to arise from the selective synchronization of functionally specialized but widely distributed cortical areas. It has been suggested that different frequency bands index distinct canonical computations. Here, we probed visual perception on a fine-grained temporal scale to study the oscillatory dynamics supporting prefrontal-dependent sensory processing. We tested whether a predictive context that was embedded in a rapid visual stream modulated the perception of a subsequent near-threshold target. The rapid stream was presented either rhythmically at 10 Hz, to entrain parietooccipital alpha oscillations, or arrhythmically. We identified a 2- to 4-Hz delta signature that modulated posterior alpha activity and behavior during predictive trials. Importantly, delta-mediated top-down control diminished the behavioral effects of bottom-up alpha entrainment. Simultaneous source-reconstructed EEG and cross-frequency directionality analyses revealed that this delta activity originated from prefrontal areas and modulated posterior alpha power. Taken together, this study presents converging behavioral and electrophysiological evidence for frontal delta-mediated top-down control of posterior alpha activity, selectively facilitating visual perception. PMID:28808023

  19. Neocortical Rebound Depolarization Enhances Visual Perception

    PubMed Central

    Funayama, Kenta; Ban, Hiroshi; Chan, Allen W.; Matsuki, Norio; Murphy, Timothy H.; Ikegaya, Yuji

    2015-01-01

    Animals are constantly exposed to the time-varying visual world. Because visual perception is modulated by immediately prior visual experience, visual cortical neurons may register recent visual history into a specific form of offline activity and link it to later visual input. To examine how preceding visual inputs interact with upcoming information at the single neuron level, we designed a simple stimulation protocol in which a brief, orientated flashing stimulus was subsequently coupled to visual stimuli with identical or different features. Using in vivo whole-cell patch-clamp recording and functional two-photon calcium imaging from the primary visual cortex (V1) of awake mice, we discovered that a flash of sinusoidal grating per se induces an early, transient activation as well as a long-delayed reactivation in V1 neurons. This late response, which started hundreds of milliseconds after the flash and persisted for approximately 2 s, was also observed in human V1 electroencephalogram. When another drifting grating stimulus arrived during the late response, the V1 neurons exhibited a sublinear, but apparently increased response, especially to the same grating orientation. In behavioral tests of mice and humans, the flashing stimulation enhanced the detection power of the identically orientated visual stimulation only when the second stimulation was presented during the time window of the late response. Therefore, V1 late responses likely provide a neural basis for admixing temporally separated stimuli and extracting identical features in time-varying visual environments. PMID:26274866

  20. The role of vision in auditory distance perception.

    PubMed

    Calcagno, Esteban R; Abregú, Ezequiel L; Eguía, Manuel C; Vergara, Ramiro

    2012-01-01

In humans, multisensory interaction is an important strategy for improving the detection of stimuli of different natures and for reducing response variability. It is known that the presence of visual information affects auditory perception in the horizontal plane (azimuth), but little research has examined the influence of vision on auditory distance perception. In general, the data from these studies are contradictory and do not fully define how visual cues affect the apparent distance of a sound source. Here, psychophysical experiments on auditory distance perception in humans were performed, including and excluding visual cues. The results show that the apparent distance of the source is affected by the presence of visual information and that subjects can store in memory a representation of the environment that later improves the perception of distance.

  1. Visual-motor integration, visual perception, and fine motor coordination in a population of children with high levels of Fetal Alcohol Spectrum Disorder.

    PubMed

    Doney, Robyn; Lucas, Barbara R; Watkins, Rochelle E; Tsang, Tracey W; Sauer, Kay; Howat, Peter; Latimer, Jane; Fitzpatrick, James P; Oscar, June; Carter, Maureen; Elliott, Elizabeth J

    2016-08-01

Visual-motor integration (VMI) skills are essential for successful academic performance, but to date no studies have assessed these skills in a population-based cohort of Australian Aboriginal children who, like many children in other remote, disadvantaged communities, consistently underperform academically. Furthermore, many children in remote areas of Australia have prenatal alcohol exposure (PAE) and Fetal Alcohol Spectrum Disorder (FASD), which are often associated with VMI deficits. VMI, visual perception, and fine motor coordination were assessed using The Beery-Buktenica Developmental Test of Visual-Motor Integration, including its associated subtests of Visual Perception and Fine Motor Coordination, in a cohort of predominantly Australian Aboriginal children (7.5-9.6 years, n=108) in remote Western Australia to explore whether PAE adversely affected test performance. Cohort results were reported, and comparisons were made between children i) without PAE; ii) with PAE (no FASD); and iii) with FASD. The prevalence of moderate (≤16th percentile) and severe (≤2nd percentile) impairment was established. Mean VMI scores were 'below average' (M=87.8±9.6), and visual perception scores were 'average' (M=97.6±12.5), with no differences between groups. Few children had severe VMI impairment (1.9%), but moderate impairment rates were high (47.2%). Children with FASD had significantly lower fine motor coordination scores and higher moderate impairment rates (M=87.9±12.5; 66.7%) than children without PAE (M=95.1±10.7; 23.3%) and with PAE (no FASD) (M=96.1±10.9; 15.4%). Aboriginal children living in remote Western Australia have poor VMI skills regardless of PAE or FASD. Children with FASD additionally had fine motor coordination problems. VMI and fine motor coordination should be assessed in children with PAE, and included in FASD diagnostic assessments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Object perception is selectively slowed by a visually similar working memory load.

    PubMed

    Robinson, Alan; Manzi, Alberto; Triesch, Jochen

    2008-12-22

    The capacity of visual working memory has been extensively characterized, but little work has investigated how occupying visual memory influences other aspects of cognition and perception. Here we show a novel effect: maintaining an item in visual working memory slows processing of similar visual stimuli during the maintenance period. Subjects judged the gender of computer rendered faces or the naturalness of body postures while maintaining different visual memory loads. We found that when stimuli of the same class (faces or bodies) were maintained in memory, perceptual judgments were slowed. Interestingly, this is the opposite of what would be predicted from traditional priming. Our results suggest there is interference between visual working memory and perception, caused by visual similarity between new perceptual input and items already encoded in memory.

  3. Enhanced visual awareness for morality and pajamas? Perception vs. memory in 'top-down' effects.

    PubMed

    Firestone, Chaz; Scholl, Brian J

    2015-03-01

    A raft of prominent findings has revived the notion that higher-level cognitive factors such as desire, meaning, and moral relevance can directly affect what we see. For example, under conditions of brief presentation, morally relevant words reportedly "pop out" and are easier to identify than morally irrelevant words. Though such results purport to show that perception itself is sensitive to such factors, much of this research instead demonstrates effects on visual recognition--which necessarily involves not only visual processing per se, but also memory retrieval. Here we report three experiments which suggest that many alleged top-down effects of this sort are actually effects on 'back-end' memory rather than 'front-end' perception. In particular, the same methods used to demonstrate popout effects for supposedly privileged stimuli (such as morality-related words, e.g. "punishment" and "victim") also yield popout effects for unmotivated, superficial categories (such as fashion-related words, e.g. "pajamas" and "stiletto"). We conclude that such effects reduce to well-known memory processes (in this case, semantic priming) that do not involve morality, and have no implications for debates about whether higher-level factors influence perception. These case studies illustrate how it is critical to distinguish perception from memory in alleged 'top-down' effects. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Influence of visual path information on human heading perception during rotation.

    PubMed

    Li, Li; Chen, Jing; Peng, Xiaozhe

    2009-03-31

    How does visual path information influence people's perception of their instantaneous direction of self-motion (heading)? We have previously shown that humans can perceive heading without direct access to visual path information. Here we vary two key parameters for estimating heading from optic flow, the field of view (FOV) and the depth range of environmental points, to investigate the conditions under which visual path information influences human heading perception. The display simulated an observer traveling on a circular path. Observers used a joystick to rotate their line of sight until deemed aligned with true heading. Four FOV sizes (110 x 94 degrees, 48 x 41 degrees, 16 x 14 degrees, 8 x 7 degrees) and depth ranges (6-50 m, 6-25 m, 6-12.5 m, 6-9 m) were tested. Consistent with our computational modeling results, heading bias increased with the reduction of FOV or depth range when the display provided a sequence of velocity fields but no direct path information. When the display provided path information, heading bias was not influenced as much by the reduction of FOV or depth range. We conclude that human heading and path perception involve separate visual processes. Path helps heading perception when the display does not contain enough optic-flow information for heading estimation during rotation.

  5. Audio-visual speech perception: a developmental ERP investigation

    PubMed Central

    Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  6. How do musical tonality and experience affect visual working memory?

    PubMed

    Yang, Hua; Lu, Jing; Gong, Diankun; Yao, Dezhong

    2016-01-20

The influence of music on the human brain has continued to attract increasing attention from neuroscientists and musicologists. Currently, tonal music is widely present in people's daily lives; however, atonal music has gradually become an important part of modern music. In this study, we conducted two experiments: the first tested for differences in the perceived distractibility of tonal versus atonal music. The second experiment tested how tonal and atonal music affect visual working memory by comparing musicians and nonmusicians who were placed in contexts with background tonal music, atonal music, or silence. They were instructed to complete a delay matching memory task. The results show that musicians and nonmusicians evaluate the distractibility of tonal and atonal music differently, possibly indicating that long-term training leads to a higher auditory perception threshold among musicians. For the working memory task, musicians reacted faster than nonmusicians in all background music cases, and musicians took more time to respond in the tonal background music condition than in the other conditions. Therefore, our results suggest that for a visual memory task, background tonal music may occupy more cognitive resources than atonal music or silence for musicians, leaving fewer resources for the memory task. Moreover, the musicians outperformed the nonmusicians because of their higher sensitivity to background music, although this needs to be confirmed in a further longitudinal study.

  7. Accuracy aspects of stereo side-looking radar. [analysis of its visual perception and binocular vision

    NASA Technical Reports Server (NTRS)

    Leberl, F. W.

    1979-01-01

    The geometry of the radar stereo model and factors affecting visual radar stereo perception are reviewed. Limits to the vertical exaggeration factor of stereo radar are defined. Radar stereo model accuracies are analyzed with respect to coordinate errors caused by errors of radar sensor position and of range, and with respect to errors of coordinate differences, i.e., cross-track distances and height differences.

  8. Visual exposure to large and small portion sizes and perceptions of portion size normality: Three experimental studies.

    PubMed

    Robinson, Eric; Oldham, Melissa; Cuckson, Imogen; Brunstrom, Jeffrey M; Rogers, Peter J; Hardman, Charlotte A

    2016-03-01

Portion sizes of many foods have increased in recent times. In three studies we examined the effect that repeated visual exposure to larger versus smaller food portion sizes has on perceptions of what constitutes a normal-sized food portion and measures of portion size selection. In studies 1 and 2 participants were visually exposed to images of large or small portions of spaghetti bolognese, before making evaluations about an image of an intermediate sized portion of the same food. In study 3 participants were exposed to images of large or small portions of a snack food before selecting a portion size of snack food to consume. Across the three studies, visual exposure to larger as opposed to smaller portion sizes resulted in participants considering a normal portion of food to be larger than a reference intermediate sized portion. In studies 1 and 2 visual exposure to larger portion sizes also increased the size of self-reported ideal meal size. In study 3 visual exposure to larger portion sizes of a snack food did not affect how much of that food participants subsequently served themselves and ate. Visual exposure to larger portion sizes may adjust visual perceptions of what constitutes a 'normal' sized portion. However, we did not find evidence that visual exposure to larger portions altered snack food intake. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Ventral aspect of the visual form pathway is not critical for the perception of biological motion

    PubMed Central

    Gilaie-Dotan, Sharon; Saygin, Ayse Pinar; Lorenzi, Lauren J.; Rees, Geraint; Behrmann, Marlene

    2015-01-01

    Identifying the movements of those around us is fundamental for many daily activities, such as recognizing actions, detecting predators, and interacting with others socially. A key question concerns the neurobiological substrates underlying biological motion perception. Although the ventral “form” visual cortex is standardly activated by biologically moving stimuli, whether these activations are functionally critical for biological motion perception or are epiphenomenal remains unknown. To address this question, we examined whether focal damage to regions of the ventral visual cortex, resulting in significant deficits in form perception, adversely affects biological motion perception. Six patients with damage to the ventral cortex were tested with sensitive point-light display paradigms. All patients were able to recognize unmasked point-light displays and their perceptual thresholds were not significantly different from those of three different control groups, one of which comprised brain-damaged patients with spared ventral cortex (n > 50). Importantly, these six patients performed significantly better than patients with damage to regions critical for biological motion perception. To assess the necessary contribution of different regions in the ventral pathway to biological motion perception, we complement the behavioral findings with a fine-grained comparison between the lesion location and extent, and the cortical regions standardly implicated in biological motion processing. This analysis revealed that the ventral aspects of the form pathway (e.g., fusiform regions, ventral extrastriate body area) are not critical for biological motion perception. We hypothesize that the role of these ventral regions is to provide enhanced multiview/posture representations of the moving person rather than to represent biological motion perception per se. PMID:25583504

  10. Visual perception affected by motivation and alertness controlled by a noninvasive brain-computer interface.

    PubMed

    Maksimenko, Vladimir A; Runnova, Anastasia E; Zhuravlev, Maksim O; Makarov, Vladimir V; Nedayvozov, Vladimir; Grubov, Vadim V; Pchelintceva, Svetlana V; Hramov, Alexander E; Pisarchik, Alexander N

    2017-01-01

The influence of motivation and alertness on brain activity associated with visual perception was studied experimentally using the Necker cube, whose ambiguity was controlled by the contrast of its ribs. Wavelet analysis of recorded multichannel electroencephalograms (EEG) allowed us to distinguish two different scenarios while the brain processed the ambiguous stimulus. The first scenario is characterized by a partial destruction of the alpha rhythm (8-12 Hz) with a simultaneous increase in beta-wave activity (20-30 Hz), whereas in the second scenario, the beta rhythm is not well pronounced while the alpha-wave energy remains unchanged. The experiments were carried out with a group of financially motivated subjects and another group of unpaid volunteers. It was found that the first scenario occurred mainly in the motivated group. This can be explained by the increased alertness of the motivated subjects. The prevalence of the first scenario was also observed in a group of subjects to whom images with higher ambiguity were presented. We believe that the revealed scenarios can occur not only during the perception of bistable images, but also in other perceptual tasks requiring decision making. The results may have important applications for monitoring and controlling human alertness in situations requiring substantial attention. On the basis of these results, we built a brain-computer interface to estimate and control the degree of alertness in real time.
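The two scenarios in this record are distinguished by comparing spectral power in fixed EEG bands. As a minimal illustration (not the authors' wavelet pipeline; the synthetic signal, sampling rate, and amplitudes below are invented), band power can be estimated from a simple periodogram:

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Estimate power of `signal` within [low, high] Hz from the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].sum()

# Synthetic 1-s EEG-like trace: strong 10 Hz alpha plus weak 25 Hz beta.
fs = 250
t = np.arange(fs) / fs
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)

alpha = band_power(eeg, fs, 8, 12)   # alpha rhythm, 8-12 Hz
beta = band_power(eeg, fs, 20, 30)   # beta rhythm, 20-30 Hz
```

Tracking how `alpha` and `beta` evolve over sliding windows is one simple way to separate an alpha-destruction/beta-increase scenario from a stable-alpha scenario.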

  11. Enhanced visual acuity and image perception following correction of highly aberrated eyes using an adaptive optics visual simulator.

    PubMed

    Rocha, Karolinne Maia; Vabre, Laurent; Chateau, Nicolas; Krueger, Ronald R

    2010-01-01

To evaluate the changes in visual acuity and visual perception generated by correcting higher order aberrations in highly aberrated eyes using a large-stroke adaptive optics visual simulator. A crx1 Adaptive Optics Visual Simulator (Imagine Eyes) was used to correct and modify the wavefront aberrations in 12 keratoconic eyes and 8 symptomatic postoperative refractive surgery (LASIK) eyes. After measuring ocular aberrations, the device was programmed to compensate for the eye's wavefront error from the second order to the fifth order (6-mm pupil). Visual acuity was assessed through the adaptive optics system using computer-generated ETDRS optotypes and the Freiburg Visual Acuity and Contrast Test. Mean higher order aberration root-mean-square (RMS) errors in the keratoconus and symptomatic LASIK eyes were 1.88+/-0.99 microm and 1.62+/-0.79 microm (6-mm pupil), respectively. The visual simulator correction of the higher order aberrations present in the keratoconus eyes improved their visual acuity by a mean of 2 lines when compared to their best spherocylinder correction (mean decimal visual acuity with spherocylindrical correction was 0.31+/-0.18 and improved to 0.44+/-0.23 with higher order aberration correction). In the symptomatic LASIK eyes, the mean decimal visual acuity with spherocylindrical correction improved from 0.54+/-0.16 to 0.71+/-0.13 with higher order aberration correction. The visual perception of ETDRS letters was improved when correcting higher order aberrations. The adaptive optics visual simulator can effectively measure and compensate for higher order aberrations (second to fifth order), which are associated with diminished visual acuity and perception in highly aberrated eyes. The adaptive optics technology may be of clinical benefit when counseling patients with highly aberrated eyes regarding their maximum subjective potential for vision correction. Copyright 2010, SLACK Incorporated.
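The higher-order RMS values quoted in this record follow a standard identity: with OSA-normalized Zernike coefficients, the RMS wavefront error is the root-sum-square of the coefficients. A small sketch with hypothetical coefficients (the values below are illustrative, not from the study):

```python
import math

def rms_wavefront_error(zernike_coeffs_um):
    """RMS wavefront error (microns) from OSA-normalized Zernike coefficients.

    Under OSA normalization, each mode has unit RMS, so the total RMS
    is the root-sum-square of the coefficients. Restricting the list to
    3rd-order-and-above modes yields the higher-order (HOA) RMS.
    """
    return math.sqrt(sum(c * c for c in zernike_coeffs_um))

# Hypothetical higher-order coefficients (3rd-5th radial order) for one eye.
hoa_coeffs = [0.9, -0.7, 0.5, 0.3, -0.2]
hoa_rms = rms_wavefront_error(hoa_coeffs)  # on the order of the ~1.6-1.9 um reported
```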

  12. Continued use of an interactive computer game-based visual perception learning system in children with developmental delay.

    PubMed

    Lin, Hsien-Cheng; Chiu, Yu-Hsien; Chen, Yenming J; Wuang, Yee-Pay; Chen, Chiu-Ping; Wang, Chih-Chung; Huang, Chien-Ling; Wu, Tang-Meng; Ho, Wen-Hsien

    2017-11-01

    This study developed an interactive computer game-based visual perception learning system for special education children with developmental delay. To investigate whether perceived interactivity affects continued use of the system, this study developed a theoretical model of the process in which learners decide whether to continue using an interactive computer game-based visual perception learning system. The technology acceptance model, which considers perceived ease of use, perceived usefulness, and perceived playfulness, was extended by integrating perceived interaction (i.e., learner-instructor interaction and learner-system interaction) and then analyzing the effects of these perceptions on satisfaction and continued use. Data were collected from 150 participants (rehabilitation therapists, medical paraprofessionals, and parents of children with developmental delay) recruited from a single medical center in Taiwan. Structural equation modeling and partial-least-squares techniques were used to evaluate relationships within the model. The modeling results indicated that both perceived ease of use and perceived usefulness were positively associated with both learner-instructor interaction and learner-system interaction. However, perceived playfulness only had a positive association with learner-system interaction and not with learner-instructor interaction. Moreover, satisfaction was positively affected by perceived ease of use, perceived usefulness, and perceived playfulness. Thus, satisfaction positively affects continued use of the system. The data obtained by this study can be applied by researchers, designers of computer game-based learning systems, special education workers, and medical professionals. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Body ownership promotes visual awareness.

    PubMed

    van der Hoort, Björn; Reingardt, Maria; Ehrsson, H Henrik

    2017-08-17

    The sense of ownership of one's body is important for survival, e.g., in defending the body against a threat. However, in addition to affecting behavior, it also affects perception of the world. In the case of visuospatial perception, it has been shown that the sense of ownership causes external space to be perceptually scaled according to the size of the body. Here, we investigated the effect of ownership on another fundamental aspect of visual perception: visual awareness. In two binocular rivalry experiments, we manipulated the sense of ownership of a stranger's hand through visuotactile stimulation while that hand was one of the rival stimuli. The results show that ownership, but not mere visuotactile stimulation, increases the dominance of the hand percept. This effect is due to a combination of longer perceptual dominance durations and shorter suppression durations. Together, these results suggest that the sense of body ownership promotes visual awareness.

  14. Suggested Activities to Use With Children Who Present Symptoms of Visual Perception Problems, Elementary Level.

    ERIC Educational Resources Information Center

    Washington County Public Schools, Washington, PA.

    Symptoms displayed by primary age children with learning disabilities are listed; perceptual handicaps are explained. Activities are suggested for developing visual perception and perception involving motor activities. Also suggested are activities to develop body concept, visual discrimination and attentiveness, visual memory, and figure ground…

  15. The Perceptual Root of Object-Based Storage: An Interactive Model of Perception and Visual Working Memory

    ERIC Educational Resources Information Center

    Gao, Tao; Gao, Zaifeng; Li, Jie; Sun, Zhongqiang; Shen, Mowei

    2011-01-01

    Mainstream theories of visual perception assume that visual working memory (VWM) is critical for integrating online perceptual information and constructing coherent visual experiences in changing environments. Given the dynamic interaction between online perception and VWM, we propose that how visual information is processed during visual…

  16. Aesthetic perception of visual textures: a holistic exploration using texture analysis, psychological experiment, and perception modeling.

    PubMed

    Liu, Jianli; Lughofer, Edwin; Zeng, Xianyi

    2015-01-01

Modeling human aesthetic perception of visual textures is important and valuable in numerous industrial domains, such as product design, architectural design, and decoration. Based on results from a semantic differential rating experiment, we modeled the relationship between low-level basic texture features and aesthetic properties involved in human aesthetic texture perception. First, we compute basic texture features from textural images using four classical methods. These features are neutral, objective, and independent of the socio-cultural context of the visual textures. Then, we conduct a semantic differential rating experiment to collect evaluators' aesthetic perceptions of selected textural stimuli. In the semantic differential rating experiment, eight pairs of aesthetic properties are chosen, which are strongly related to the socio-cultural context of the selected textures and to human emotions. They are easily understood and connected to everyday life. We propose a hierarchical feed-forward layer model of aesthetic texture perception and assign the 8 pairs of aesthetic properties to different layers. Finally, we describe the generation of multiple linear and non-linear regression models for aesthetic prediction by taking dimensionality-reduced texture features and aesthetic properties of visual textures as independent and dependent variables, respectively. Our experimental results indicate that the relationships between each layer and its neighbors in the hierarchical feed-forward layer model of aesthetic texture perception can be fitted well by linear functions, and the models thus generated can successfully bridge the gap between computational texture features and aesthetic texture properties.
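The final modeling step described in this record, regressing aesthetic ratings on dimensionality-reduced texture features, can be sketched with ordinary least squares. The feature matrix, weights, and noise level below are synthetic stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 40 textures x 4 reduced features (e.g., contrast, coarseness).
X = rng.normal(size=(40, 4))
true_w = np.array([0.8, -0.5, 0.3, 0.0])
# Ratings on one aesthetic scale, generated from the features plus small noise.
ratings = X @ true_w + 0.05 * rng.normal(size=40)

# Fit a multiple linear regression (features -> rating) with an intercept term.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, ratings, rcond=None)
predicted = A @ coef
```

With clean data the recovered weights in `coef[:4]` track `true_w`; a non-linear variant would replace the least-squares fit with, e.g., a kernel or neural regressor over the same feature matrix.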

  17. Perceptual geometry of space and form: visual perception of natural scenes and their virtual representation

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.

    2001-11-01

    Perceptual geometry is an emerging field of interdisciplinary research whose objectives focus on study of geometry from the perspective of visual perception, and in turn, apply such geometric findings to the ecological study of vision. Perceptual geometry attempts to answer fundamental questions in perception of form and representation of space through synthesis of cognitive and biological theories of visual perception with geometric theories of the physical world. Perception of form and space are among fundamental problems in vision science. In recent cognitive and computational models of human perception, natural scenes are used systematically as preferred visual stimuli. Among key problems in perception of form and space, we have examined perception of geometry of natural surfaces and curves, e.g. as in the observer's environment. Besides a systematic mathematical foundation for a remarkably general framework, the advantages of the Gestalt theory of natural surfaces include a concrete computational approach to simulate or recreate images whose geometric invariants and quantities might be perceived and estimated by an observer. The latter is at the very foundation of understanding the nature of perception of space and form, and the (computer graphics) problem of rendering scenes to visually invoke virtual presence.

  18. Auditory enhancement of visual perception at threshold depends on visual abilities.

    PubMed

    Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène

    2011-06-17

Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events, is a long-standing debate. Here we revisit this question by testing the influence of auditory stimuli on visual detection threshold in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most effective at improving perceptual performance when an isolated modality is deficient. Copyright © 2011 Elsevier B.V. All rights reserved.
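Criterion-free threshold measurement of the kind mentioned in this record is typically done with an adaptive forced-choice staircase. Below is a minimal sketch of a standard 1-up/2-down staircase run against a hypothetical simulated observer; the rule, the observer model, and all parameters are illustrative choices, not necessarily the authors' procedure:

```python
import random

random.seed(7)

def observer(contrast):
    """Simulated 2AFC observer for one trial: chance (50%) at zero contrast,
    near-perfect above a hypothetical sensitivity limit of 0.4."""
    p_correct = 0.5 + 0.5 * min(contrast / 0.4, 1.0)
    return random.random() < p_correct

def staircase_threshold(trial, start=0.4, step=0.02, n_trials=400):
    """1-up/2-down staircase: two consecutive correct responses lower the
    stimulus level, one error raises it. The track converges near the
    70.7%-correct point of the psychometric function."""
    level, streak, history = start, 0, []
    for _ in range(n_trials):
        history.append(level)
        if trial(level):
            streak += 1
            if streak == 2:
                level = max(level - step, 0.0)
                streak = 0
        else:
            level += step
            streak = 0
    return sum(history[-100:]) / 100  # mean of late levels estimates threshold

threshold = staircase_threshold(observer)
```

Because the observer responds "which interval" rather than "seen/not seen", the estimate is largely independent of the subject's response criterion, which is the point of using such a procedure.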

  19. Adaptation to visual or auditory time intervals modulates the perception of visual apparent motion

    PubMed Central

    Zhang, Huihui; Chen, Lihan; Zhou, Xiaolin

    2012-01-01

    It is debated whether sub-second timing is subserved by a centralized mechanism or by the intrinsic properties of task-related neural activity in specific modalities (Ivry and Schlerf, 2008). By using a temporal adaptation task, we investigated whether adapting to different time intervals conveyed through stimuli in different modalities (i.e., frames of a visual Ternus display, visual blinking discs, or auditory beeps) would affect the subsequent implicit perception of visual timing, i.e., inter-stimulus interval (ISI) between two frames in a Ternus display. The Ternus display can induce two percepts of apparent motion (AM), depending on the ISI between the two frames: “element motion” for short ISIs, in which the endmost disc is seen as moving back and forth while the middle disc at the overlapping or central position remains stationary; “group motion” for longer ISIs, in which both discs appear to move in a manner of lateral displacement as a whole. In Experiment 1, participants adapted to either the typical “element motion” (ISI = 50 ms) or the typical “group motion” (ISI = 200 ms). In Experiments 2 and 3, participants adapted to a time interval of 50 or 200 ms through observing a series of two paired blinking discs at the center of the screen (Experiment 2) or hearing a sequence of two paired beeps (with pitch 1000 Hz). In Experiment 4, participants adapted to sequences of paired beeps with either low pitches (500 Hz) or high pitches (5000 Hz). After adaptation in each trial, participants were presented with a Ternus probe in which the ISI between the two frames was equal to the transitional threshold of the two types of motions, as determined by a pretest. Results showed that adapting to the short time interval in all the situations led to more reports of “group motion” in the subsequent Ternus probes; adapting to the long time interval, however, caused no aftereffect for visual adaptation but significantly more reports of group motion for
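The "transitional threshold" pretest referred to in this record amounts to finding the ISI at which "element motion" and "group motion" reports are equally likely. A minimal sketch, interpolating the 50% crossing from invented pretest proportions (the ISI values and report rates below are illustrative only):

```python
import numpy as np

# Hypothetical pretest data: ISI (ms) vs. proportion of "group motion" reports.
isi = np.array([50.0, 80.0, 110.0, 140.0, 170.0, 200.0])
p_group = np.array([0.05, 0.15, 0.35, 0.65, 0.85, 0.95])

# Transitional threshold: the ISI at which "group motion" is reported half the
# time, found by linear interpolation of the monotonic psychometric data.
threshold = float(np.interp(0.5, p_group, isi))
```

In practice a logistic psychometric function would usually be fitted instead of piecewise-linear interpolation, but the 50% point it yields plays the same role: it becomes the probe ISI used after adaptation.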

  20. Long-Lasting Enhancement of Visual Perception with Repetitive Noninvasive Transcranial Direct Current Stimulation

    PubMed Central

    Behrens, Janina R.; Kraft, Antje; Irlbacher, Kerstin; Gerhardt, Holger; Olma, Manuel C.; Brandt, Stephan A.

    2017-01-01

    Understanding processes performed by an intact visual cortex as the basis for developing methods that enhance or restore visual perception is of great interest to both researchers and medical practitioners. Here, we explore whether contrast sensitivity, a main function of the primary visual cortex (V1), can be improved in healthy subjects by repetitive, noninvasive anodal transcranial direct current stimulation (tDCS). Contrast perception was measured via threshold perimetry directly before and after intervention (tDCS or sham stimulation) on each day over 5 consecutive days (24 subjects, double-blind study). tDCS improved contrast sensitivity from the second day onwards, with significant effects lasting 24 h. After the last stimulation on day 5, the anodal group showed a significantly greater improvement in contrast perception than the sham group (23 vs. 5%). We found significant long-term effects in only the central 2–4° of the visual field 4 weeks after the last stimulation. We suspect a combination of two factors contributes to these lasting effects. First, the V1 area that represents the central retina was located closer to the polarization electrode, resulting in higher current density. Second, the central visual field is represented by a larger cortical area relative to the peripheral visual field (cortical magnification). This is the first study showing that tDCS over V1 enhances contrast perception in healthy subjects for several weeks. This study contributes to the investigation of the causal relationship between the external modulation of neuronal membrane potential and behavior (in our case, visual perception). Because the vast majority of human studies only show temporary effects after single tDCS sessions targeting the visual system, our study underpins the potential for lasting effects of repetitive tDCS-induced modulation of neuronal excitability. PMID:28860969

  1. A color fusion method of infrared and low-light-level images based on visual perception

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

Color fusion images, obtained by fusing infrared and low-light-level images, contain information from both sources and help observers understand multichannel imagery comprehensively. However, simple fusion may lose target information, because targets are often inconspicuous in long-distance infrared and low-light-level images; and if target extraction is applied blindly, perception of the scene information is seriously degraded. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are incorporated into traditional color fusion methods. The infrared and low-light-level color fusion images are achieved through efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm not only improve the detection rate of targets, but also retain rich natural information of the scenes.

  2. Seen, Unseen or Overlooked? How Can Visual Perception Develop through a Multimodal Enquiry?

    ERIC Educational Resources Information Center

    Payne, Rachel

    2012-01-01

    This article outlines an exploration into the development of visual perception through analysing the process of taking photographs of the mundane as small-scale research. A preoccupation with social construction of the visual lies at the heart of the investigation by correlating the perceptive process to Mitchell's (2002) counter thesis for visual…

  3. Producing Curious Affects: Visual Methodology as an Affecting and Conflictual Wunderkammer

    ERIC Educational Resources Information Center

    Staunaes, Dorthe; Kofoed, Jette

    2015-01-01

Digital video cameras, smartphones, the internet, and iPads are increasingly used as visual research methods with the purpose of creating an affective corpus of data. Such visual methods are often combined with interviews or observations. Not only are visual methods part of the research methods used; the visual products themselves are used as requisites in…

  4. Visual perception and frontal lobe in intellectual disabilities: a study with evoked potentials and neuropsychology.

    PubMed

    Muñoz-Ruata, J; Caro-Martínez, E; Martínez Pérez, L; Borja, M

    2010-12-01

Perception disorders are frequently observed in persons with intellectual disability (ID), and their influence on cognition has been discussed. The objective of this study is to clarify the mechanisms behind these alterations by analysing the early component of visual event-related potentials, the N1 wave, which is related to perception alterations in several pathologies. Additionally, the relationship between N1 and neuropsychological visual tests was studied with the aim of understanding its functional significance in persons with ID. A group of 69 subjects, with etiologically heterogeneous mild ID, performed an odd-ball task of active discrimination of geometric figures. N1a (frontal) and N1b (post-occipital) waves were obtained from the evoked potentials. The subjects also performed several neuropsychological tests. Only the N1a component, produced by the target stimulus, showed significant correlations with the visual integration, visual semantic association, and visual analogical reasoning tests, the Perceptual Reasoning Index (Wechsler Intelligence Scale for Children, Fourth Edition), and intelligence quotient. The systematic correlations of the target stimulus in perceptual ability tasks with the N1a (frontal) and not with the N1b (posterior) suggest that the visual perception process involves frontal participation. These correlations support the idea that the N1a and N1b are not equivalent. The relationship between frontal functions and early stages of visual perception is reviewed and discussed, as well as the frontal contribution to the neuropsychological tests used. A possible relationship between frontal activity dysfunction in ID and perceptive problems is suggested. The perceptive alterations observed in persons with ID could indeed be due to altered sensory areas, but also to a failure of frontal participation in perceptive processes conceived as elaborations inside reverberant circuits of perception-action. © 2010 The Authors.

  5. Body ownership promotes visual awareness

    PubMed Central

    Reingardt, Maria; Ehrsson, H Henrik

    2017-01-01

    The sense of ownership of one’s body is important for survival, e.g., in defending the body against a threat. However, in addition to affecting behavior, it also affects perception of the world. In the case of visuospatial perception, it has been shown that the sense of ownership causes external space to be perceptually scaled according to the size of the body. Here, we investigated the effect of ownership on another fundamental aspect of visual perception: visual awareness. In two binocular rivalry experiments, we manipulated the sense of ownership of a stranger’s hand through visuotactile stimulation while that hand was one of the rival stimuli. The results show that ownership, but not mere visuotactile stimulation, increases the dominance of the hand percept. This effect is due to a combination of longer perceptual dominance durations and shorter suppression durations. Together, these results suggest that the sense of body ownership promotes visual awareness. PMID:28826500

  6. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers.

    PubMed

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation.

  7. The hippocampus and visual perception

    PubMed Central

    Lee, Andy C. H.; Yeung, Lok-Kin; Barense, Morgan D.

    2012-01-01

    In this review, we will discuss the idea that the hippocampus may be involved in both memory and perception, contrary to theories that posit functional and neuroanatomical segregation of these processes. This suggestion is based on a number of recent neuropsychological and functional neuroimaging studies that have demonstrated that the hippocampus is involved in the visual discrimination of complex spatial scene stimuli. We argue that these findings cannot be explained by long-term memory or working memory processing or, in the case of patient findings, dysfunction beyond the medial temporal lobe (MTL). Instead, these studies point toward a role for the hippocampus in higher-order spatial perception. We suggest that the hippocampus processes complex conjunctions of spatial features, and that it may be more appropriate to consider the representations for which this structure is critical, rather than the cognitive processes that it mediates. PMID:22529794

  8. Impact of Language on Development of Auditory-Visual Speech Perception

    ERIC Educational Resources Information Center

    Sekiyama, Kaoru; Burnham, Denis

    2008-01-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various…

  9. Visual perception affected by motivation and alertness controlled by a noninvasive brain-computer interface

    PubMed Central

    Zhuravlev, Maksim O.; Makarov, Vladimir V.; Nedayvozov, Vladimir; Grubov, Vadim V.; Pchelintceva, Svetlana V.; Hramov, Alexander E.

    2017-01-01

The influence of motivation and alertness on brain activity associated with visual perception was studied experimentally using the Necker cube, whose ambiguity was controlled by the contrast of its edges. The wavelet analysis of recorded multichannel electroencephalograms (EEG) allowed us to distinguish two different scenarios while the brain processed the ambiguous stimulus. The first scenario is characterized by a partial suppression of the alpha rhythm (8–12 Hz) with a simultaneous increase in beta-wave activity (20–30 Hz), whereas in the second scenario the beta rhythm is not well pronounced while the alpha-wave energy remains unchanged. The experiments were carried out with a group of financially motivated subjects and another group of unpaid volunteers. It was found that the first scenario occurred mainly in the motivated group. This can be explained by the increased alertness of the motivated subjects. The prevalence of the first scenario was also observed in a group of subjects to whom images with higher ambiguity were presented. We believe that the revealed scenarios can occur not only during the perception of bistable images, but also in other perceptual tasks requiring decision making. The obtained results may have important applications for monitoring and controlling human alertness in situations which need substantial attention. On the basis of the obtained results, we built a brain-computer interface to estimate and control the degree of alertness in real time. PMID:29267295
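The study separates alpha (8–12 Hz) and beta (20–30 Hz) activity in the EEG via wavelet analysis. As a rough illustration of band-separated energy (a plain DFT band-power sketch on a synthetic signal, not the authors' wavelet method; the `band_power` helper is hypothetical):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum the DFT power over frequency bins falling within [f_lo, f_hi] Hz."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += re * re + im * im
    return power

fs = 250                                              # sampling rate in Hz, typical for EEG
eeg = [math.sin(2 * math.pi * 10 * t / fs)            # strong 10 Hz alpha-band component
       + 0.3 * math.sin(2 * math.pi * 25 * t / fs)    # weaker 25 Hz beta-band component
       for t in range(2 * fs)]                        # 2 s of synthetic "EEG"

alpha = band_power(eeg, fs, 8, 12)    # energy in the alpha band
beta = band_power(eeg, fs, 20, 30)    # energy in the beta band
```

In the study itself, the first (motivated) scenario showed the opposite balance: beta energy rising while alpha was suppressed.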

  10. The Developmental Test of Visual Perception-Third Edition (DTVP-3): A Review, Critique, and Practice Implications

    ERIC Educational Resources Information Center

    Brown, Ted; Murdolo, Yuki

    2015-01-01

    The "Developmental Test of Visual Perception-Third Edition" (DTVP-3) is a recent revision of the "Developmental Test of Visual Perception-Second Edition" (DTVP-2). The DTVP-3 is designed to assess the visual perceptual and/or visual-motor integration skills of children from 4 to 12 years of age. The test is standardized using…

  11. Spatial Context and Visual Perception for Action

    ERIC Educational Resources Information Center

    Coello, Yann

    2005-01-01

In this paper, evidence is reviewed that visuo-spatial perception in the peri-personal space is not an abstract, disembodied phenomenon but is instead shaped by action constraints. Locating a visual target with the intention of reaching it requires that the relevant spatial information be considered in relation to the body part that will be…

  12. 3D Shape Perception in Posterior Cortical Atrophy: A Visual Neuroscience Perspective.

    PubMed

    Gillebert, Céline R; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T; Orban, Guy A; Vandenberghe, Rik

    2015-09-16

    Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD exhibited severe 3D shape-processing deficits and AD to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction and most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial temporal system. 

  13. 3D Shape Perception in Posterior Cortical Atrophy: A Visual Neuroscience Perspective

    PubMed Central

    Gillebert, Céline R.; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T.; Orban, Guy A.

    2015-01-01

    Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD exhibited severe 3D shape-processing deficits and AD to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. SIGNIFICANCE STATEMENT Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction and most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial

  14. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers

    PubMed Central

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513

  15. A Role for Mouse Primary Visual Cortex in Motion Perception.

    PubMed

    Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo

    2018-06-04

    Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
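The random dot kinematograms used in this task set a coherence level: the fraction of dots carrying the signal direction, with the remainder moving randomly. A minimal sketch of that assignment (illustrative only; `rdk_directions` is a hypothetical helper, not the authors' stimulus code):

```python
import math
import random

def rdk_directions(n_dots, coherence, signal_dir=0.0, seed=42):
    """Assign each dot a motion direction (radians): with probability
    `coherence` the dot moves in the common signal direction; otherwise
    it is given a uniformly random direction."""
    rng = random.Random(seed)
    dirs = []
    for _ in range(n_dots):
        if rng.random() < coherence:
            dirs.append(signal_dir)
        else:
            dirs.append(rng.uniform(0.0, 2.0 * math.pi))
    return dirs

dirs = rdk_directions(1000, 0.16)            # a 16%-coherence RDK frame
signal_frac = dirs.count(0.0) / len(dirs)    # close to the nominal 0.16
```

At 16% coherence only a small minority of dots agree on a direction, which is why performance in the task hinges on integrating motion evidence over space and time.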

  16. Attention modulates perception of visual space

    PubMed Central

    Zhou, Liu; Deng, Chenglong; Ooi, Teng Leng; He, Zijiang J.

    2017-01-01

    Attention readily facilitates the detection and discrimination of objects, but it is not known whether it helps to form the vast volume of visual space that contains the objects and where actions are implemented. Conventional wisdom suggests not, given the effortless ease with which we perceive three-dimensional (3D) scenes on opening our eyes. Here, we show evidence to the contrary. In Experiment 1, the observer judged the location of a briefly presented target, placed either on the textured ground or ceiling surface. Judged location was more accurate for a target on the ground, provided that the ground was visible and that the observer directed attention to the lower visual field, not the upper field. This reveals that attention facilitates space perception with reference to the ground. Experiment 2 showed that judged location of a target in mid-air, with both ground and ceiling surfaces present, was more accurate when the observer directed their attention to the lower visual field; this indicates that the attention effect extends to visual space above the ground. These findings underscore the role of attention in anchoring visual orientation in space, which is arguably a primal event that enhances one’s ability to interact with objects and surface layouts within the visual space. The fact that the effect of attention was contingent on the ground being visible suggests that our terrestrial visual system is best served by its ecological niche. PMID:29177198

  17. Depth Perception in Visual Simulation.

    DTIC Science & Technology

    1980-08-01

effect. Journal of Experimental Psychology, 1953, 45, 205-217. Wallach, H., O'Connell, D. N., & Neisser, U. The memory effect of visual perception of… Wallach and O'Connell labelled this the Kinetic Depth Effect. Wallach, O'Connell, and Neisser (1953) found that once depth had been established… right of fixation in the left eye. The converse is true for objects located…

  18. Infant Perception of Audio-Visual Speech Synchrony

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2010-01-01

    Three experiments investigated perception of audio-visual (A-V) speech synchrony in 4- to 10-month-old infants. Experiments 1 and 2 used a convergent-operations approach by habituating infants to an audiovisually synchronous syllable (Experiment 1) and then testing for detection of increasing degrees of A-V asynchrony (366, 500, and 666 ms) or by…

  19. Attentional episodes in visual perception

    PubMed Central

    Wyble, Brad; Potter, Mary C; Bowman, Howard; Nieuwenstein, Mark

    2011-01-01

Is one's temporal perception of the world truly as seamless as it appears? This paper presents a computationally motivated theory suggesting that visual attention samples information from temporal episodes (the episodic Simultaneous Type/Serial Token model, or eSTST; Wyble et al., 2009a). Breaks between these episodes are punctuated by periods of suppressed attention, better known as the attentional blink (Raymond, Shapiro & Arnell, 1992). We test predictions from this model and demonstrate that subjects are able to report more letters from a sequence of four targets presented in a dense temporal cluster than from a sequence of four targets that are interleaved with non-targets. However, this superior report accuracy comes at the cost of impaired temporal order perception. Further experiments explore the dynamics of multiple episodes, and the boundary conditions that trigger episodic breaks. Finally, we contrast the importance of attentional control, limited resources, and memory capacity constructs in the model. PMID:21604913

  20. Optical images of visible and invisible percepts in the primary visual cortex of primates

    PubMed Central

    Macknik, Stephen L.; Haglund, Michael M.

    1999-01-01

    We optically imaged a visual masking illusion in primary visual cortex (area V-1) of rhesus monkeys to ask whether activity in the early visual system more closely reflects the physical stimulus or the generated percept. Visual illusions can be a powerful way to address this question because they have the benefit of dissociating the stimulus from perception. We used an illusion in which a flickering target (a bar oriented in visual space) is rendered invisible by two counter-phase flickering bars, called masks, which flank and abut the target. The target and masks, when shown separately, each generated correlated activity on the surface of the cortex. During the illusory condition, however, optical signals generated in the cortex by the target disappeared although the image of the masks persisted. The optical image thus was correlated with perception but not with the physical stimulus. PMID:10611363

  1. Influence of Visual Motion, Suggestion, and Illusory Motion on Self-Motion Perception in the Horizontal Plane.

    PubMed

    Rosenblatt, Steven David; Crane, Benjamin Thomas

    2015-01-01

    A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). For the visual motion stimuli, inertial motion perception was shifted in the
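The perceptual null point measured here, the stimulus velocity at which both directions of self-motion are reported equally often, is a point of subjective equality (PSE). One simple way to estimate it from response proportions (a linear-interpolation sketch with made-up data, not the study's analysis code):

```python
def pse(levels, p_positive):
    """Point of subjective equality: the stimulus level at which the
    probability of reporting the positive direction crosses 0.5, found
    by linear interpolation between adjacent test levels."""
    points = list(zip(levels, p_positive))
    for (x0, p0), (x1, p1) in zip(points, points[1:]):
        if p0 <= 0.5 <= p1:
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("psychometric function never crosses 0.5")

# Hypothetical sway-translation data: velocity (cm/s) vs. P("rightward")
velocities = [-2.0, -1.0, 0.0, 1.0, 2.0]
p_right = [0.05, 0.20, 0.65, 0.85, 0.95]
shift = pse(velocities, p_right)   # a nonzero PSE indicates a shifted percept
```

A visual stimulus that biases self-motion perception shows up as exactly this kind of displacement of the PSE away from zero.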

  2. Compensatory shifts in visual perception are associated with hallucinations in Lewy body disorders.

    PubMed

    Bowman, Alan Robert; Bruce, Vicki; Colbourn, Christopher J; Collerton, Daniel

    2017-01-01

    Visual hallucinations are a common, distressing, and disabling symptom of Lewy body and other diseases. Current models suggest that interactions in internal cognitive processes generate hallucinations. However, these neglect external factors. Pareidolic illusions are an experimental analogue of hallucinations. They are easily induced in Lewy body disease, have similar content to spontaneous hallucinations, and respond to cholinesterase inhibitors in the same way. We used a primed pareidolia task with hallucinating participants with Lewy body disorders (n = 16), non-hallucinating participants with Lewy body disorders (n = 19), and healthy controls (n = 20). Participants were presented with visual "noise" that sometimes contained degraded visual objects and were required to indicate what they saw. Some perceptions were cued in advance by a visual prime. Results showed that hallucinating participants were impaired in discerning visual signals from noise, with a relaxed criterion threshold for perception compared to both other groups. After the presentation of a visual prime, the criterion was comparable to the other groups. The results suggest that participants with hallucinations compensate for perceptual deficits by relaxing perceptual criteria, at a cost of seeing things that are not there, and that visual cues regularize perception. This latter finding may provide a mechanism for understanding the interaction between environments and hallucinations.
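The "relaxed criterion threshold" described above is a signal detection theory measure. As an illustration (not the authors' analysis code), sensitivity (d′) and criterion (c) follow from hit and false-alarm rates, and a negative c marks a liberal criterion, i.e. reporting a signal on weak evidence:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Signal detection theory: sensitivity d' and response criterion c
    from hit and false-alarm rates (both strictly between 0 and 1)."""
    z = NormalDist().inv_cdf           # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Made-up rates resembling a liberal observer: many hits, many false alarms
d_prime, c = sdt_measures(0.90, 0.40)
```

The hallucinating group in this study showed just such a pattern: reduced discrimination of signal from noise together with a liberal criterion, a bias toward seeing objects that are not there.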

  3. Time-Resolved Influences of Functional DAT1 and COMT Variants on Visual Perception and Post-Processing

    PubMed Central

    Bender, Stephan; Rellum, Thomas; Freitag, Christine; Resch, Franz; Rietschel, Marcella; Treutlein, Jens; Jennen-Steinmetz, Christine; Brandeis, Daniel; Banaschewski, Tobias; Laucht, Manfred

    2012-01-01

Background: Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase genes (COMT) on the time-course of visual processing in a contingent negative variation (CNV) task. Methods: 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed. Results: Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500–1000 ms after the visual cue was specifically affected while the preceding visual perception stages were not influenced. Conclusions: Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time-interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in both the visual and motor and perhaps other systems. PMID:22844499

  4. Time-resolved influences of functional DAT1 and COMT variants on visual perception and post-processing.

    PubMed

    Bender, Stephan; Rellum, Thomas; Freitag, Christine; Resch, Franz; Rietschel, Marcella; Treutlein, Jens; Jennen-Steinmetz, Christine; Brandeis, Daniel; Banaschewski, Tobias; Laucht, Manfred

    2012-01-01

    Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase genes (COMT) on the time-course of visual processing in a contingent negative variation (CNV) task. 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed. Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500-1000 ms after the visual cue was specifically affected while the preceding visual perception stages were not influenced. Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time-interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in both the visual and motor and perhaps other systems.

  5. Assessment of visual perception in adolescents with a history of central coordination disorder in early life – 15-year follow-up study

    PubMed Central

    Kowalski, Ireneusz M.; Domagalska, Małgorzata; Szopa, Andrzej; Dwornik, Michał; Kujawa, Jolanta; Stępień, Agnieszka; Śliwiński, Zbigniew

    2012-01-01

Introduction Central nervous system damage in early life results in both quantitative and qualitative abnormalities of psychomotor development. Late sequelae of these disturbances may include visual perception disorders which not only affect the ability to read and write but also generally influence the child's intellectual development. This study sought to determine whether a central coordination disorder (CCD) in early life treated according to Vojta's method with elements of the sensory integration (S-I) and neuro-developmental treatment (NDT)/Bobath approaches affects development of visual perception later in life. Material and methods The study involved 44 participants aged 15-16 years, including 19 diagnosed with moderate or severe CCD in the neonatal period, i.e. during the first 2-3 months of life, with diagnosed mild degree neonatal encephalopathy due to perinatal anoxia, and 25 healthy people without a history of developmental psychomotor disturbances in the neonatal period. The study tool was a visual perception IQ test comprising 96 graphic tasks. Results The study revealed equal proportions of participants (p < 0.05) defined as very skilled (94-96), skilled (91-94), average (71-91), poor (67-71), and very poor (0-67) in both groups. These results mean that adolescents with a history of CCD in the neonatal period did not differ with regard to the level of visual perception from their peers who had not demonstrated psychomotor development disorders in the neonatal period. Conclusions Early treatment of children with CCD affords a possibility of normalising their psychomotor development early enough to prevent consequences in the form of cognitive impairments in later life. PMID:23185199

  6. A systematic review of the technology-based assessment of visual perception and exploration behaviour in association football.

    PubMed

    McGuckian, Thomas B; Cole, Michael H; Pepping, Gert-Jan

    2018-04-01

To visually perceive opportunities for action, athletes rely on the movements of their eyes, head and body to explore their surrounding environment. To date, the specific types of technology and their efficacy for assessing the exploration behaviours of association footballers have not been systematically reviewed. This review aimed to synthesise the visual perception and exploration behaviours of footballers according to the task constraints, action requirements of the experimental task, and level of expertise of the athlete, in the context of the technology used to quantify the visual perception and exploration behaviours of footballers. A systematic search for papers that included keywords related to football, technology, and visual perception was conducted. All 38 included articles utilised eye-movement registration technology to quantify visual perception and exploration behaviour. The experimental domain appears to influence the visual perception behaviour of footballers; however, no studies have investigated the exploration behaviours of footballers in open-play situations. Studies rarely utilised representative stimulus presentation or action requirements. To fully understand the visual perception requirements of athletes, it is recommended that future research seek to validate alternate technologies that are capable of investigating the eye, head and body movements associated with the exploration behaviours of footballers during representative open-play situations.

  7. Correlated individual differences suggest a common mechanism underlying metacognition in visual perception and visual short-term memory.

    PubMed

    Samaha, Jason; Postle, Bradley R

    2017-11-29

    Adaptive behaviour depends on the ability to introspect accurately about one's own performance. Whether this metacognitive ability is supported by the same mechanisms across different tasks is unclear. We investigated the relationship between metacognition of visual perception and metacognition of visual short-term memory (VSTM). Experiments 1 and 2 required subjects to estimate the perceived or remembered orientation of a grating stimulus and rate their confidence. We observed strong positive correlations between individual differences in metacognitive accuracy between the two tasks. This relationship was not accounted for by individual differences in task performance or average confidence, and was present across two different metrics of metacognition and in both experiments. A model-based analysis of data from a third experiment showed that a cross-domain correlation only emerged when both tasks shared the same task-relevant stimulus feature. That is, metacognition for perception and VSTM were correlated when both tasks required orientation judgements, but not when the perceptual task was switched to require contrast judgements. In contrast with previous results comparing perception and long-term memory, which have largely provided evidence for domain-specific metacognitive processes, the current findings suggest that metacognition of visual perception and VSTM is supported by a domain-general metacognitive architecture, but only when both domains share the same task-relevant stimulus feature. © 2017 The Author(s).

  8. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment

    PubMed Central

Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J.

    2014-01-01

Speech perception involves the integration of auditory and visual articulatory information and, thus, requires the perception of temporal synchrony between this information. There is evidence that children with Specific Language Impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component followed the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception. PMID:22874648

  9. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment.

    PubMed

    Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J

    2013-06-01

Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.

  10. Affective Education for Visually Impaired Children.

    ERIC Educational Resources Information Center

    Locke, Don C.; Gerler, Edwin R., Jr.

    1981-01-01

    Evaluated the effectiveness of the Human Development Program (HDP) and the Developing Understanding of Self and Others (DUSO) program used with visually impaired children. Although HDP and DUSO affected the behavior of visually impaired children, they did not have any effect on children's attitudes toward school. (RC)

  11. Visually induced plasticity of auditory spatial perception in macaques.

    PubMed

    Woods, Timothy M; Recanzone, Gregg H

    2004-09-07

    When experiencing spatially disparate visual and auditory stimuli, a common percept is that the sound originates from the location of the visual stimulus, an illusion known as the ventriloquism effect. This illusion can persist for tens of minutes, a phenomenon termed the ventriloquism aftereffect. The underlying neuronal mechanisms of this rapidly induced plasticity remain unclear; indeed, it remains untested whether similar multimodal interactions occur in other species. We therefore tested whether macaque monkeys experience the ventriloquism aftereffect similar to the way humans do. The ability of two monkeys to determine which side of the midline a sound was presented from was tested before and after a period of 20-60 min in which the monkeys experienced either spatially identical or spatially disparate auditory and visual stimuli. In agreement with human studies, the monkeys did experience a shift in their auditory spatial perception in the direction of the spatially disparate visual stimulus, and the aftereffect did not transfer across sounds that differed in frequency by two octaves. These results show that macaque monkeys experience the ventriloquism aftereffect similar to the way humans do in all tested respects, indicating that these multimodal interactions are a basic phenomenon of the central nervous system.

  12. Parents' Perceptions of Physical Activity for Their Children with Visual Impairments

    ERIC Educational Resources Information Center

    Perkins, Kara; Columna, Luis; Lieberman, Lauren; Bailey, JoEllen

    2013-01-01

    Introduction: Ongoing communication with parents and the acknowledgment of their preferences and expectations are crucial to promote the participation of physical activity by children with visual impairments. Purpose: The study presented here explored parents' perceptions of physical activity for their children with visual impairments and explored…

  13. Analysis of Factors Affecting Women of Childbearing Age to Screen Using Visual Inspection with Acetic Acid.

    PubMed

    Sidabutar, Sondang; Martini, Santi; Wahyuni, Chatarina Umbul

    2017-02-01

The purpose of this study was to evaluate patient factors such as knowledge, attitude, motivation, perception, socio-economic status and travel time to health facilities and assess how these factors affected patients' decision to pursue cervical cancer screening with visual inspection with acetic acid (VIA). A total of 80 women of childbearing age who visited Kenjeran and Balongsari Public Health Centers for health assessments were involved in this study. Patients who agreed to participate in the study underwent a verbal questionnaire to evaluate various factors. Bivariate analysis concluded that knowledge, attitude, motivation, perception, socioeconomic status, and travel time to health facilities were significantly different between women who received VIA screening and women who did not receive VIA screening (p < 0.05). Knowledge, attitude, motivation, perception, socio-economic status, and travel time to health facilities were associated with 2.920-fold, 2.043-fold, 3.704-fold, 2.965-fold, 3.198-fold and 2.386-fold greater odds, respectively, of pursuing cervical cancer screening with VIA. Multivariate analysis showed that perception, socio-economic status, and travel time to health facilities were the most important factors influencing whether or not women pursued VIA screening. Knowledge, attitude, motivation, perception, socio-economic status, and travel time to health facilities appear to affect women's decision to pursue cervical cancer screening with VIA, with motivation being the strongest factor.
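The fold-likelihoods reported above are odds ratios. As a generic illustration (not the study's data), an odds ratio can be derived from a hypothetical 2x2 contingency table of factor presence versus screening uptake:

```python
def odds_ratio(a, b, c, d):
    # a: factor present & screened, b: factor present & not screened,
    # c: factor absent & screened,  d: factor absent & not screened.
    return (a / b) / (c / d)

# Hypothetical counts for a "motivation" factor (illustrative only).
or_motivation = odds_ratio(30, 10, 18, 22)
print(round(or_motivation, 3))  # → 3.667
```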

  14. Self-development of visual space perception by learning from the hand

    NASA Astrophysics Data System (ADS)

    Chung, Jae-Moon; Ohnishi, Noboru

    1998-10-01

Animals are thought to gradually develop the ability to interpret the images captured on their retinas by themselves from birth, without an external supervisor. We think that this visual function is acquired together with the development of hand reaching and grasping operations, which are executed through active interaction with the environment. From the viewpoint that the hand teaches the eye, this paper shows how visual space perception develops in a simulated robot. The robot has a simplified human-like structure used for hand-eye coordination. The experimental results may help validate this method as a description of how visual space perception develops in biological systems. In addition, the description offers a way to self-calibrate the vision of an intelligent robot in a learning-by-doing manner, without external supervision.

  15. The role of the right hemisphere in form perception and visual gnosis organization.

    PubMed

    Belyi, B I

    1988-06-01

Peculiarities of serial picture interpretation and Rorschach test results in patients with unilateral benign hemispheric tumours are discussed. It is concluded that visual perception in the right hemisphere has a hierarchic structure, i.e., each successive area from the occipital lobe towards the frontal lobe has a more complicated function. Visual engrams are distributed over the right hemisphere in a manner similar to the way visual information is recorded in holographic systems. In any impairment of the right hemisphere, a tendency towards whole but unclear vision arises. The preservation of lower levels of visual perception provides for clear vision of only small parts of the image. Thus, confabulatory phenomena arise, which are specific to right-hemispheric lesions.

  16. Coordinates of Human Visual and Inertial Heading Perception.

    PubMed

    Crane, Benjamin Thomas

    2015-01-01

    Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile these stimuli are sensed relative to different reference frames and it remains unclear if a perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position were examined. The observations were fit using a two degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and coordinate system offset. For visual stimuli gaze shifts caused shifts in perceived head estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retina coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.

  17. Coordinates of Human Visual and Inertial Heading Perception

    PubMed Central

    Crane, Benjamin Thomas

    2015-01-01

    Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile these stimuli are sensed relative to different reference frames and it remains unclear if a perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position were examined. The observations were fit using a two degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and coordinate system offset. For visual stimuli gaze shifts caused shifts in perceived head estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retina coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results. PMID:26267865
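The paper's two-degree-of-freedom PVD model is its own; as a generic illustration of the underlying idea, a population vector decoder sums unit vectors at each unit's preferred direction, weighted by that unit's response. A minimal sketch with hypothetical cosine-tuned units sampled every 5° (matching the 72-stimulus design):

```python
import math

def pvd_decode(responses, preferred_dirs_deg):
    # Sum unit vectors weighted by each unit's response; the angle of the
    # resultant vector is the decoded heading (degrees, 0-360).
    x = sum(r * math.cos(math.radians(p)) for r, p in zip(responses, preferred_dirs_deg))
    y = sum(r * math.sin(math.radians(p)) for r, p in zip(responses, preferred_dirs_deg))
    return math.degrees(math.atan2(y, x)) % 360

# Hypothetical half-wave-rectified cosine tuning to a true heading of 40 deg.
prefs = list(range(0, 360, 5))
true_heading = 40
resp = [max(0.0, math.cos(math.radians(true_heading - p))) for p in prefs]
print(round(pvd_decode(resp, prefs), 1))  # → 40.0
```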

  18. Color categories affect pre-attentive color perception.

    PubMed

    Clifford, Alexandra; Holmes, Amanda; Davies, Ian R L; Franklin, Anna

    2010-10-01

    Categorical perception (CP) of color is the faster and/or more accurate discrimination of colors from different categories than equivalently spaced colors from the same category. Here, we investigate whether color CP at early stages of chromatic processing is independent of top-down modulation from attention. A visual oddball task was employed where frequent and infrequent colored stimuli were either same- or different-category, with chromatic differences equated across conditions. Stimuli were presented peripheral to a central distractor task to elicit an event-related potential (ERP) known as the visual mismatch negativity (vMMN). The vMMN is an index of automatic and pre-attentive visual change detection arising from generating loci in visual cortices. The results revealed a greater vMMN for different-category than same-category change detection when stimuli appeared in the lower visual field, and an absence of attention-related ERP components. The findings provide the first clear evidence for an automatic and pre-attentive categorical code for color. Copyright © 2010 Elsevier B.V. All rights reserved.
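A mismatch negativity such as the vMMN is conventionally quantified as a difference wave: the trial-averaged ERP to frequent (standard) stimuli subtracted from the ERP to infrequent (deviant) stimuli. A minimal sketch with synthetic data, not the study's pipeline:

```python
import numpy as np

def difference_wave(deviant_trials, standard_trials):
    # Average each condition across trials, then subtract standard from deviant.
    return deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)

rng = np.random.default_rng(0)
n_samples = 300  # samples per epoch
standard = rng.normal(0.0, 1.0, (40, n_samples))
deviant = rng.normal(0.0, 1.0, (40, n_samples))
deviant[:, 150:200] -= 3.0  # synthetic negativity in the deviant condition
vmmn = difference_wave(deviant, standard)
print(vmmn[150:200].mean() < -2.0)  # the negativity survives in the difference wave
```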

  19. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    ERIC Educational Resources Information Center

Wilson, Amanda H.; Alsius, Agnès; Paré, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  20. Audio–visual interactions for motion perception in depth modulate activity in visual area V3A

    PubMed Central

    Ogawa, Akitoshi; Macaluso, Emiliano

    2013-01-01

    Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414

  1. On the advantage of being left-handed in volleyball: further evidence of the specificity of skilled visual perception.

    PubMed

    Loffing, Florian; Schorer, Jörg; Hagemann, Norbert; Baker, Joseph

    2012-02-01

    High ball speeds and close distances between competitors require athletes in interactive sports to correctly anticipate an opponent's intentions in order to render appropriate reactions. Although it is considered crucial for successful performance, such skill appears impaired when athletes are confronted with a left-handed opponent, possibly because of athletes' reduced perceptual familiarity with rarely encountered left-handed actions. To test this negative perceptual frequency effect hypothesis, we invited 18 skilled and 18 novice volleyball players to predict shot directions of left- and right-handed attacks in a video-based visual anticipation task. In accordance with our predictions, and with recent reports on laterality differences in visual perception, the outcome of left-handed actions was significantly less accurately predicted than the outcome of right-handed attacks. In addition, this left-right bias was most distinct when predictions had to be based on preimpact (i.e., before hand-ball contact) kinematic cues, and skilled players were generally more affected by the opponents' handedness than were novices. The study's findings corroborate the assumption that skilled visual perception is attuned to more frequently encountered actions.

  2. Visual Attention and Perception in Three-Dimensional Space

    DTIC Science & Technology

    1992-01-01

Hughes & Zimba, 1987). The 'arrows' in the near-far condition (Fig. 1c) were actually wedges that pointed either toward or away from the subject, with their...1992). The increase in saccade latencies in the lower visual field. Perception and Psychophysics (in press). Hughes, H. C., & Zimba, L. D. (1987

  3. Relationship Between Auditory Context and Visual Distance Perception: Effect of Musical Expertise in the Ability to Translate Reverberation Cues Into Room-Size Perception.

    PubMed

    Etchemendy, Pablo E; Spiousas, Ignacio; Vergara, Ramiro

    2018-01-01

In a recently published work by our group [Scientific Reports, 7, 7189 (2017)], we performed experiments of visual distance perception in two dark rooms with extremely different reverberation times: one anechoic (T ∼ 0.12 s) and the other reverberant (T ∼ 4 s). The perceived distance of the targets was systematically greater in the reverberant room when contrasted to the anechoic chamber. Participants also provided auditorily perceived room-size ratings which were greater for the reverberant room. Our hypothesis was that distance estimates are affected by room size, resulting in farther responses for the room perceived larger. Of much importance to the task was the subjects' ability to infer room size from reverberation. In this article, we report a postanalysis showing that participants having musical expertise were better able to extract and translate reverberation cues into room-size information than nonmusicians. However, the degree to which musical expertise affects visual distance estimates remains unclear.

  4. Cholinergic, But Not Dopaminergic or Noradrenergic, Enhancement Sharpens Visual Spatial Perception in Humans

    PubMed Central

    Wallace, Deanna L.

    2017-01-01

    The neuromodulator acetylcholine modulates spatial integration in visual cortex by altering the balance of inputs that generate neuronal receptive fields. These cholinergic effects may provide a neurobiological mechanism underlying the modulation of visual representations by visual spatial attention. However, the consequences of cholinergic enhancement on visuospatial perception in humans are unknown. We conducted two experiments to test whether enhancing cholinergic signaling selectively alters perceptual measures of visuospatial interactions in human subjects. In Experiment 1, a double-blind placebo-controlled pharmacology study, we measured how flanking distractors influenced detection of a small contrast decrement of a peripheral target, as a function of target-flanker distance. We found that cholinergic enhancement with the cholinesterase inhibitor donepezil improved target detection, and modeling suggested that this was mainly due to a narrowing of the extent of facilitatory perceptual spatial interactions. In Experiment 2, we tested whether these effects were selective to the cholinergic system or would also be observed following enhancements of related neuromodulators dopamine or norepinephrine. Unlike cholinergic enhancement, dopamine (bromocriptine) and norepinephrine (guanfacine) manipulations did not improve performance or systematically alter the spatial profile of perceptual interactions between targets and distractors. These findings reveal mechanisms by which cholinergic signaling influences visual spatial interactions in perception and improves processing of a visual target among distractors, effects that are notably similar to those of spatial selective attention. SIGNIFICANCE STATEMENT Acetylcholine influences how visual cortical neurons integrate signals across space, perhaps providing a neurobiological mechanism for the effects of visual selective attention. However, the influence of cholinergic enhancement on visuospatial perception remains

  5. Behind Mathematical Learning Disabilities: What about Visual Perception and Motor Skills?

    ERIC Educational Resources Information Center

    Pieters, Stefanie; Desoete, Annemie; Roeyers, Herbert; Vanderswalmen, Ruth; Van Waelvelde, Hilde

    2012-01-01

    In a sample of 39 children with mathematical learning disabilities (MLD) and 106 typically developing controls belonging to three control groups of three different ages, we found that visual perception, motor skills and visual-motor integration explained a substantial proportion of the variance in either number fact retrieval or procedural…

  6. Audio-Visual Speech Perception: A Developmental ERP Investigation

    ERIC Educational Resources Information Center

    Knowland, Victoria C. P.; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael S. C.

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language…

  7. The Development of Face Perception in Infancy: Intersensory Interference and Unimodal Visual Facilitation

    PubMed Central

    Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina

    2014-01-01

    Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the Intersensory Redundancy Hypothesis (IRH), that face discrimination, which relies on detection of visual featural information, would be impaired in the context of intersensory redundancy provided by audiovisual speech, and enhanced in the absence of intersensory redundancy (unimodal visual and asynchronous audiovisual speech) in early development. Later in development, following improvements in attention, faces should be discriminated in both redundant audiovisual and nonredundant stimulation. Results supported these predictions. Two-month-old infants discriminated a novel face in unimodal visual and asynchronous audiovisual speech but not in synchronous audiovisual speech. By 3 months, face discrimination was evident even during synchronous audiovisual speech. These findings indicate that infant face perception is enhanced and emerges developmentally earlier following unimodal visual than synchronous audiovisual exposure and that intersensory redundancy generated by naturalistic audiovisual speech can interfere with face processing. PMID:23244407

  8. Colour homogeneity and visual perception of age, health and attractiveness of male facial skin.

    PubMed

    Fink, B; Matts, P J; D'Emiliano, D; Bunse, L; Weege, B; Röder, S

    2012-12-01

    Visible facial skin condition in females is known to affect perception of age, health and attractiveness. Skin colour distribution in shape- and topography-standardized female faces, driven by localized melanin and haemoglobin, can account for up to twenty years of apparent age perception. Although this is corroborated by an ability to discern female age even in isolated, non-contextual skin images, a similar effect in the perception of male skin is yet to be demonstrated. To investigate the effect of skin colour homogeneity and chromophore distribution on the visual perception of age, health and attractiveness of male facial skin. Cropped images from the cheeks of facial images of 160 Caucasian British men aged 10-70 years were blind-rated for age, health and attractiveness by a total of 308 participants. In addition, the homogeneity of skin images and corresponding eumelanin/oxyhaemoglobin concentration maps were analysed objectively using Haralick's image segmentation algorithm. Isolated skin images taken from the cheeks of younger males were judged as healthier and more attractive. Perception of age, health and attractiveness was strongly related to melanin and haemoglobin distribution, whereby more even distributions led to perception of younger age and greater health and attractiveness. The evenness of melanized features was a stronger cue for age perception, whereas haemoglobin distribution was associated more strongly with health and attractiveness perception. Male skin colour homogeneity, driven by melanin and haemoglobin distribution, influences perception of age, health and attractiveness. © 2011 The Authors. Journal of the European Academy of Dermatology and Venereology © 2011 European Academy of Dermatology and Venereology.
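Haralick's measures are computed from a grey-level co-occurrence matrix (GLCM). As a simplified illustration of the homogeneity idea (horizontal neighbours only, a few grey levels; not the paper's exact segmentation pipeline):

```python
import numpy as np

def glcm_homogeneity(img, levels=4):
    # Build a grey-level co-occurrence matrix for horizontal neighbour pairs,
    # then compute Haralick-style homogeneity: sum of P(i,j) / (1 + (i - j)^2).
    glcm = np.zeros((levels, levels))
    for row in img:
        for a, b in zip(row[:-1], row[1:]):
            glcm[a, b] += 1
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    return float((p / (1.0 + (i - j) ** 2)).sum())

even = np.array([[1, 1, 1], [1, 1, 1]])    # uniform patch: maximal homogeneity
uneven = np.array([[0, 3, 0], [3, 0, 3]])  # strongly varying patch: low homogeneity
print(glcm_homogeneity(even), glcm_homogeneity(uneven))
```

A more even chromophore distribution yields a higher homogeneity score, which is the direction of the age/health/attractiveness association reported above.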

  9. How do visual and postural cues combine for self-tilt perception during slow pitch rotations?

    PubMed

    Scotto Di Cesare, C; Buloup, F; Mestre, D R; Bringoux, L

    2014-11-01

    Self-orientation perception relies on the integration of multiple sensory inputs which convey spatially-related visual and postural cues. In the present study, an experimental set-up was used to tilt the body and/or the visual scene to investigate how these postural and visual cues are integrated for self-tilt perception (the subjective sensation of being tilted). Participants were required to repeatedly rate a confidence level for self-tilt perception during slow (0.05°·s⁻¹) body and/or visual scene pitch tilts up to 19° relative to vertical. Concurrently, subjects also had to perform arm reaching movements toward a body-fixed target at certain specific angles of tilt. While performance of a concurrent motor task did not influence the main perceptual task, self-tilt detection did vary according to the visuo-postural stimuli. Slow forward or backward tilts of the visual scene alone did not induce a marked sensation of self-tilt contrary to actual body tilt. However, combined body and visual scene tilt influenced self-tilt perception more strongly, although this effect was dependent on the direction of visual scene tilt: only a forward visual scene tilt combined with a forward body tilt facilitated self-tilt detection. In such a case, visual scene tilt did not seem to induce vection but rather may have produced a deviation of the perceived orientation of the longitudinal body axis in the forward direction, which may have lowered the self-tilt detection threshold during actual forward body tilt. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Visual Perception and Reading: New Clues to Patterns of Dysfunction Across Multiple Visual Channels in Developmental Dyslexia.

    PubMed

    Pina Rodrigues, Ana; Rebola, José; Jorge, Helena; Ribeiro, Maria José; Pereira, Marcelino; van Asselen, Marieke; Castelo-Branco, Miguel

    2017-01-01

    The specificity of visual channel impairment in dyslexia has been the subject of much controversy. The purpose of this study was to determine if a differential pattern of impairment can be verified between visual channels in children with developmental dyslexia, and in particular, if the pattern of deficits is more conspicuous in tasks where the magnocellular-dorsal system recruitment prevails. Additionally, we also aimed at investigating the association between visual perception thresholds and reading. In the present case-control study, we compared perception thresholds of 33 children diagnosed with developmental dyslexia and 34 controls in a speed discrimination task, an achromatic contrast sensitivity task, and a chromatic contrast sensitivity task. Moreover, we addressed the correlation between the different perception thresholds and reading performance, as assessed by means of a standardized reading test (accuracy and fluency). Group comparisons were performed by the Mann-Whitney U test, and Spearman's rho was used as a measure of correlation. Results showed that, when compared to controls, children with dyslexia were more impaired in the speed discrimination task, followed by the achromatic contrast sensitivity task, with no impairment in the chromatic contrast sensitivity task. These results are also consistent with the magnocellular theory since the impairment profile of children with dyslexia in the visual threshold tasks reflected the amount of magnocellular-dorsal stream involvement. Moreover, both speed and achromatic thresholds were significantly correlated with reading performance, in terms of accuracy and fluency. Notably, chromatic contrast sensitivity thresholds did not correlate with any of the reading measures. Our evidence stands in favor of a differential visual channel deficit in children with developmental dyslexia and contributes to the debate on the pathophysiology of reading impairments.

  11. Analysis of EEG signals related to artists and nonartists during visual perception, mental imagery, and rest using approximate entropy.

    PubMed

    Shourie, Nasrin; Firoozabadi, Mohammad; Badie, Kambiz

    2014-01-01

    In this paper, differences between the multichannel EEG signals of artists and nonartists were analyzed during visual perception and mental imagery of paintings and at rest, using approximate entropy (ApEn). It was found that ApEn is significantly higher for artists during visual perception and mental imagery in the frontal lobe, suggesting that artists process more information during these conditions. It was also observed that ApEn decreases for both groups during visual perception due to increasing mental load; however, their variation patterns differ. This difference may be used for measuring progress in novice artists. In addition, it was found that ApEn is significantly lower during visual perception than during mental imagery in some of the channels, suggesting that the visual perception task requires more cerebral effort.
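
    Approximate entropy quantifies the irregularity of a time series. Below is a minimal NumPy sketch of ApEn as conventionally defined (Pincus's formulation); the embedding dimension m and tolerance r are standard defaults, not the parameters used in the EEG study:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy (ApEn) of a 1-D series.

    Higher ApEn indicates a less regular, less predictable signal.
    Defaults: m = 2, r = 0.2 * SD(x), the conventional choices.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def phi(m):
        # All length-m templates of the series.
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # Fraction of templates within r of each template
        # (self-matches included, as in the original definition).
        c = np.mean(d <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

    A regular signal such as a sinusoid yields a lower ApEn than white noise, which is the axis along which the group differences above are expressed.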

  12. Relationships between Categorical Perception of Phonemes, Phoneme Awareness, and Visual Attention Span in Developmental Dyslexia.

    PubMed

    Zoubrinetzky, Rachel; Collet, Gregory; Serniclaes, Willy; Nguyen-Morel, Marie-Ange; Valdois, Sylviane

    2016-01-01

    We tested the hypothesis that the categorical perception deficit of speech sounds in developmental dyslexia is related to phoneme awareness skills, whereas a visual attention (VA) span deficit constitutes an independent deficit. Phoneme awareness tasks, VA span tasks and categorical perception tasks of phoneme identification and discrimination using a d/t voicing continuum were administered to 63 dyslexic children and 63 control children matched on chronological age. Results showed significant differences in categorical perception between the dyslexic and control children. Significant correlations were found between categorical perception skills, phoneme awareness and reading. Although VA span correlated with reading, no significant correlations were found between either categorical perception or phoneme awareness and VA span. Mediation analyses performed on the whole dyslexic sample suggested that the effect of categorical perception on reading might be mediated by phoneme awareness. This relationship was independent of the participants' VA span abilities. Two groups of dyslexic children with a single phoneme awareness or a single VA span deficit were then identified. The phonologically impaired group showed lower categorical perception skills than the control group but categorical perception was similar in the VA span impaired dyslexic and control children. The overall findings suggest that the link between categorical perception, phoneme awareness and reading is independent from VA span skills. These findings provide new insights on the heterogeneity of developmental dyslexia. They suggest that phonological processes and VA span independently affect reading acquisition.

  14. Visual Perception of Touchdown Point During Simulated Landing

    ERIC Educational Resources Information Center

    Palmisano, Stephen; Gillam, Barbara

    2005-01-01

    Experiments examined the accuracy of visual touchdown point perception during oblique descents (1.5°-15°) toward a ground plane consisting of (a) randomly positioned dots, (b) a runway outline, or (c) a grid. Participants judged whether the perceived touchdown point was above or below a probe that appeared at a random position following each…

  15. Global motion perception deficits in autism are reflected as early as primary visual cortex

    PubMed Central

    Thomas, Cibu; Kravitz, Dwight J.; Wallace, Gregory L.; Baron-Cohen, Simon; Martin, Alex; Baker, Chris I.

    2014-01-01

    Individuals with autism are often characterized as ‘seeing the trees, but not the forest’—attuned to individual details in the visual world at the expense of the global percept they compose. Here, we tested the extent to which global processing deficits in autism reflect impairments in (i) primary visual processing; or (ii) decision-formation, using an archetypal example of global perception, coherent motion perception. In an event-related functional MRI experiment, 43 intelligence quotient and age-matched male participants (21 with autism, age range 15–27 years) performed a series of coherent motion perception judgements in which the amount of local motion signals available to be integrated into a global percept was varied by controlling stimulus viewing duration (0.2 or 0.6 s) and the proportion of dots moving in the correct direction (coherence: 4%, 15%, 30%, 50%, or 75%). Both typical participants and those with autism evidenced the same basic pattern of accuracy in judging the direction of motion, with performance decreasing with reduced coherence and shorter viewing durations. Critically, these effects were exaggerated in autism: despite equal performance at the long duration, performance was more strongly reduced by shortening viewing duration in autism (P < 0.015) and decreasing stimulus coherence (P < 0.008). To assess the neural correlates of these effects we focused on the responses of primary visual cortex and the middle temporal area, critical in the early visual processing of motion signals, as well as a region in the intraparietal sulcus thought to be involved in perceptual decision-making. The behavioural results were mirrored in both primary visual cortex and the middle temporal area, with a greater reduction in response at short, compared with long, viewing durations in autism compared with controls (both P < 0.018). In contrast, there was no difference between the groups in the intraparietal sulcus (P > 0.574). These findings suggest that
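
    The coherence manipulation described above is the standard random-dot kinematogram design: a fixed fraction of dots carries the signal direction while the rest move randomly. A minimal sketch of one frame of dot displacements follows; dot count and speed are illustrative, not the study's parameters:

```python
import numpy as np

def rdk_step(n_dots=100, coherence=0.5, direction=0.0, speed=1.0, rng=None):
    """Per-frame dot displacements for a random-dot kinematogram.

    A `coherence` fraction of dots moves in `direction` (radians);
    the remaining dots move in uniformly random directions.
    Returns an (n_dots, 2) array of (dx, dy) displacements.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_signal = int(round(coherence * n_dots))
    angles = np.concatenate([
        np.full(n_signal, direction),                      # signal dots
        rng.uniform(0.0, 2.0 * np.pi, n_dots - n_signal),  # noise dots
    ])
    return speed * np.column_stack([np.cos(angles), np.sin(angles)])
```

    Lowering `coherence` (e.g., from 75% to 4%, as in the stimulus set above) reduces the number of local motion signals available for integration into a global percept.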

  16. Shape perception simultaneously up- and downregulates neural activity in the primary visual cortex.

    PubMed

    Kok, Peter; de Lange, Floris P

    2014-07-07

    An essential part of visual perception is the grouping of local elements (such as edges and lines) into coherent shapes. Previous studies have shown that this grouping process modulates neural activity in the primary visual cortex (V1) that is signaling the local elements [1-4]. However, the nature of this modulation is controversial. Some studies find that shape perception reduces neural activity in V1 [2, 5, 6], while others report increased V1 activity during shape perception [1, 3, 4, 7-10]. Neurocomputational theories that cast perception as a generative process [11-13] propose that feedback connections carry predictions (i.e., the generative model), while feedforward connections signal the mismatch between top-down predictions and bottom-up inputs. Within this framework, the effect of feedback on early visual cortex may be either enhancing or suppressive, depending on whether the feedback signal is met by congruent bottom-up input. Here, we tested this hypothesis by quantifying the spatial profile of neural activity in V1 during the perception of illusory shapes using population receptive field mapping. We find that shape perception concurrently increases neural activity in regions of V1 that have a receptive field on the shape but do not receive bottom-up input and suppresses activity in regions of V1 that receive bottom-up input that is predicted by the shape. These effects were not modulated by task requirements. Together, these findings suggest that shape perception changes lower-order sensory representations in a highly specific and automatic manner, in line with theories that cast perception in terms of hierarchical generative models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. The dynamic-stimulus advantage of visual symmetry perception.

    PubMed

    Niimi, Ryosuke; Watanabe, Katsumi; Yokosawa, Kazuhiko

    2008-09-01

    It has been speculated that visual symmetry perception from dynamic stimuli involves mechanisms different from those for static stimuli. However, previous studies found no evidence that dynamic stimuli lead to active temporal processing and improve symmetry detection. In this study, four psychophysical experiments investigated temporal processing in symmetry perception using both dynamic and static stimulus presentations of dot patterns. In Experiment 1, rapid successive presentations of symmetric patterns (e.g., 16 patterns per 853 ms) produced more accurate discrimination of orientations of symmetry axes than static stimuli (single pattern presented through 853 ms). In Experiments 2-4, we confirmed that the dynamic-stimulus advantage depended upon presentation of a large number of unique patterns within a brief period (853 ms) in the dynamic conditions. Evidently, human vision takes advantage of temporal processing for symmetry perception from dynamic stimuli.

  18. Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia

    ERIC Educational Resources Information Center

    Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue

    2011-01-01

    Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…

  19. Driving with indirect viewing sensors: understanding the visual perception issues

    NASA Astrophysics Data System (ADS)

    O'Kane, Barbara L.

    1996-05-01

    Visual perception is one of the most important elements of driving in that it enables the driver to understand and react appropriately to the situation along the path of the vehicle. The driver's visual perception is served best while driving during the day. Noticeable decrements in visual acuity, range of vision, depth of field and color perception occur at night and under certain weather conditions. Indirect viewing sensors, utilizing various technologies and spectral bands, may assist the driver's normal mode of driving. Critical applications in the military as well as other official activities may require driving at night without headlights. In these latter cases, it is critical that the device, being the only source of scene information, provide the scene cues needed for driving on, and oftentimes off, the road. One can speculate about the scene information that a driver needs, such as road edges, terrain orientation, and people and objects in or near the path of the vehicle. But the perceptual qualities of the scene that give rise to these perceptions are little known and thus not quantified for the evaluation of indirect viewing devices. This paper discusses driving with headlights and compares the scene content with that provided by a thermal system in the 8-12 μm spectral band, which may be used for driving. The benefits and advantages of each are discussed, as well as their limitations in providing information useful for the driver, who must make rapid and critical decisions based upon the available scene content. General recommendations are made for potential avenues of development to overcome some of these limitations.

  20. PERCEPT: indoor navigation for the blind and visually impaired.

    PubMed

    Ganz, Aura; Gandhi, Siddhesh Rajan; Schafer, James; Singh, Tushar; Puleo, Elaine; Mullett, Gary; Wilson, Carole

    2011-01-01

    In order to enhance the perception of indoor and unfamiliar environments for the blind and visually-impaired, we introduce the PERCEPT system that supports a number of unique features such as: a) Low deployment and maintenance cost; b) Scalability, i.e. we can deploy the system in very large buildings; c) An on-demand system that does not overwhelm the user, as it offers small amounts of information on demand; and d) Portability and ease-of-use, i.e., the custom handheld device carried by the user is compact and instructions are received audibly.

  1. Visualizing the Perception Filter and Breaching It with Active-Learning Strategies

    ERIC Educational Resources Information Center

    White, Harold B.

    2012-01-01

    Teachers' perception filter operates in all realms of their consciousness. It plays an important part in what and how students learn and should play a central role in what and how they teach. This may be obvious, but having a visual model of a perception filter can guide the way they think about education. In this article, the author talks about…

  2. Orientation of selective effects of body tilt on visually induced perception of self-motion.

    PubMed

    Nakamura, S; Shimojo, S

    1998-10-01

    We examined the effect of body posture upon visually induced perception of self-motion (vection) with various angles of observer's tilt. The experiment indicated that the tilted body of observer could enhance perceived strength of vertical vection, while there was no effect of body tilt on horizontal vection. This result suggests that there is an interaction between the effects of visual and vestibular information on perception of self-motion.

  3. Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion

    PubMed Central

    Fajen, Brett R.; Matthis, Jonathan S.

    2013-01-01

    Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983

  4. Execution of saccadic eye movements affects speed perception

    PubMed Central

    Goettker, Alexander; Braun, Doris I.; Schütz, Alexander C.; Gegenfurtner, Karl R.

    2018-01-01

    Due to the foveal organization of our visual system we have to constantly move our eyes to gain precise information about our environment. Doing so massively alters the retinal input. This is problematic for the perception of moving objects, because physical motion and retinal motion become decoupled and the brain has to discount the eye movements to recover the speed of moving objects. Two different types of eye movements, pursuit and saccades, are combined for tracking. We investigated how the way we track moving targets can affect the perceived target speed. We found that the execution of corrective saccades during pursuit initiation modifies how fast the target is perceived compared with pure pursuit. When participants executed a forward (catch-up) saccade they perceived the target to be moving faster. When they executed a backward saccade they perceived the target to be moving more slowly. Variations in pursuit velocity without corrective saccades did not affect perceptual judgments. We present a model for these effects, assuming that the eye velocity signal for small corrective saccades gets integrated with the retinal velocity signal during pursuit. In our model, the execution of corrective saccades modulates the integration of these two signals by giving less weight to the retinal information around the time of corrective saccades. PMID:29440494
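
    The model described above combines an extraretinal (eye-velocity) signal with the retinal velocity signal, down-weighting retinal information around corrective saccades. The following is a deliberately simplified sketch of that kind of weighted integration; the weights and the `saccade_discount` factor are made-up illustrative values, not the authors' fitted model:

```python
def perceived_speed(retinal_vel, eye_vel, near_saccade,
                    w_retinal=0.7, saccade_discount=0.3):
    """Toy weighted integration of retinal and eye-velocity signals.

    Around a corrective saccade the retinal signal is down-weighted,
    so the speed estimate shifts toward the extraretinal eye-velocity
    signal. All parameter values here are hypothetical.
    """
    w = w_retinal * saccade_discount if near_saccade else w_retinal
    return w * retinal_vel + (1.0 - w) * eye_vel
```

    Under this scheme a forward (catch-up) saccade, with its high eye velocity, pulls the estimate upward, and a backward saccade pulls it downward, matching the direction of the perceptual effects reported above.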

  5. Cognitive processing in the primary visual cortex: from perception to memory.

    PubMed

    Supèr, Hans

    2002-01-01

    The primary visual cortex is the first cortical area of the visual system to receive information from the external visual world. Based on the receptive field characteristics of its neurons, it has been assumed that the primary visual cortex is a purely sensory area extracting basic elements of the visual scene. On this view, the information is subsequently processed upstream in higher-order visual areas, which provide us with perception and storage of the visual environment. However, recent findings show that neural correlates of perception and memory are also observed in the primary visual cortex. These correlates are expressed in the modulated activity of a neuron's late response to a stimulus, and most likely depend on recurrent interactions between several areas of the visual system. This favors the concept of a distributed nature of visual processing in perceptual organization.

  6. The Influence of Averageness on Adults' Perceptions of Attractiveness: The Effect of Early Visual Deprivation.

    PubMed

    Vingilis-Jaremko, Larissa; Maurer, Daphne; Rhodes, Gillian; Jeffery, Linda

    2016-08-03

    Adults who missed early visual input because of congenital cataracts later have deficits in many aspects of face processing. Here we investigated whether they make normal judgments of facial attractiveness. In particular, we studied whether their perceptions are affected normally by a face's proximity to the population mean, as is true of typically developing adults, who find average faces to be more attractive than most other faces. We compared the judgments of facial attractiveness of 12 cataract-reversal patients to norms established from 36 adults with normal vision. Participants viewed pairs of adult male and adult female faces that had been transformed 50% toward and 50% away from their respective group averages, and selected which face was more attractive. Averageness influenced patients' judgments of attractiveness, but to a lesser extent than controls. The results suggest that cataract-reversal patients are able to develop a system for representing faces with a privileged position for an average face, consistent with evidence from identity aftereffects. However, early visual experience is necessary to set up the neural architecture necessary for averageness to influence perceptions of attractiveness with its normal potency. © The Author(s) 2016.

  7. Self-motion perception in autism is compromised by visual noise but integrated optimally across multiple senses

    PubMed Central

    Zaidel, Adam; Goin-Kochel, Robin P.; Angelaki, Dora E.

    2015-01-01

    Perceptual processing in autism spectrum disorder (ASD) is marked by superior low-level task performance and inferior complex-task performance. This observation has led to theories of defective integration in ASD of local parts into a global percept. Despite mixed experimental results, this notion maintains widespread influence and has also motivated recent theories of defective multisensory integration in ASD. Impaired ASD performance in tasks involving classic random dot visual motion stimuli, corrupted by noise as a means to manipulate task difficulty, is frequently interpreted to support this notion of global integration deficits. By manipulating task difficulty independently of visual stimulus noise, here we test the hypothesis that heightened sensitivity to noise, rather than integration deficits, may characterize ASD. We found that although perception of visual motion through a cloud of dots was unimpaired without noise, the addition of stimulus noise significantly affected adolescents with ASD, more than controls. Strikingly, individuals with ASD demonstrated intact multisensory (visual–vestibular) integration, even in the presence of noise. Additionally, when vestibular motion was paired with pure visual noise, individuals with ASD demonstrated a different strategy than controls, marked by reduced flexibility. This result could be simulated by using attenuated (less reliable) and inflexible (not experience-dependent) Bayesian priors in ASD. These findings question widespread theories of impaired global and multisensory integration in ASD. Rather, they implicate increased sensitivity to sensory noise and less use of prior knowledge in ASD, suggesting increased reliance on incoming sensory information. PMID:25941373
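
    The multisensory integration tested above is usually benchmarked against the standard reliability-weighted (maximum-likelihood) fusion of Gaussian cues. A minimal sketch of that benchmark model, not the study's own simulation code:

```python
def fuse_cues(mu_vis, var_vis, mu_vest, var_vest):
    """Reliability-weighted fusion of two Gaussian cue estimates:
    the standard model of optimal visual-vestibular integration.

    Each cue is weighted by its inverse variance (reliability); the
    fused variance is never worse than the better single cue.
    """
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
    var = 1.0 / (1.0 / var_vis + 1.0 / var_vest)
    return mu, var
```

    Adding stimulus noise raises `var_vis`, shifting weight toward the vestibular cue; intact integration in ASD means this reweighting still follows the optimal prediction even as visual-only performance degrades.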

  8. High-resolution remotely sensed small target detection by imitating fly visual perception mechanism.

    PubMed

    Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min

    2012-01-01

    The difficulties and limitations of small target detection methods for high-resolution remote sensing data have recently become a research focus. Inspired by the information capture and processing theory of the fly visual system, this paper constructs a characterized model of information perception that exploits the fly system's advantages of fast and accurate small target detection in complex, variable natural environments. The proposed model forms a theoretical basis of small target detection for high-resolution remote sensing data. After comparing the prevailing simulation mechanisms behind fly visual systems, we propose a fly-imitated method of information processing for high-resolution remote sensing data. A small target detector and corresponding detection algorithm are designed by simulating the fly visual system's mechanisms of information acquisition, compression, and fusion, the function of its pool cells, and its nonlinear self-adaptive character. Experiments verify the feasibility and rationality of the proposed small target detection model and fly-imitated visual perception method.

  9. How (and why) the visual control of action differs from visual perception

    PubMed Central

    Goodale, Melvyn A.

    2014-01-01

    Vision not only provides us with detailed knowledge of the world beyond our bodies, but it also guides our actions with respect to objects and events in that world. The computations required for vision-for-perception are quite different from those required for vision-for-action. The former uses relational metrics and scene-based frames of reference while the latter uses absolute metrics and effector-based frames of reference. These competing demands on vision have shaped the organization of the visual pathways in the primate brain, particularly within the visual areas of the cerebral cortex. The ventral ‘perceptual’ stream, projecting from early visual areas to inferior temporal cortex, helps to construct the rich and detailed visual representations of the world that allow us to identify objects and events, attach meaning and significance to them and establish their causal relations. By contrast, the dorsal ‘action’ stream, projecting from early visual areas to the posterior parietal cortex, plays a critical role in the real-time control of action, transforming information about the location and disposition of goal objects into the coordinate frames of the effectors being used to perform the action. The idea of two visual systems in a single brain might seem initially counterintuitive. Our visual experience of the world is so compelling that it is hard to believe that some other quite independent visual signal—one that we are unaware of—is guiding our movements. But evidence from a broad range of studies from neuropsychology to neuroimaging has shown that the visual signals that give us our experience of objects and events in the world are not the same ones that control our actions. PMID:24789899

  10. Involvement of Right STS in Audio-Visual Integration for Affective Speech Demonstrated Using MEG

    PubMed Central

    Hagan, Cindy C.; Woods, Will; Johnson, Sam; Green, Gary G. R.; Young, Andrew W.

    2013-01-01

    Speech and emotion perception are dynamic processes in which it may be optimal to integrate synchronous signals emitted from different sources. Studies of audio-visual (AV) perception of neutrally expressed speech demonstrate supra-additive (i.e., where AV>[unimodal auditory+unimodal visual]) responses in left STS to crossmodal speech stimuli. However, emotions are often conveyed simultaneously with speech; through the voice in the form of speech prosody and through the face in the form of facial expression. Previous studies of AV nonverbal emotion integration showed a role for right (rather than left) STS. The current study therefore examined whether the integration of facial and prosodic signals of emotional speech is associated with supra-additive responses in left (cf. results for speech integration) or right (due to emotional content) STS. As emotional displays are sometimes difficult to interpret, we also examined whether supra-additive responses were affected by emotional incongruence (i.e., ambiguity). Using magnetoencephalography, we continuously recorded eighteen participants as they viewed and heard AV congruent emotional and AV incongruent emotional speech stimuli. Significant supra-additive responses were observed in right STS within the first 250 ms for emotionally incongruent and emotionally congruent AV speech stimuli, which further underscores the role of right STS in processing crossmodal emotive signals. PMID:23950977

  12. Evaluation of a visual risk communication tool: effects on knowledge and perception of blood transfusion risk.

    PubMed

    Lee, D H; Mehta, M D

    2003-06-01

Effective risk communication in transfusion medicine is important for health-care consumers, but understanding the numerical magnitude of risks can be difficult. The objective of this study was to determine the effect of a visual risk communication tool on the knowledge and perception of transfusion risk. Laypeople were randomly assigned to receive transfusion risk information in either a written or a visual presentation format for communicating and comparing the probabilities of transfusion risks relative to other hazards. Knowledge of transfusion risk was ascertained with a multiple-choice quiz, and risk perception was ascertained by psychometric scaling and principal components analysis. Two hundred subjects were recruited and randomly assigned. Risk communication with both written and visual presentation formats increased knowledge of transfusion risk and decreased the perceived dread and severity of transfusion risk. Neither format changed the perceived knowledge and control of transfusion risk, nor the perceived benefit of transfusion. No differences in knowledge or risk perception outcomes were detected between the groups randomly assigned to written or visual presentation formats. Risk communication that incorporates risk comparisons in either written or visual presentation formats can improve knowledge and reduce the perception of transfusion risk in laypeople.

  13. How to make a good animation: A grounded cognition model of how visual representation design affects the construction of abstract physics knowledge

    NASA Astrophysics Data System (ADS)

    Chen, Zhongzhou; Gladding, Gary

    2014-06-01

Visual representations play a critical role in teaching physics. However, since we do not have a satisfactory understanding of how visual perception impacts the construction of abstract knowledge, most visual representations used in instruction are either created from existing conventions or designed according to the instructor's intuition, which leads to significant variance in their effectiveness. In this paper we propose a cognitive mechanism based on grounded cognition, suggesting that visual perception affects understanding by activating "perceptual symbols": the basic cognitive units used by the brain to construct a concept. A good visual representation activates perceptual symbols that are essential for constructing the represented concept, whereas a bad representation does the opposite. As a proof of concept, we conducted a clinical experiment in which participants received one of three versions of a multimedia tutorial teaching the integral expression of electric potential. The versions differed only in the details of their visual representation design; only one contained perceptual features that activate the perceptual symbols essential for constructing the idea of "accumulation." On a subsequent post-test, participants who received this version significantly outperformed those who received the other two versions, which were designed to mimic conventional classroom visual representations.

  14. Visual illusion of tool use recalibrates tactile perception

    PubMed Central

    Miller, Luke E.; Longo, Matthew R.; Saygin, Ayse P.

    2018-01-01

    Brief use of a tool recalibrates multisensory representations of the user’s body, a phenomenon called tool embodiment. Despite two decades of research, little is known about its boundary conditions. It has been widely argued that embodiment requires active tool use, suggesting a critical role for somatosensory and motor feedback. The present study used a visual illusion to cast doubt on this view. We used a mirror-based setup to induce a visual experience of tool use with an arm that was in fact stationary. Following illusory tool use, tactile perception was recalibrated on this stationary arm, and with equal magnitude as physical use. Recalibration was not found following illusory passive tool holding, and could not be accounted for by sensory conflict or general interhemispheric plasticity. These results suggest visual tool-use signals play a critical role in driving tool embodiment. PMID:28196765

  15. Visual Perception and Visual-Motor Integration in Very Preterm and/or Very Low Birth Weight Children: A Meta-Analysis

    ERIC Educational Resources Information Center

    Geldof, C. J. A.; van Wassenaer, A. G.; de Kieviet, J. F.; Kok, J. H.; Oosterlaan, J.

    2012-01-01

    A range of neurobehavioral impairments, including impaired visual perception and visual-motor integration, are found in very preterm born children, but reported findings show great variability. We aimed to aggregate the existing literature using meta-analysis, in order to provide robust estimates of the effect of very preterm birth on visual…

  16. Global motion perception deficits in autism are reflected as early as primary visual cortex.

    PubMed

    Robertson, Caroline E; Thomas, Cibu; Kravitz, Dwight J; Wallace, Gregory L; Baron-Cohen, Simon; Martin, Alex; Baker, Chris I

    2014-09-01

    Individuals with autism are often characterized as 'seeing the trees, but not the forest': attuned to individual details in the visual world at the expense of the global percept they compose. Here, we tested the extent to which global processing deficits in autism reflect impairments in (i) primary visual processing; or (ii) decision-formation, using an archetypal example of global perception, coherent motion perception. In an event-related functional MRI experiment, 43 IQ- and age-matched male participants (21 with autism, age range 15-27 years) performed a series of coherent motion perception judgements in which the amount of local motion signals available to be integrated into a global percept was varied by controlling stimulus viewing duration (0.2 or 0.6 s) and the proportion of dots moving in the correct direction (coherence: 4%, 15%, 30%, 50%, or 75%). Both typical participants and those with autism evidenced the same basic pattern of accuracy in judging the direction of motion, with performance decreasing with reduced coherence and shorter viewing durations. Critically, these effects were exaggerated in autism: despite equal performance at the long duration, performance was more strongly reduced by shortening viewing duration in autism (P < 0.015) and decreasing stimulus coherence (P < 0.008). To assess the neural correlates of these effects we focused on the responses of primary visual cortex and the middle temporal area, critical in the early visual processing of motion signals, as well as a region in the intraparietal sulcus thought to be involved in perceptual decision-making. The behavioural results were mirrored in both primary visual cortex and the middle temporal area, with a greater reduction in response at short, compared with long, viewing durations in autism compared with controls (both P < 0.018). In contrast, there was no difference between the groups in the intraparietal sulcus (P > 0.574). These findings suggest that reduced
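The coherence manipulation described above (4% to 75% of dots moving in the signal direction) is the standard random-dot kinematogram construction. A minimal sketch of one frame update, with all parameter names hypothetical rather than taken from the study's stimulus code:

```python
import numpy as np

def coherent_motion_step(n_dots=100, coherence=0.15, speed=1.0, rng=None):
    """One frame of dot displacements for a random-dot kinematogram.

    A `coherence` fraction of dots moves in the signal direction
    (rightward here); the rest move in random directions.
    """
    if rng is None:
        rng = np.random.default_rng()
    n_signal = int(round(coherence * n_dots))
    angles = rng.uniform(0, 2 * np.pi, size=n_dots)  # noise directions
    angles[:n_signal] = 0.0                          # signal dots: rightward
    dx = speed * np.cos(angles)
    dy = speed * np.sin(angles)
    return dx, dy

dx, dy = coherent_motion_step(n_dots=200, coherence=0.5,
                              rng=np.random.default_rng(1))
# At 50% coherence the net horizontal drift is clearly positive
print(dx.mean() > 0)
```

Lowering `coherence` toward 4% makes the net drift harder to detect, which is how the task difficulty was graded.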

  17. Ventral and Dorsal Visual Stream Contributions to the Perception of Object Shape and Object Location

    PubMed Central

    Zachariou, Valentinos; Klatzky, Roberta; Behrmann, Marlene

    2017-01-01

    Growing evidence suggests that the functional specialization of the two cortical visual pathways may not be as distinct as originally proposed. Here, we explore possible contributions of the dorsal “where/how” visual stream to shape perception and, conversely, contributions of the ventral “what” visual stream to location perception in human adults. Participants performed a shape detection task and a location detection task while undergoing fMRI. For shape detection, comparable BOLD activation in the ventral and dorsal visual streams was observed, and the magnitude of this activation was correlated with behavioral performance. For location detection, cortical activation was significantly stronger in the dorsal than ventral visual pathway and did not correlate with the behavioral outcome. This asymmetry in cortical profile across tasks is particularly noteworthy given that the visual input was identical and that the tasks were matched for difficulty in performance. We confirmed the asymmetry in a subsequent psychophysical experiment in which participants detected changes in either object location or shape, while ignoring the other, task-irrelevant dimension. Detection of a location change was slowed by an irrelevant shape change matched for difficulty, but the reverse did not hold. We conclude that both ventral and dorsal visual streams contribute to shape perception, but that location processing appears to be essentially a function of the dorsal visual pathway. PMID:24001005

  18. Visual hallucinations in schizophrenia: confusion between imagination and perception.

    PubMed

    Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S

    2008-05-01

    An association between hallucinations and reality-monitoring deficit has been repeatedly observed in patients with schizophrenia. Most data concern auditory/verbal hallucinations. The aim of this study was to investigate the association between visual hallucinations and a specific type of reality-monitoring deficit, namely confusion between imagined and perceived pictures. Forty-one patients with schizophrenia and 43 healthy control participants completed a reality-monitoring task. Thirty-two items were presented either as written words or as pictures. After the presentation phase, participants had to recognize the target words and pictures among distractors, and then remember their mode of presentation. All groups of participants recognized the pictures better than the words, except the patients with visual hallucinations, who presented the opposite pattern. The participants with visual hallucinations made more misattributions to pictures than did the others, and higher ratings of visual hallucinations were correlated with increased tendency to remember words as pictures. No association with auditory hallucinations was revealed. Our data suggest that visual hallucinations are associated with confusion between visual mental images and perception.

  19. [Visual perception of Kanji characters and complicated figures. II. Visual P300 event-related potentials in patients with mental retardation].

    PubMed

    Sata, Yoshimi; Inagaki, Masumi; Shirane, Seiko; Kaga, Makiko

    2002-11-01

In order to objectively evaluate visual perception in patients with mental retardation (MR), P300 event-related potentials (ERPs) to visual oddball tasks were recorded in 26 patients and 13 age-matched healthy volunteers. The latency and amplitude of the visual P300 in response to Japanese ideogram stimuli (a pair of familiar or unfamiliar Kanji characters) and a pair of meaningless complicated figures were measured. Visual P300 was observed in almost all MR patients; however, its peak latency was significantly prolonged compared with control subjects. There was no significant difference in P300 latency among the three tasks. The distribution pattern of P300 in MR patients differed from that in controls, with larger amplitudes in the frontal region. Latency decreased with age in both groups, and the developmental change in P300 latency corresponded to developmental age rather than chronological age. These findings suggest that MR patients have impaired visual perceptual processing. Assessment of P300 latencies to visual stimuli may be useful as an objective indicator of mental deficit.

  20. Visual shape perception as Bayesian inference of 3D object-centered shape representations.

    PubMed

    Erdogan, Goker; Jacobs, Robert A

    2017-11-01

Despite decades of research, little is known about how people visually perceive object shape. We hypothesize that a promising approach to shape perception is provided by a "visual perception as Bayesian inference" framework which augments an emphasis on visual representation with an emphasis on the idea that shape perception is a form of statistical inference. Our hypothesis claims that shape perception of unfamiliar objects can be characterized as statistical inference of 3D shape in an object-centered coordinate system. We describe a computational model based on our theoretical framework, and provide evidence for the model along two lines. First, we show that, counterintuitively, the model accounts for viewpoint-dependency of object recognition, traditionally regarded as evidence against people's use of 3D object-centered shape representations. Second, we report the results of an experiment using a shape similarity task, and present an extensive evaluation of existing models' abilities to account for the experimental data. We find that our shape inference model captures subjects' behaviors better than competing models. Taken as a whole, our experimental and computational results illustrate the promise of our approach and suggest that people's shape representations of unfamiliar objects are probabilistic, 3D, and object-centered.
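The "shape perception as Bayesian inference" framework can be illustrated with a toy discrete version: a posterior over candidate 3D shapes given noisy observed features. The shape hypotheses, feature vectors, and noise level below are invented for illustration and are unrelated to the paper's actual model:

```python
import numpy as np

# Three hypothetical 3D shape hypotheses, each predicting a feature
# vector (e.g., projected contour measurements); observation is noisy.
shapes = {
    "cube": np.array([1.0, 1.0, 1.0]),
    "slab": np.array([1.0, 1.0, 0.2]),
    "rod":  np.array([1.0, 0.2, 0.2]),
}
prior = {name: 1 / 3 for name in shapes}         # uniform prior
observed = np.array([0.95, 1.05, 0.25])          # noisy view of a "slab"
sigma = 0.1                                      # observation noise

def likelihood(pred, obs, sigma):
    # Isotropic Gaussian likelihood of the observed features
    return np.exp(-np.sum((pred - obs) ** 2) / (2 * sigma ** 2))

# Posterior ∝ prior × likelihood, normalized over hypotheses
unnorm = {n: prior[n] * likelihood(p, observed, sigma) for n, p in shapes.items()}
z = sum(unnorm.values())
posterior = {n: v / z for n, v in unnorm.items()}
best = max(posterior, key=posterior.get)
print(best)  # "slab" has the highest posterior
```

The paper's model performs this kind of inference over continuous, object-centered 3D shape representations rather than a three-item discrete set.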

  1. Assistive Technology Competencies of Teachers of Students with Visual Impairments: A Comparison of Perceptions

    ERIC Educational Resources Information Center

    Zhou, Li; Smith, Derrick W.; Parker, Amy T.; Griffin-Shirley, Nora

    2011-01-01

    This study surveyed teachers of students with visual impairments in Texas on their perceptions of a set of assistive technology competencies developed for teachers of students with visual impairments by Smith and colleagues (2009). Differences in opinion between practicing teachers of students with visual impairments and Smith's group of…

  2. Testing the generality of the zoom-lens model: Evidence for visual-pathway specific effects of attended-region size on perception.

    PubMed

    Goodhew, Stephanie C; Lawrence, Rebecca K; Edwards, Mark

    2017-05-01

There are volumes of information available to process in visual scenes. Visual spatial attention is a critically important selection mechanism that prevents these volumes from overwhelming our visual system's limited-capacity processing resources. We were interested in understanding the effect of the size of the attended area on visual perception. The prevailing model of attended-region size across cognition, perception, and neuroscience is the zoom-lens model. This model stipulates that the magnitude of perceptual processing enhancement is inversely related to the size of the attended region, such that a narrow attended region facilitates greater perceptual enhancement than a wider region. Yet visual processing is subserved by two major visual pathways (magnocellular and parvocellular) that operate with a degree of independence in early visual processing and encode contrasting visual information. Historically, testing of the zoom-lens model has used measures of spatial acuity ideally suited to parvocellular processing. This, therefore, raises questions about the generality of the zoom-lens model to different aspects of visual perception. We found that while a narrow attended region facilitated spatial acuity and the perception of high spatial frequency targets, it had no impact on either temporal acuity or the perception of low spatial frequency targets. This pattern also held up when targets were not presented centrally. This supports the notion that visual attended-region size has dissociable effects on magnocellular versus parvocellular mediated visual processing.

  3. Phosphene Perception Relates to Visual Cortex Glutamate Levels and Covaries with Atypical Visuospatial Awareness.

    PubMed

    Terhune, Devin B; Murray, Elizabeth; Near, Jamie; Stagg, Charlotte J; Cowey, Alan; Cohen Kadosh, Roi

    2015-11-01

Phosphenes are illusory visual percepts produced by the application of transcranial magnetic stimulation to occipital cortex. Phosphene thresholds, the minimum stimulation intensity required to reliably produce phosphenes, are widely used as an index of cortical excitability. However, the neural basis of phosphene thresholds and their relationship to individual differences in visual cognition are poorly understood. Here, we investigated the neurochemical basis of phosphene perception by measuring basal GABA and glutamate levels in primary visual cortex using magnetic resonance spectroscopy. We further examined whether phosphene thresholds would relate to the visuospatial phenomenology of grapheme-color synesthesia, a condition characterized by atypical binding and involuntary color photisms. Phosphene thresholds negatively correlated with glutamate concentrations in visual cortex, with lower thresholds associated with elevated glutamate. This relationship was robust, present in both controls and synesthetes, and exhibited neurochemical, topographic, and threshold specificity. Projector synesthetes, who experience color photisms as spatially colocalized with inducing graphemes, displayed lower phosphene thresholds than associator synesthetes, who experience photisms as internal images, with both exhibiting lower thresholds than controls. These results suggest that phosphene perception is driven by interindividual variation in glutamatergic activity in primary visual cortex and relates to cortical processes underlying individual differences in visuospatial awareness.

  4. Vestibular signals in macaque extrastriate visual cortex are functionally appropriate for heading perception

    PubMed Central

    Liu, Sheng; Angelaki, Dora E.

    2009-01-01

    Visual and vestibular signals converge onto the dorsal medial superior temporal area (MSTd) of the macaque extrastriate visual cortex, which is thought to be involved in multisensory heading perception for spatial navigation. Peripheral otolith information, however, is ambiguous and cannot distinguish linear accelerations experienced during self-motion from those due to changes in spatial orientation relative to gravity. Here we show that, unlike peripheral vestibular sensors but similar to lobules 9 and 10 of the cerebellar vermis (nodulus and uvula), MSTd neurons respond selectively to heading and not to changes in orientation relative to gravity. In support of a role in heading perception, MSTd vestibular responses are also dominated by velocity-like temporal dynamics, which might optimize sensory integration with visual motion information. Unlike the cerebellar vermis, however, MSTd neurons also carry a spatial orientation-independent rotation signal from the semicircular canals, which could be useful in compensating for the effects of head rotation on the processing of optic flow. These findings show that vestibular signals in MSTd are appropriately processed to support a functional role in multisensory heading perception. PMID:19605631

  5. Asymmetries for the Visual Expression and Perception of Speech

    ERIC Educational Resources Information Center

    Nicholls, Michael E. R.; Searle, Dara A.

    2006-01-01

    This study explored asymmetries for movement, expression and perception of visual speech. Sixteen dextral models were videoed as they articulated: "bat," "cat," "fat," and "sat." Measurements revealed that the right side of the mouth was opened wider and for a longer period than the left. The asymmetry was accentuated at the beginning and ends of…

  6. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory.

    PubMed

    Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E

    2010-05-01

    The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
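The optimal cue-integration rule that this line of research tests weights each cue by its reliability (inverse variance), and predicts a combined estimate that is more precise than either cue alone. A minimal sketch with illustrative numbers (the headings and variances below are invented, not from the reviewed experiments):

```python
import numpy as np

def integrate(mu_vis, sigma_vis, mu_vest, sigma_vest):
    """Reliability-weighted (maximum-likelihood) combination of
    visual and vestibular heading estimates."""
    # Each weight is the cue's inverse variance, normalized
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_vest**2)
    w_vest = 1 - w_vis
    mu = w_vis * mu_vis + w_vest * mu_vest
    # Predicted combined uncertainty is below either single-cue sigma
    sigma = np.sqrt((sigma_vis**2 * sigma_vest**2)
                    / (sigma_vis**2 + sigma_vest**2))
    return mu, sigma

# Visual cue: heading 10 deg, sigma 2 deg; vestibular: 4 deg, sigma 4 deg
mu, sigma = integrate(mu_vis=10.0, sigma_vis=2.0, mu_vest=4.0, sigma_vest=4.0)
print(mu, sigma)  # 8.8, ~1.79: estimate lies nearer the more reliable cue
```

The two signature predictions, combined estimates biased toward the more reliable cue and combined thresholds below single-cue thresholds, are exactly what the reviewed behavioral studies test against.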

  7. Visual Complexity and Affect: Ratings Reflect More Than Meets the Eye.

    PubMed

    Madan, Christopher R; Bayer, Janine; Gamer, Matthias; Lonsdorf, Tina B; Sommer, Tobias

    2017-01-01

    Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term 'visual complexity.' Visual complexity can be described as, "a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components." Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an 'arousal-complexity bias' to be a robust phenomenon. Moreover, we found this bias could be attenuated when explicitly indicated but did not correlate with inter-individual difference measures of affective processing, and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing as it has been described for the greater vividness of arousing pictures. The described arousal-complexity bias is also of relevance from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli.
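Computational measures of visual complexity, as used in this abstract, form a broad family; one common proxy is the compressibility of the pixel data, since a near-uniform image compresses well while a many-component, colorful image does not. The following is an illustrative stand-in using a zlib compression ratio, not the specific measures the authors used:

```python
import zlib
import numpy as np

def complexity(pixels: np.ndarray) -> float:
    """Crude complexity proxy: compressed size / raw size of pixel bytes."""
    raw = pixels.astype(np.uint8).tobytes()
    return len(zlib.compress(raw)) / len(raw)

rng = np.random.default_rng(0)
plain = np.full((64, 64), 128)              # flat gray image: low complexity
busy = rng.integers(0, 256, size=(64, 64))  # pixel noise: high complexity
print(complexity(plain) < complexity(busy))  # True
```

Correlating such a measure with subjective complexity ratings, as the study did, is what exposes the residual variance attributable to arousal and valence.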

  8. Visual Complexity and Affect: Ratings Reflect More Than Meets the Eye

    PubMed Central

    Madan, Christopher R.; Bayer, Janine; Gamer, Matthias; Lonsdorf, Tina B.; Sommer, Tobias

    2018-01-01

    Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term ‘visual complexity.’ Visual complexity can be described as, “a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components.” Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an ‘arousal-complexity bias’ to be a robust phenomenon. Moreover, we found this bias could be attenuated when explicitly indicated but did not correlate with inter-individual difference measures of affective processing, and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing as it has been described for the greater vividness of arousing pictures. The described arousal-complexity bias is also of relevance from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli. PMID:29403412

  9. Visual speech perception in foveal and extrafoveal vision: further implications for divisions in hemispheric projections.

    PubMed

    Jordan, Timothy R; Sheen, Mercedes; Abedipour, Lily; Paterson, Kevin B

    2014-01-01

    When observing a talking face, it has often been argued that visual speech to the left and right of fixation may produce differences in performance due to divided projections to the two cerebral hemispheres. However, while it seems likely that such a division in hemispheric projections exists for areas away from fixation, the nature and existence of a functional division in visual speech perception at the foveal midline remains to be determined. We investigated this issue by presenting visual speech in matched hemiface displays to the left and right of a central fixation point, either exactly abutting the foveal midline or else located away from the midline in extrafoveal vision. The location of displays relative to the foveal midline was controlled precisely using an automated, gaze-contingent eye-tracking procedure. Visual speech perception showed a clear right hemifield advantage when presented in extrafoveal locations but no hemifield advantage (left or right) when presented abutting the foveal midline. Thus, while visual speech observed in extrafoveal vision appears to benefit from unilateral projections to left-hemisphere processes, no evidence was obtained to indicate that a functional division exists when visual speech is observed around the point of fixation. Implications of these findings for understanding visual speech perception and the nature of functional divisions in hemispheric projection are discussed.

  10. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  11. Individual differences in visual motion perception and neurotransmitter concentrations in the human brain.

    PubMed

    Takeuchi, Tatsuto; Yoshimoto, Sanae; Shimada, Yasuhiro; Kochiyama, Takanori; Kondo, Hirohito M

    2017-02-19

Recent studies have shown that interindividual variability can be a rich source of information regarding the mechanisms of human visual perception. In this study, we examined the mechanisms underlying interindividual variability in the perception of visual motion, one of the fundamental components of visual scene analysis, by measuring neurotransmitter concentrations using magnetic resonance spectroscopy. First, by psychophysically examining two types of motion phenomena, motion assimilation and motion contrast, we found that, following presentation of the same stimulus, some participants perceived motion assimilation while others perceived motion contrast. Furthermore, we found that the concentration of the excitatory neurotransmitter glutamate-glutamine (Glx) in the dorsolateral prefrontal cortex (Brodmann area 46) was positively correlated with a participant's tendency toward motion assimilation over motion contrast; this effect was not observed in the visual areas. The concentration of the inhibitory neurotransmitter γ-aminobutyric acid had only a weak effect compared with that of Glx. We conclude that excitatory processes in this suprasensory area are important in determining which of the two antagonistic visual motion percepts an individual perceives. This article is part of the themed issue 'Auditory and visual scene analysis'.

  12. Perceptions of Visual Literacy. Selected Readings from the Annual Conference of the International Visual Literacy Association (21st, Scottsdale, Arizona, October 1989).

    ERIC Educational Resources Information Center

    Braden, Roberts A., Ed.; And Others

    These proceedings contain 37 papers from 51 authors noted for their expertise in the field of visual literacy. The collection is divided into three sections: (1) "Examining Visual Literacy" (including, in addition to a 7-year International Visual Literacy Association bibliography covering the period from 1983-1989, papers on the perception of…

  13. Visual and auditory socio-cognitive perception in unilateral temporal lobe epilepsy in children and adolescents: a prospective controlled study.

    PubMed

    Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania

    2014-12-01

A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children and adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion, and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern, and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level than controls on recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs the development of emotion recognition, but can also impair the development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re

  14. Exploration of complex visual feature spaces for object perception

    PubMed Central

    Leeds, Daniel D.; Pyles, John A.; Tarr, Michael J.

    2014-01-01

    The mid- and high-level visual properties supporting object perception in the ventral visual pathway are poorly understood. In the absence of well-specified theory, many groups have adopted a data-driven approach in which they progressively interrogate neural units to establish each unit's selectivity. Such methods are challenging in that they require search through a wide space of feature models and stimuli using a limited number of samples. To more rapidly identify higher-level features underlying human cortical object perception, we implemented a novel functional magnetic resonance imaging method in which visual stimuli are selected in real-time based on BOLD responses to recently shown stimuli. This work was inspired by earlier primate physiology work, in which neural selectivity for mid-level features in IT was characterized using a simple parametric approach (Hung et al., 2012). To extend such work to human neuroimaging, we used natural and synthetic object stimuli embedded in feature spaces constructed on the basis of the complex visual properties of the objects themselves. During fMRI scanning, we employed a real-time search method to control continuous stimulus selection within each image space. This search was designed to maximize neural responses across a pre-determined 1 cm³ brain region within ventral cortex. To assess the value of this method for understanding object encoding, we examined both the behavior of the method itself and the complex visual properties the method identified as reliably activating selected brain regions. We observed: (1) Regions selective for both holistic and component object features and for a variety of surface properties; (2) Object stimulus pairs near one another in feature space that produce responses at the opposite extremes of the measured activity range. Together, these results suggest that real-time fMRI methods may yield more widely informative measures of selectivity within the broad classes of visual features

  15. Perceiving while acting: action affects perception.

    PubMed

    Schubö, Anna; Prinz, Wolfgang; Aschersleben, Gisa

    2004-08-01

    In two experiments we studied how motor responses affect stimulus encoding when stimuli and responses are functionally unrelated and merely overlap in time. Such R-S effects across S-R assignments have been reported by Schubö, Aschersleben, and Prinz (2001), who found that stimulus encoding was affected by concurrent response execution in the sense of a contrast (i.e., emphasizing differences). The present study aimed at elucidating the mechanisms underlying this effect. Experiment 1 studied the time course of the R-S effect. Contrast was only obtained for short intertrial intervals (ITIs). With long ITIs contrast turned into assimilation (i.e., emphasizing similarities). Experiment 2 excluded an interpretation of the assimilation effect in terms of motor repetition. Our findings support the notion of a shared representational domain for perception and action control, and suggest that contrast between stimulus and response codes emerges when two S-R assignments compete with each other in perception. When perceptual competition is over, assimilation emerges in memory.

  16. Neuro-cognitive mechanisms of conscious and unconscious visual perception: From a plethora of phenomena to general principles

    PubMed Central

    Kiefer, Markus; Ansorge, Ulrich; Haynes, John-Dylan; Hamker, Fred; Mattler, Uwe; Verleger, Rolf; Niedeggen, Michael

    2011-01-01

    Over the last few decades, psychological and neuroscience approaches have driven considerable progress in elucidating the cognitive and neural mechanisms that underlie phenomenal visual awareness. In this article, we provide an overview of the latest research investigating important phenomena in conscious and unconscious vision. We identify general principles to characterize conscious and unconscious visual perception, which may serve as important building blocks for a unified model to explain the plethora of findings. We argue that, in particular, the integration of principles from both conscious and unconscious vision is advantageous and provides critical constraints for developing adequate theoretical models. Based on the principles identified in our review, we outline essential components of a unified model of conscious and unconscious visual perception. We propose that awareness refers to consolidated visual representations, which are accessible to the entire brain and therefore globally available. However, visual awareness not only depends on consolidation within the visual system, but is additionally the result of a post-sensory gating process, which is mediated by higher-level cognitive control mechanisms. We further propose that amplification of visual representations by attentional sensitization is not exclusive to the domain of conscious perception, but also applies to visual stimuli that remain unconscious. Conscious and unconscious processing modes are highly interdependent, with influences in both directions. We therefore argue that exactly this interdependence renders a unified model of conscious and unconscious visual perception valuable. Computational modeling jointly with focused experimental research could lead to a better understanding of the plethora of empirical phenomena in consciousness research. PMID:22253669

  17. Sampling of post-Riley visual artists surreptitiously probing perception

    NASA Astrophysics Data System (ADS)

    Daly, Scott J.

    2003-06-01

    Attending any conference on visual perception undoubtedly leaves one exposed to the work of Salvador Dali, whose extended phase of work exploring what he dubbed "the paranoiac-critical method" is widely cited as an example of multiple perceptions arising from conflicting input. While all visual art is intertwined with perceptual science, from the convincing three-dimensional illusions of the Renaissance to the isolated visual illusions of Bridget Riley's Op-Art, artists in recent times have rarely made direct statements about perception. However, there are still a number of artists working today whose work contains perceptual questions and exemplars that can be of interest to vision scientists and imaging engineers. This talk will begin by sampling from Op-Art, which is most directly related to psychophysical test stimuli, and will then discuss "perceptual installations" from artists such as James Turrell, whose focus is often directly on natural light, with no distortions imposed by any capture or display apparatus. His work generally involves installations that use daylight and focus the viewer on its nuanced qualities, such as umbra, air particle interactions, and effects of light adaptation. He is one of the last artists to actively discuss perception. Next we discuss minimal art and electronic art, with video artist Nam June Paik discussing the "intentionally boring" art of minimalism. Another artist using installations is Sandy Skoglund, who creates environments of constant spectral albedo, with the exception of her human occupants. Tom Shannon also uses installations as his medium to delve into 3D aspects of depth and perspective, but in an atomized fashion. Beginning with installation concepts, Calvin Collum then adds the restrictive viewpoint of photography to create initially confusing images where the pictorial content and depth features are independent (analogous to the work of Patrick Hughes). Andy Goldsworthy also combines photography with concepts of

  18. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067

  19. Audio-visual temporal perception in children with restored hearing.

    PubMed

    Gori, Monica; Chilosi, Anna; Forli, Francesca; Burr, David

    2017-05-01

    It is not clear how audio-visual temporal perception develops in children with restored hearing. In this study we measured temporal discrimination thresholds with an audio-visual temporal bisection task in 9 deaf children with restored audition and 22 typically hearing children. In typically hearing children, audition was more precise than vision, with no gain in multisensory conditions (as previously reported in Gori et al. (2012b)). However, deaf children with restored audition showed similar auditory and visual thresholds and some evidence of gain in audio-visual temporal multisensory conditions. Interestingly, we found a strong correlation between auditory weighting of multisensory signals and quality of language: patients who gave more weight to audition had better language skills. Similarly, auditory thresholds for the temporal bisection task were also a good predictor of language skills. This result supports the idea that temporal auditory processing is associated with language development. Copyright © 2017. Published by Elsevier Ltd.

  20. The validity of two clinical tests of visual-motor perception.

    PubMed

    Wallbrown, J D; Wallbrown, F H; Engin, A W

    1977-04-01

    The study investigated the relative efficiency of the Bender and MPD as assessors of achievement-related errors in visual-motor perception. Clinical experience with these two tests suggests that beyond first grade the MPD is more sensitive than the Bender for purposes of measuring deficits in visual-motor perception that interfere with effective classroom learning. The sample was composed of 153 third-grade children from two upper-middle-class elementary schools in a suburban school system in central Ohio. For three of the four achievement criteria, the results were clearly congruent with the hypothesis stated above. That is, SpCD errors from the MPD not only showed significantly higher negative rs with the criteria (reading vocabulary, reading comprehension, and mathematics computation) than Koppitz errors from the Bender, but also accounted for a much higher proportion of the variance in these criteria. Thus, the findings suggest that psychologists engaged in the assessment of older children should seriously consider adding the MPD to their assessment battery.

  1. The perception of visual emotion: comparing different measures of awareness.

    PubMed

    Szczepanowski, Remigiusz; Traczyk, Jakub; Wierzchoń, Michał; Cleeremans, Axel

    2013-03-01

    Here, we explore the sensitivity of different awareness scales in revealing conscious reports on visual emotion perception. Participants were exposed to a backward masking task involving fearful faces and asked to rate their conscious awareness of perceiving emotion in facial expressions using three different subjective measures: confidence ratings (CRs), with the conventional taxonomy of certainty; the perceptual awareness scale (PAS), through which participants categorize "raw" visual experience; and post-decision wagering (PDW), which involves economic categorization. Our results show that the CR measure was the most exhaustive and the most graded. In contrast, the PAS and PDW measures suggested instead that consciousness of emotional stimuli is dichotomous. Possible explanations for this inconsistency are discussed. Finally, our results also indicate that PDW biases awareness ratings by enhancing first-order accuracy of emotion perception. This effect was possibly a result of higher motivation induced by monetary incentives. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. The effect of neurofeedback on a brain wave and visual perception in stroke: a randomized control trial.

    PubMed

    Cho, Hwi-Young; Kim, Kitae; Lee, Byounghee; Jung, Jinhwa

    2015-03-01

    [Purpose] This study investigated brain wave and visual perception changes in stroke subjects using neurofeedback (NFB) training. [Subjects] Twenty-seven stroke subjects were randomly allocated to the NFB group (n = 13) and the control (CON) group (n = 14). [Methods] Two expert therapists provided both groups with traditional rehabilitation therapy in 30 thirty-minute sessions over the course of 6 weeks. NFB training was provided only to the NFB group; the CON group received traditional rehabilitation therapy only. Before and after the 6-week intervention, a brain wave test and the Motor-Free Visual Perception Test (MVPT) were performed. [Results] Both groups showed significant differences in their relative beta wave values and attention concentration quotients. Moreover, the NFB group showed a significant difference in MVPT visual discrimination, form constancy, visual memory, visual closure, spatial relation, raw score, and processing time. [Conclusion] This study demonstrated that NFB training is more effective for improving concentration and visual perception than traditional rehabilitation alone. Further studies should include detailed and diverse investigations that consider the number and characteristics of subjects and the NFB training period.

  3. Emotion perception, but not affect perception, is impaired with semantic memory loss.

    PubMed

    Lindquist, Kristen A; Gendron, Maria; Barrett, Lisa Feldman; Dickerson, Bradford C

    2014-04-01

    For decades, psychologists and neuroscientists have hypothesized that the ability to perceive emotions on others' faces is inborn, prelinguistic, and universal. Concept knowledge about emotion has been assumed to be epiphenomenal to emotion perception. In this article, we report findings from 3 patients with semantic dementia that cannot be explained by this "basic emotion" view. These patients, who have substantial deficits in semantic processing abilities, spontaneously perceived pleasant and unpleasant expressions on faces, but not discrete emotions such as anger, disgust, fear, or sadness, even in a task that did not require the use of emotion words. Our findings support the hypothesis that discrete emotion concept knowledge helps transform perceptions of affect (positively or negatively valenced facial expressions) into perceptions of discrete emotions such as anger, disgust, fear, and sadness. These findings have important consequences for understanding the processes supporting emotion perception.

  4. Tornado Warning Perception and Response: Integrating the Roles of Visual Design, Demographics, and Hazard Experience.

    PubMed

    Schumann, Ronald L; Ash, Kevin D; Bowser, Gregg C

    2018-02-01

    Recent advancements in severe weather detection and warning dissemination technologies have reduced, but not eliminated, large-casualty tornado hazards in the United States. Research on warning cognition and behavioral response by the public has the potential to further reduce tornado-related deaths and injuries; however, less research has been conducted in this area compared to tornado research in the physical sciences. Extant research in this vein tends to bifurcate. One branch of studies derives from classic risk perception, which investigates cognitive, affective, and sociocultural factors in relation to concern and preparation for uncertain risks. Another branch focuses on psychological, social, and cultural factors implicated in warning response for rapid onset hazards, with attention paid to previous experience and message design. Few studies link risk perceptions with cognition and response as elicited by specific examples of warnings. The present study unites risk perception, cognition, and response approaches by testing the contributions of hypothesized warning response drivers in one set of path models. Warning response is approximated by perceived fear and intended protective action as reported by survey respondents when exposed to hypothetical tornado warning scenarios. This study considers the roles of hazard knowledge acquisition, information-seeking behaviors, previous experience, and sociodemographic factors while controlling for the effects of the visual warning graphic. Findings from the study indicate the primacy of a user's visual interpretation of a warning graphic in shaping tornado warning response. Results also suggest that information-seeking habits, previous tornado experience, and local disaster culture play strong influencing roles in warning response. © 2017 Society for Risk Analysis.

  5. The effects of alphabet and expertise on letter perception

    PubMed Central

    Wiley, Robert W.; Wilson, Colin; Rapp, Brenda

    2016-01-01

    Long-standing questions in human perception concern the nature of the visual features that underlie letter recognition and the extent to which the visual processing of letters is affected by differences in alphabets and levels of viewer expertise. We examined these issues in a novel approach using a same-different judgment task on pairs of letters from the Arabic alphabet with two participant groups—one with no prior exposure to Arabic and one with reading proficiency. Hierarchical clustering and linear mixed-effects modeling of reaction times and accuracy provide evidence that both the specific characteristics of the alphabet and observers’ previous experience with it affect how letters are perceived and visually processed. The findings of this research further our understanding of the multiple factors that affect letter perception and support the view of a visual system that dynamically adjusts its weighting of visual features as expert readers come to more efficiently and effectively discriminate the letters of the specific alphabet they are viewing. PMID:26913778

  6. Contrast affects flicker and speed perception differently

    NASA Technical Reports Server (NTRS)

    Thompson, P.; Stone, L. S.

    1997-01-01

    We have previously shown that contrast affects speed perception, with lower-contrast drifting gratings perceived as moving more slowly. In a recent study, we examined the implications of this result for models of speed perception that use the amplitude of the response of linear spatio-temporal filters to determine speed. In this study, we investigate whether the contrast dependence of speed can be understood within the context of models in which speed estimation is made using the temporal frequency of the response of linear spatio-temporal filters. We measured the effect of contrast on flicker perception and found that contrast manipulations produce opposite effects on perceived drift rate and perceived flicker rate, i.e., reducing contrast increases the apparent temporal frequency of counterphase-modulated gratings. This finding argues that, if a temporal frequency-based algorithm underlies speed perception, either flicker and speed perception must not be based on the output of the same mechanism, or contrast effects on perceived spatial frequency must reconcile the disparate effects observed for perceived temporal frequency and speed.

  7. Dynamic Visual Perception and Reading Development in Chinese School Children

    ERIC Educational Resources Information Center

    Meng, Xiangzhi; Cheng-Lai, Alice; Zeng, Biao; Stein, John F.; Zhou, Xiaolin

    2011-01-01

    The development of reading skills may depend to a certain extent on the development of basic visual perception. The magnocellular theory of developmental dyslexia assumes that deficits in the magnocellular pathway, indicated by less sensitivity in perceiving dynamic sensory stimuli, are responsible for a proportion of reading difficulties…

  8. A Bayesian model for visual space perception

    NASA Technical Reports Server (NTRS)

    Curry, R. E.

    1972-01-01

    A model for visual space perception is proposed that combines desirable features from the theories of Gibson and Brunswik. The model is a Bayesian processor of proximal stimuli that contains three important elements: an internal model of the Markov process describing the knowledge of the distal world, the a priori distribution of the state of the Markov process, and an internal model relating state to proximal stimuli. The universality of the model is discussed and it is compared with signal detection theory models. Experimental results of Kinchla are used as a special case.
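
    The three elements named in this abstract (a Markov model of the distal world, a prior over its state, and an observation model linking state to proximal stimuli) correspond to the ingredients of a standard recursive Bayesian filter. The following minimal discrete sketch illustrates that correspondence; the matrices are purely illustrative, not taken from the paper.

```python
def normalize(p):
    """Rescale a list of nonnegative weights into a probability distribution."""
    s = sum(p)
    return [x / s for x in p]

def bayes_filter(prior, transition, likelihood, observations):
    """Recursively update belief over hidden distal states.

    prior:        a priori distribution over states, P(x_0)
    transition:   transition[i][j] = P(x_t = j | x_{t-1} = i)  (Markov model)
    likelihood:   likelihood[j][z] = P(z | x = j)  (state -> proximal stimulus)
    observations: sequence of observed proximal stimuli z_t
    """
    n = len(prior)
    belief = list(prior)
    for z in observations:
        # Predict: propagate the belief through the Markov model.
        predicted = [sum(belief[i] * transition[i][j] for i in range(n))
                     for j in range(n)]
        # Update: reweight each state by the likelihood of the observation.
        belief = normalize([predicted[j] * likelihood[j][z] for j in range(n)])
    return belief

# Two hidden "distal" states, two possible proximal stimuli.
prior = [0.5, 0.5]                      # a priori state distribution
transition = [[0.9, 0.1], [0.1, 0.9]]   # states tend to persist over time
likelihood = [[0.8, 0.2], [0.3, 0.7]]   # state 0 mostly emits stimulus 0
posterior = bayes_filter(prior, transition, likelihood, [0, 0, 1])
```

    After a run of stimulus-0 observations, belief concentrates on state 0; a conflicting observation pulls it back toward uncertainty, which is the sense in which such a processor weighs proximal evidence against its internal world model.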

  9. The Perception of Cooperativeness Without Any Visual or Auditory Communication

    PubMed Central

    Chang, Dong-Seon; Burger, Franziska; de la Rosa, Stephan

    2015-01-01

    Perceiving social information such as the cooperativeness of another person is an important part of human interaction. But can people perceive the cooperativeness of others even without any visual or auditory information? In a novel experimental setup, we connected two people with a rope and made them accomplish a point-collecting task together while they could not see or hear each other. We observed a consistently emerging turn-taking behavior in the interactions and installed a confederate in a subsequent experiment who either minimized or maximized this behavior. Participants experienced this only through the haptic force-feedback of the rope and made evaluations about the confederate after each interaction. We found that perception of cooperativeness was significantly affected only by the manipulation of this turn-taking behavior. Gender- and size-related judgments also significantly differed. Our results suggest that people can perceive social information such as the cooperativeness of other people even in situations where possibilities for communication are minimal. PMID:27551362

  10. The Perception of Cooperativeness Without Any Visual or Auditory Communication.

    PubMed

    Chang, Dong-Seon; Burger, Franziska; Bülthoff, Heinrich H; de la Rosa, Stephan

    2015-12-01

    Perceiving social information such as the cooperativeness of another person is an important part of human interaction. But can people perceive the cooperativeness of others even without any visual or auditory information? In a novel experimental setup, we connected two people with a rope and made them accomplish a point-collecting task together while they could not see or hear each other. We observed a consistently emerging turn-taking behavior in the interactions and installed a confederate in a subsequent experiment who either minimized or maximized this behavior. Participants experienced this only through the haptic force-feedback of the rope and made evaluations about the confederate after each interaction. We found that perception of cooperativeness was significantly affected only by the manipulation of this turn-taking behavior. Gender- and size-related judgments also significantly differed. Our results suggest that people can perceive social information such as the cooperativeness of other people even in situations where possibilities for communication are minimal.

  11. The phase of prestimulus alpha oscillations affects tactile perception.

    PubMed

    Ai, Lei; Ro, Tony

    2014-03-01

    Previous studies have shown that neural oscillations in the 8- to 12-Hz range influence sensory perception. In the current study, we examined whether both the power and phase of these mu/alpha oscillations predict successful conscious tactile perception. Near-threshold tactile stimuli were applied to the left hand while electroencephalographic (EEG) activity was recorded over the contralateral right somatosensory cortex. We found a significant inverted U-shaped relationship between prestimulus mu/alpha power and detection rate, suggesting that there is an intermediate level of alpha power that is optimal for tactile perception. We also found a significant difference in phase angle concentration at stimulus onset that predicted whether the upcoming tactile stimulus was perceived or missed. As has been shown in the visual system, these findings suggest that these mu/alpha oscillations measured over somatosensory areas exert a strong inhibitory control on tactile perception and that pulsed inhibition by these oscillations shapes the state of brain activity necessary for conscious perception. They further suggest that these common phasic processing mechanisms across different sensory modalities and brain regions may reflect a common underlying encoding principle in perceptual processing that leads to momentary windows of perceptual awareness.

  12. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception.

    PubMed

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  13. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    PubMed Central

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  14. Effects of Positive Affect on Risk Perceptions in Adolescence and Young Adulthood

    ERIC Educational Resources Information Center

    Haase, Claudia M.; Silbereisen, Rainer K.

    2011-01-01

    Affective influences may play a key role in adolescent risk taking, but have rarely been studied. Using an audiovisual method of affect induction, two experimental studies examined the effect of positive affect on risk perceptions in adolescence and young adulthood. Outcomes were risk perceptions regarding drinking alcohol, smoking a cigarette,…

  15. See it with feeling: affective predictions during object perception

    PubMed Central

    Barrett, L.F.; Bar, Moshe

    2009-01-01

    People see with feeling. We ‘gaze’, ‘behold’, ‘stare’, ‘gape’ and ‘glare’. In this paper, we develop the hypothesis that the brain's ability to see in the present incorporates a representation of the affective impact of those visual sensations in the past. This representation makes up part of the brain's prediction of what the visual sensations stand for in the present, including how to act on them in the near future. The affective prediction hypothesis implies that responses signalling an object's salience, relevance or value do not occur as a separate step after the object is identified. Instead, affective responses support vision from the very moment that visual stimulation begins. PMID:19528014

  16. Biometric Research in Perception and Neurology Related to the Study of Visual Communication.

    ERIC Educational Resources Information Center

    Metallinos, Nikos

    Contemporary research findings in the fields of perceptual psychology and neurology of the human brain that are directly related to the study of visual communication are reviewed and briefly discussed in this paper. Specifically, the paper identifies those major research findings in visual perception that are relevant to the study of visual…

  17. Stimulus size and eccentricity in visually induced perception of horizontally translational self-motion.

    PubMed

    Nakamura, S; Shimojo, S

    1998-10-01

    The effects of the size and eccentricity of the visual stimulus upon visually induced perception of self-motion (vection) were examined with various sizes of central and peripheral visual stimulation. Analysis indicated that the strength of vection increased linearly with the size of the area in which the moving pattern was presented, but there was no difference in vection strength between central and peripheral stimuli of the same size. Thus, the effect of stimulus size is homogeneous across eccentricities in the visual field.

  18. Motion coherence affects human perception and pursuit similarly.

    PubMed

    Beutter, B R; Stone, L S

    2000-01-01

    Pursuit and perception both require accurate information about the motion of objects. Recovering the motion of objects by integrating the motion of their components is a difficult visual task. Successful integration produces coherent global object motion, while a failure to integrate leaves the incoherent local motions of the components unlinked. We compared the ability of perception and pursuit to perform motion integration by measuring direction judgments and the concomitant eye-movement responses to line-figure parallelograms moving behind stationary rectangular apertures. The apertures were constructed such that only the line segments corresponding to the parallelogram's sides were visible; thus, recovering global motion required the integration of the local segment motion. We investigated several potential motion-integration rules by using stimuli with different object, vector-average, and line-segment terminator-motion directions. We used an oculometric decision rule to directly compare direction discrimination for pursuit and perception. For visible apertures, the percept was a coherent object, and both the pursuit and perceptual performance were close to the object-motion prediction. For invisible apertures, the percept was incoherently moving segments, and both the pursuit and perceptual performance were close to the terminator-motion prediction. Furthermore, both psychometric and oculometric direction thresholds were much higher for invisible apertures than for visible apertures. We constructed a model in which both perception and pursuit are driven by a shared motion-processing stage, with perception having an additional input from an independent static-processing stage. Model simulations were consistent with our perceptual and oculomotor data. Based on these results, we propose the use of pursuit as an objective and continuous measure of perceptual coherence. Our results support the view that pursuit and perception share a common motion

  20. Emotion perception, but not affect perception, is impaired with semantic memory loss

    PubMed Central

    Lindquist, Kristen A.; Gendron, Maria; Feldman Barrett, Lisa; Dickerson, Bradford C.

    2014-01-01

    For decades, psychologists and neuroscientists have hypothesized that the ability to perceive emotions on others’ faces is inborn, pre-linguistic, and universal. Concept knowledge about emotion has been assumed to be epiphenomenal to emotion perception. In this paper, we report findings from three patients with semantic dementia that cannot be explained by this “basic emotion” view. These patients, who have substantial deficits in semantic processing abilities, spontaneously perceived pleasant and unpleasant expressions on faces, but not discrete emotions such as anger, disgust, fear, or sadness, even in a task that did not require the use of emotion words. Our findings support the hypothesis that discrete emotion concept knowledge helps transform perceptions of affect (positively or negatively valenced facial expressions) into perceptions of discrete emotions such as anger, disgust, fear and sadness. These findings have important consequences for understanding the processes supporting emotion perception. PMID:24512242

  1. Multisensory integration and the concert experience: An overview of how visual stimuli can affect what we hear

    NASA Astrophysics Data System (ADS)

    Hyde, Jerald R.

    2004-05-01

    It is clear to those who ``listen'' to concert halls and evaluate their degree of acoustical success that it is quite difficult to separate the acoustical response at a given seat from the multi-modal perception of the whole event. Objective concert hall data have been collected for the purpose of finding a link with their related subjective evaluation and ultimately with the architectural correlates which produce the sound field. This exercise, while important, tends to miss the point that a concert or opera event utilizes all the senses, of which the sound field and visual stimuli are both major contributors to the experience. Objective acoustical factors point to visual input as being significant in the perception of ``acoustical intimacy'' and in the perception of loudness versus distance in large halls. This paper will review the evidence of visual input as a factor in what we ``hear'' and introduce concepts of perceptual constancy, distance perception, static and dynamic visual stimuli, and the general process of the psychology of the integrated experience. A survey of acousticians on their opinions about the auditory-visual aspects of the concert hall experience will be presented. [Work supported in part by the Veneklasen Research Foundation and Veneklasen Associates.]

  2. The Role of Prediction In Perception: Evidence From Interrupted Visual Search

    PubMed Central

    Mereu, Stefania; Zacks, Jeffrey M.; Kurby, Christopher A.; Lleras, Alejandro

    2014-01-01

    Recent studies of rapid resumption—an observer’s ability to quickly resume a visual search after an interruption—suggest that predictions underlie visual perception. Previous studies showed that when the search display changes unpredictably after the interruption, rapid resumption disappears. This conclusion is at odds with our everyday experience, where the visual system seems to be quite efficient despite continuous changes of the visual scene; however, in the real world, changes can typically be anticipated based on previous knowledge. The present study aimed to evaluate whether changes to the visual display can be incorporated into the perceptual hypotheses when observers are allowed to anticipate such changes. Results strongly suggest that an interrupted visual search can be rapidly resumed even when information in the display has changed after the interruption, so long as participants are not only able to anticipate the changes but also aware that such changes might occur. PMID:24820440

  3. A Rotational Motion Perception Neural Network Based on Asymmetric Spatiotemporal Visual Information Processing.

    PubMed

    Hu, Bin; Yue, Shigang; Zhang, Zhuhong

    All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences for each of the three motion elements. There are computational models of translation and expansion/contraction perception; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we proposed a neural network that utilizes a specific spatiotemporal arrangement of asymmetric laterally inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: presynaptic and postsynaptic. In the presynaptic part, there are a number of laterally inhibited DSNNs to extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts, depending on which rotation, clockwise (cw) or counter-clockwise (ccw), is to be perceived. Systematic experiments under various conditions and settings have been carried out and validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.
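    The cyclic arrangement described above can be caricatured in a few lines: sample direction-selective responses at positions around a circle and ask whether the local motion aligns with the clockwise or counter-clockwise tangent at each position. This is an illustrative sketch of the idea only, not the authors' DSNN model; the function name and alignment score are invented.

```python
import numpy as np

def rotation_sign(angles, motion_dirs):
    # Alignment of each local motion direction with the counter-clockwise
    # tangent at its position; positive mean alignment reads as ccw rotation.
    ccw_tangent = angles + np.pi / 2
    score = np.mean(np.cos(motion_dirs - ccw_tangent))
    return "ccw" if score > 0 else "cw"

# Eight direction-selective units arranged in a cyclic order around a circle.
pos = np.linspace(0, 2 * np.pi, 8, endpoint=False)
print(rotation_sign(pos, pos + np.pi / 2))  # ccw
print(rotation_sign(pos, pos - np.pi / 2))  # cw
```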

  4. Stochastic Resonance In Visual Perception

    NASA Astrophysics Data System (ADS)

    Simonotto, Enrico

    1996-03-01

    Stochastic resonance (SR) is a well established physical phenomenon wherein some measure of the coherence of a weak signal can be optimized by random fluctuations, or "noise" (K. Wiesenfeld and F. Moss, Nature 373, 33 (1995)). In all experiments to date the coherence has been measured using numerical analysis of the data, for example, signal-to-noise ratios obtained from power spectra. But can this analysis be replaced by a perceptive task? Previously we had demonstrated this possibility with a numerical model of perceptual bistability applied to the interpretation of ambiguous figures (M. Riani and E. Simonotto, Phys. Rev. Lett. 72, 3120 (1994)). Here I describe an experiment wherein SR is detected in visual perception. A recognizable grayscale photograph was digitized and presented. The picture was then placed beneath a threshold: every pixel for which the gray level exceeded the threshold was painted white, and all others black. For a large enough threshold, the picture is unrecognizable, but the addition of a random number to every pixel renders it interpretable (C. Seife and M. Roberts, The Economist 336, 59, July 29 (1995)). Moreover, the addition of dynamical noise to the pixels greatly enhances an observer's ability to interpret the picture. Here I report the results of psychophysics experiments wherein the effects of both the intensity of the noise and its correlation time were studied.
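    The static manipulation described here (thresholding plus per-pixel noise) is easy to reproduce. The sketch below is a minimal illustration under assumed conventions (gray levels in [0, 1], Gaussian noise); the function name and values are hypothetical.

```python
import numpy as np

def noisy_threshold(image, threshold, noise_sd, seed=0):
    # Paint a pixel white (1) when its gray level plus independent Gaussian
    # noise exceeds the threshold; black (0) otherwise.
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_sd, size=image.shape)
    return (image + noise > threshold).astype(np.uint8)

# A smooth gradient whose gray levels all sit below a high threshold:
img = np.tile(np.linspace(0.0, 0.5, 8), (8, 1))
hard = noisy_threshold(img, threshold=0.6, noise_sd=0.0)  # all black: unrecognizable
soft = noisy_threshold(img, threshold=0.6, noise_sd=0.2)  # brighter pixels fire more often
```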

  5. Tilt and Translation Motion Perception during Pitch Tilt with Visual Surround Translation

    NASA Technical Reports Server (NTRS)

    O'Sullivan, Brita M.; Harm, Deborah L.; Reschke, Millard F.; Wood, Scott J.

    2006-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Previous studies suggest that multisensory integration is critical for discriminating linear accelerations arising from tilt and translation head motion. Visual input is especially important at low frequencies, where canal input is declining. The NASA Tilt Translation Device (TTD) was designed to recreate postflight orientation disturbances by exposing subjects to matching tilt self-motion with conflicting visual surround translation. Previous studies have demonstrated that brief exposures to pitch tilt with fore-aft visual surround translation produced changes in compensatory vertical eye movement responses, postural equilibrium, and motion sickness symptoms. Adaptation appeared greatest with visual scene motion leading (versus lagging) the tilt motion, and the adaptation time constant appeared to be approximately 30 min. The purpose of this study was to compare motion perception when the visual surround translation was in-phase versus out-of-phase with pitch tilt. The in-phase stimulus presented the visual surround motion one would experience if the linear acceleration was due to fore-aft self-translation within a stationary surround, while the out-of-phase stimulus had the visual scene motion leading the tilt by 90 deg as previously used. The tilt stimuli in these conditions were asymmetrical, ranging from an upright orientation to 10 deg pitch back. Another objective of the study was to compare motion perception with the in-phase stimulus when the tilts were asymmetrical relative to upright (0 to 10 deg back) versus symmetrical (10 deg forward to 10 deg back). Twelve subjects (6M, 6F, 22-55 yrs) were tested during 3 sessions separated by at least one week. During each of the three sessions (out-of-phase asymmetrical, in-phase asymmetrical, in-phase symmetrical), subjects were exposed to visual surround translation

  6. Assessing speech perception in children with cochlear implants using a modified hybrid visual habituation procedure.

    PubMed

    Core, Cynthia; Brown, Janean W; Larsen, Michael D; Mahshie, James

    2014-01-01

    The objectives of this research were to determine whether an adapted version of a Hybrid Visual Habituation procedure could be used to assess perception of phonetic and prosodic features of speech (vowel height, lexical stress, and intonation) in individual pre-school-age children who use cochlear implants. Nine children ranging in age from 3;4 to 5;5 participated in this study. The children were prelingually deaf, used cochlear implants, and had no other known disabilities. Children received two speech feature tests using an adaptation of a Hybrid Visual Habituation procedure. Based on results from a Bayesian linear regression analysis, seven of the nine children demonstrated perception of at least one speech feature with this procedure, and at least one child demonstrated perception of each speech feature. An adapted version of the Hybrid Visual Habituation procedure with an appropriate statistical analysis provides a way to assess phonetic and prosodic aspects of speech in pre-school-age children who use cochlear implants.

  7. The Posture of Putting One's Palms Together Modulates Visual Motion Event Perception.

    PubMed

    Saito, Godai; Gyoba, Jiro

    2018-02-01

    We investigated the effect of an observer's hand postures on visual motion perception using the stream/bounce display. When two identical visual objects move across collinear horizontal trajectories toward each other in a two-dimensional display, observers perceive them as either streaming or bouncing. In our previous study, we found that when observers put their palms together just below the coincidence point of the two objects, the percentage of bouncing responses increased, mainly depending on the proprioceptive information from their own hands. However, it remains unclear if the tactile or haptic (force) information produced by the postures mostly influences the stream/bounce perception. We solved this problem by changing the tactile and haptic information on the palms of the hands. Experiment 1 showed that the promotion of bouncing perception was observed only when the posture of directly putting one's palms together was used, while there was no effect when a brick was sandwiched between the participant's palms. Experiment 2 demonstrated that the strength of force used when putting the palms together had no effect on increasing bounce perception. Our findings indicate that the hands-induced bounce effect derives from the tactile information produced by the direct contact between both palms.

  8. Influence of age, sex, and education on the Visual Object and Space Perception Battery (VOSP) in a healthy normal elderly population.

    PubMed

    Herrera-Guzmán, I; Peña-Casanova, J; Lara, J P; Gudayol-Ferré, E; Böhm, P

    2004-08-01

    The assessment of visual perception and cognition forms an important part of any general cognitive evaluation. We have studied the possible influence of age, sex, and education on the performance of a normal elderly Spanish population (90 healthy subjects) in visual perception tasks. To evaluate visual perception and cognition, we used the subjects' performance on the Visual Object and Space Perception Battery (VOSP). The test consists of 8 subtests: 4 measure visual object perception (Incomplete Letters, Silhouettes, Object Decision, and Progressive Silhouettes), while the other 4 measure visual space perception (Dot Counting, Position Discrimination, Number Location, and Cube Analysis). The statistical procedures employed were either simple or multiple linear regression analyses (subtests with normal distribution) and Mann-Whitney tests, followed by ANOVA with Scheffe correction (subtests without normal distribution). Age and sex were found to be significant modifying factors in the Silhouettes, Object Decision, Progressive Silhouettes, Position Discrimination, and Cube Analysis subtests. Educational level was found to be a significant predictor of function for the Silhouettes and Object Decision subtests. The results of the sample were adjusted in line with the differences observed. Our study also offers preliminary normative data for the administration of the VOSP to an elderly Spanish population. The results are discussed and compared with similar studies performed in different cultural backgrounds.

  9. The Effect of a Computerized Visual Perception and Visual-Motor Integration Training Program on Improving Chinese Handwriting of Children with Handwriting Difficulties

    ERIC Educational Resources Information Center

    Poon, K. W.; Li-Tsang, C. W. P.; Weiss, T. P. L.; Rosenblum, S.

    2010-01-01

    This study aimed to investigate the effect of a computerized visual perception and visual-motor integration training program to enhance Chinese handwriting performance among children with learning difficulties, particularly those with handwriting problems. Participants were 26 primary-one children who were assessed by educational psychologists and…

  10. Short-Term Memory Affects Color Perception in Context

    PubMed Central

    Olkkonen, Maria; Allred, Sarah R.

    2014-01-01

    Color-based object selection — for instance, looking for ripe tomatoes in the market — places demands on both perceptual and memory processes: it is necessary to form a stable perceptual estimate of surface color from a variable visual signal, as well as to retain multiple perceptual estimates in memory while comparing objects. Nevertheless, perceptual and memory processes in the color domain are generally studied in separate research programs with the assumption that they are independent. Here, we demonstrate a strong failure of independence between color perception and memory: the effect of context on color appearance is substantially weakened by a short retention interval between a reference and test stimulus. This somewhat counterintuitive result is consistent with Bayesian estimation: as the precision of the representation of the reference surface and its context decays in memory, prior information gains more weight, causing the retained percepts to be drawn toward prior information about surface and context color. This interaction implies that to fully understand information processing in real-world color tasks, perception and memory need to be considered jointly. PMID:24475131
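    For a Gaussian likelihood and prior, the Bayesian account sketched in this abstract reduces to a precision-weighted average: as the precision of the remembered reference decays, the prior's relative weight grows and the retained percept is drawn toward it. A minimal numeric illustration (the hue values and precisions below are invented, not from the study):

```python
def bayes_estimate(obs_mean, obs_precision, prior_mean, prior_precision):
    # Posterior mean of a Gaussian estimate: a precision-weighted average
    # of the (remembered) observation and the prior.
    w = obs_precision / (obs_precision + prior_precision)
    return w * obs_mean + (1.0 - w) * prior_mean

# Reference hue at 10 on an arbitrary axis, prior (average surface color) at 0.
# As memory precision decays from 4 to 1, the estimate is drawn toward the prior.
immediate = bayes_estimate(10.0, obs_precision=4.0, prior_mean=0.0, prior_precision=1.0)
delayed = bayes_estimate(10.0, obs_precision=1.0, prior_mean=0.0, prior_precision=1.0)
print(immediate, delayed)  # 8.0 5.0
```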

  11. Short-term memory affects color perception in context.

    PubMed

    Olkkonen, Maria; Allred, Sarah R

    2014-01-01

    Color-based object selection - for instance, looking for ripe tomatoes in the market - places demands on both perceptual and memory processes: it is necessary to form a stable perceptual estimate of surface color from a variable visual signal, as well as to retain multiple perceptual estimates in memory while comparing objects. Nevertheless, perceptual and memory processes in the color domain are generally studied in separate research programs with the assumption that they are independent. Here, we demonstrate a strong failure of independence between color perception and memory: the effect of context on color appearance is substantially weakened by a short retention interval between a reference and test stimulus. This somewhat counterintuitive result is consistent with Bayesian estimation: as the precision of the representation of the reference surface and its context decays in memory, prior information gains more weight, causing the retained percepts to be drawn toward prior information about surface and context color. This interaction implies that to fully understand information processing in real-world color tasks, perception and memory need to be considered jointly.

  12. Development of Visual Motion Perception for Prospective Control: Brain and Behavioral Studies in Infants

    PubMed Central

    Agyei, Seth B.; van der Weel, F. R. (Ruud); van der Meer, Audrey L. H.

    2016-01-01

    During infancy, smart perceptual mechanisms develop allowing infants to judge time-space motion dynamics more efficiently with age and locomotor experience. This emerging capacity may be vital to enable preparedness for upcoming events and to be able to navigate in a changing environment. Little is known about brain changes that support the development of prospective control and about processes, such as preterm birth, that may compromise it. As a function of perception of visual motion, this paper will describe behavioral and brain studies with young infants investigating the development of visual perception for prospective control. By means of the three visual motion paradigms of occlusion, looming, and optic flow, our research shows the importance of including behavioral data when studying the neural correlates of prospective control. PMID:26903908

  13. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    PubMed

    Burnham, Denis; Dodd, Barbara

    2004-12-01

    The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [(delta)a] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [(delta)a], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [(delta)a] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.

  14. Cue competition affects temporal dynamics of edge-assignment in human visual cortex.

    PubMed

    Brooks, Joseph L; Palmer, Stephen E

    2011-03-01

    Edge-assignment determines the perception of relative depth across an edge and the shape of the closer side. Many cues determine edge-assignment, but relatively little is known about the neural mechanisms involved in combining these cues. Here, we manipulated extremal edge and attention cues to bias edge-assignment such that these two cues either cooperated or competed. To index their neural representations, we flickered figure and ground regions at different frequencies and measured the corresponding steady-state visual-evoked potentials (SSVEPs). Figural regions had stronger SSVEP responses than ground regions, independent of whether they were attended or unattended. In addition, competition and cooperation between the two edge-assignment cues significantly affected the temporal dynamics of edge-assignment processes. The figural SSVEP response peaked earlier when the cues causing it cooperated than when they competed, but sustained edge-assignment effects were equivalent for cooperating and competing cues, consistent with a winner-take-all outcome. These results provide physiological evidence that figure-ground organization involves competitive processes that can affect the latency of figural assignment.
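    Frequency tagging of this kind is typically analyzed by reading the response amplitude at each tagged flicker frequency off the Fourier spectrum of the recording. The sketch below uses a synthetic signal and a simple FFT readout; it illustrates the general analysis, not the authors' exact pipeline.

```python
import numpy as np

def ssvep_amplitude(signal, fs, tag_freq):
    # Single-sided amplitude spectrum; return the amplitude at the
    # frequency bin closest to the tagging frequency.
    n = len(signal)
    amps = np.abs(np.fft.rfft(signal)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return amps[np.argmin(np.abs(freqs - tag_freq))]

# Synthetic recording: a "figure" region flickered at 7.5 Hz (strong response)
# and a "ground" region at 12 Hz (weaker response), 2 s sampled at 600 Hz.
fs = 600
t = np.arange(2 * fs) / fs
eeg = 2.0 * np.sin(2 * np.pi * 7.5 * t) + 0.5 * np.sin(2 * np.pi * 12.0 * t)
print(ssvep_amplitude(eeg, fs, 7.5), ssvep_amplitude(eeg, fs, 12.0))  # ≈ 2.0, ≈ 0.5
```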

  15. Spatial frequency supports the emergence of categorical representations in visual cortex during natural scene perception.

    PubMed

    Dima, Diana C; Perry, Gavin; Singh, Krish D

    2018-06-11

    In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories and filtered at different spatial frequencies to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low- and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  16. The economics of motion perception and invariants of visual sensitivity.

    PubMed

    Gepshtein, Sergei; Tyukin, Ivan; Kubovy, Michael

    2007-06-21

    Neural systems face the challenge of optimizing their performance with limited resources, just as economic systems do. Here, we use tools of neoclassical economic theory to explore how a frugal visual system should use a limited number of neurons to optimize perception of motion. The theory prescribes that vision should allocate its resources to different conditions of stimulation according to the degree of balance between measurement uncertainties and stimulus uncertainties. We find that human vision approximately follows the optimal prescription. The equilibrium theory explains why human visual sensitivity is distributed the way it is and why qualitatively different regimes of apparent motion are observed at different speeds. The theory offers a new normative framework for understanding the mechanisms of visual sensitivity at the threshold of visibility and above the threshold and predicts large-scale changes in visual sensitivity in response to changes in the statistics of stimulation and system goals.

  17. Perception of Audio-Visual Speech Synchrony in Spanish-Speaking Children with and without Specific Language Impairment

    ERIC Educational Resources Information Center

    Pons, Ferran; Andreu, Llorenc; Sanz-Torrent, Monica; Buil-Legaz, Lucia; Lewkowicz, David J.

    2013-01-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the…

  18. Video quality assessment using a statistical model of human visual speed perception.

    PubMed

    Wang, Zhou; Li, Qiang

    2007-12-01

    Motion is one of the most important types of information contained in natural video, but direct use of motion information in the design of video quality assessment algorithms has not been deeply investigated. Here we propose to incorporate a recent model of human visual speed perception [Nat. Neurosci. 9, 578 (2006)] and model visual perception in an information communication framework. This allows us to estimate both the motion information content and the perceptual uncertainty in video signals. Improved video quality assessment algorithms are obtained by incorporating the model as spatiotemporal weighting factors, where the weight increases with the information content and decreases with the perceptual uncertainty. Consistent improvement over existing video quality assessment algorithms is observed in our validation with the Video Quality Experts Group Phase I test data set.
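    The weighting rule described, weight increasing with information content and decreasing with perceptual uncertainty, can be illustrated with a toy pooling step. The ratio form and all values below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def spatiotemporal_weights(info_content, uncertainty, eps=1e-6):
    # Weight grows with information content and shrinks with uncertainty;
    # normalize so that weighted pooling yields an average.
    w = info_content / (uncertainty + eps)
    return w / w.sum()

# Local distortion scores pooled with the weights for a 2x2 block:
info = np.array([[2.0, 1.0], [1.0, 0.5]])
unc = np.array([[1.0, 1.0], [2.0, 2.0]])
distortion = np.array([[0.1, 0.2], [0.3, 0.4]])
w = spatiotemporal_weights(info, unc)
score = float((w * distortion).sum())  # high-information, low-uncertainty regions dominate
```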

  19. Visual perception system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor); Wells, James W. (Inventor); Mc Kay, Neil David (Inventor)

    2012-01-01

    A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.
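    The exposure-adaptation step can be sketched as a simple feedback rule: shorten the exposure when too many pixels saturate, lengthen it when the image is too dark. All thresholds, gains, and names below are hypothetical, not taken from the patent.

```python
def adapt_exposure(exposure_ms, clipped_fraction, target=0.01,
                   gain=0.5, min_ms=0.1, max_ms=100.0):
    # Shorten exposure when too many pixels saturate (feature loss in
    # highlights); lengthen it when almost none do (feature loss in shadows).
    if clipped_fraction > target:
        exposure_ms *= 1.0 - gain
    elif clipped_fraction < target / 10.0:
        exposure_ms *= 1.0 + gain
    return min(max(exposure_ms, min_ms), max_ms)

e = adapt_exposure(10.0, clipped_fraction=0.2)  # 5.0: heavy clipping, back off
e = adapt_exposure(e, clipped_fraction=0.0)     # 7.5: very dark, brighten
```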

  20. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO.

    PubMed

    Hernandez-Vicen, Juan; Martinez, Santiago; Garcia-Haro, Juan Miguel; Balaguer, Carlos

    2018-03-25

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion in images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tested experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid.

  1. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO

    PubMed Central

    2018-01-01

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion in images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tested experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid. PMID:29587392

  2. Perceptions of Chemistry: Why Is the Common Perception of Chemistry, the Most Visual of Sciences, So Distorted?

    ERIC Educational Resources Information Center

    Habraken, Clarisse L.

    1996-01-01

    Highlights the need to reinvigorate chemistry education by means of the visual-spatial approach, an approach wholly in conformance with the way modern chemistry is thought about and practiced. Discusses the changing world, multiple intelligences, imagery, chemistry's pictorial language, and perceptions in chemistry. Presents suggestions on how to…

  3. Volumic visual perception: principally novel concept

    NASA Astrophysics Data System (ADS)

    Petrov, Valery

    1996-01-01

    The general concept of volumic view (VV) as a universal property of space is introduced. VV exists at every point of the universe that electromagnetic (EM) waves can reach and where a point or quasi-point receiver (detector) of EM waves can be placed. A classification of receivers is given for the first time: they fall into three main categories: biological, man-made non-biological, and mathematically specified hypothetical receivers. The principally novel concept of volumic perception is introduced. It differs chiefly from the traditional concept, which traces back to Euclid and pre-Euclidean times and, much later, to Leonardo da Vinci's and Giovanni Battista della Porta's discoveries and to practical stereoscopy as introduced by C. Wheatstone. The basic idea of the novel concept is that humans and animals acquire volumic visual data flows in series rather than in parallel. In this case the brain is freed from extremely sophisticated real-time parallel processing of two volumic visual data flows in order to combine them. Such a procedure seems hardly probable even for humans, who are unable to combine two primitive static stereoscopic images into one in less than a few seconds. Some people are unable to perform this procedure at all.

  4. Developing effective serious games: the effect of background sound on visual fidelity perception with varying texture resolution.

    PubMed

    Rojas, David; Kapralos, Bill; Cristancho, Sayra; Collins, Karen; Hogue, Andrew; Conati, Cristina; Dubrowski, Adam

    2012-01-01

    Despite the benefits associated with virtual learning environments and serious games, there are open, fundamental issues regarding simulation fidelity and multi-modal cue interaction and their effect on immersion, transfer of knowledge, and retention. Here we describe the results of a study that examined the effect of ambient (background) sound on the perception of visual fidelity (defined with respect to texture resolution). Results suggest that the perception of visual fidelity is dependent on ambient sound and more specifically, white noise can have detrimental effects on our perception of high quality visuals. The results of this study will guide future studies that will ultimately aid in developing an understanding of the role that fidelity, and multi-modal interactions play with respect to knowledge transfer and retention for users of virtual simulations and serious games.

  5. Visual function affects prosocial behaviors in older adults.

    PubMed

    Teoli, Dac A; Smith, Merideth D; Leys, Monique J; Jain, Priyanka; Odom, J Vernon

    2016-02-01

    Eye-related pathological conditions such as glaucoma, diabetic retinopathy, and age-related macular degeneration commonly lead to decreased peripheral/central field, decreased visual acuity, and increased functional disability. We sought to determine whether relationships exist between measures of visual function and reported prosocial behaviors in an older adult population with eye-related diagnoses. The sample consisted of adults, aged ≥ 60 years old, at an academic hospital's eye institute. Vision ranged from normal to severe impairment. Medical charts determined the visual acuities, ocular disease, duration of disease (DD), and visual fields (VF). Measures of giving help were via validated questionnaires on giving formal support (GFS) and giving informal support; measures of help received were perceived support (PS) and informal support received (ISR). ISR had subscales: tangible support (ISR-T), emotional support (ISR-E), and composite (ISR-C). Visual acuities of the better and worse seeing eyes were converted to LogMAR values. VF information was converted to a 4-point rating scale of binocular field loss severity. DD was in years. Among 96 participants (mean age 73.28; range 60-94), stepwise regression indicated a relationship of visual variables to GFS (p < 0.05; multiple R² = 0.1679 with acuity-better eye, VF rating, and DD), PS (p < 0.05; multiple R² = 0.2254 with acuity-better eye), ISR-C (p < 0.05; multiple R² = 0.041 with acuity-better eye), and ISR-T (p < 0.05; multiple R² = 0.1421 with acuity-better eye). The findings suggest eye-related conditions can impact levels and perceptions of support exchanges. Our data reinforce the importance of visual function as an influence on prosocial behavior in older adults.

  6. The 50s cliff: a decline in perceptuo-motor learning, not a deficit in visual motion perception.

    PubMed

    Ren, Jie; Huang, Shaochen; Zhang, Jiancheng; Zhu, Qin; Wilson, Andrew D; Snapp-Childs, Winona; Bingham, Geoffrey P

    2015-01-01

    Previously, we measured perceptuo-motor learning rates across the lifespan and found a sudden drop in learning rates between ages 50 and 60, called the "50s cliff." The task was a unimanual visual rhythmic coordination task in which participants used a joystick to oscillate one dot in a display in coordination with another dot oscillated by a computer. Participants learned to produce a coordination with a 90° relative phase relation between the dots. Learning rates for participants over 60 were half those of younger participants. Given existing evidence for visual motion perception deficits in people over 60 and the role of visual motion perception in the coordination task, it remained unclear whether the 50s cliff reflected onset of this deficit or a genuine decline in perceptuo-motor learning. The current work addressed this question. Two groups of 12 participants in each of four age ranges (20s, 50s, 60s, 70s) learned to perform a bimanual coordination of 90° relative phase. One group trained with only haptic information and the other group with both haptic and visual information about relative phase. Both groups were tested in both information conditions at baseline and post-test. If the 50s cliff was caused by an age dependent deficit in visual motion perception, then older participants in the visual group should have exhibited less learning than those in the haptic group, which should not exhibit the 50s cliff, and older participants in both groups should have performed less well when tested with visual information. Neither of these expectations was confirmed by the results, so we concluded that the 50s cliff reflects a genuine decline in perceptuo-motor learning with aging, not the onset of a deficit in visual motion perception.

  7. Visual tuning and metrical perception of realistic point-light dance movements.

    PubMed

    Su, Yi-Huang

    2016-03-07

    Humans move to music spontaneously, and this sensorimotor coupling underlies musical rhythm perception. The present research proposed that, based on common action representation, different metrical levels as in auditory rhythms could emerge visually when observing structured dance movements. Participants watched a point-light figure performing basic steps of Swing dance cyclically in different tempi, whereby the trunk bounced vertically at every beat and the limbs moved laterally at every second beat, yielding two possible metrical periodicities. In Experiment 1, participants freely identified a tempo of the movement and tapped along. While some observers only tuned to the bounce and some only to the limbs, the majority tuned to one level or the other depending on the movement tempo, which was also associated with individuals' preferred tempo. In Experiment 2, participants reproduced the tempo of leg movements by four regular taps, and showed a slower perceived leg tempo with than without the trunk bouncing simultaneously in the stimuli. This mirrors previous findings of an auditory 'subdivision effect', suggesting the leg movements were perceived as beat while the bounce as subdivisions. Together these results support visual metrical perception of dance movements, which may employ similar action-based mechanisms to those underpinning auditory rhythm perception.

  8. Visual tuning and metrical perception of realistic point-light dance movements

    PubMed Central

    Su, Yi-Huang

    2016-01-01

    Humans move to music spontaneously, and this sensorimotor coupling underlies musical rhythm perception. The present research proposed that, based on common action representation, different metrical levels as in auditory rhythms could emerge visually when observing structured dance movements. Participants watched a point-light figure performing basic steps of Swing dance cyclically in different tempi, whereby the trunk bounced vertically at every beat and the limbs moved laterally at every second beat, yielding two possible metrical periodicities. In Experiment 1, participants freely identified a tempo of the movement and tapped along. While some observers only tuned to the bounce and some only to the limbs, the majority tuned to one level or the other depending on the movement tempo, which was also associated with individuals’ preferred tempo. In Experiment 2, participants reproduced the tempo of leg movements by four regular taps, and showed a slower perceived leg tempo with than without the trunk bouncing simultaneously in the stimuli. This mirrors previous findings of an auditory ‘subdivision effect’, suggesting the leg movements were perceived as beat while the bounce as subdivisions. Together these results support visual metrical perception of dance movements, which may employ similar action-based mechanisms to those underpinning auditory rhythm perception. PMID:26947252

  9. Visual processing affects the neural basis of auditory discrimination.

    PubMed

    Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko

    2008-12-01

    The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.

  10. Surround-Masking Affects Visual Estimation Ability

    PubMed Central

    Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.

    2017-01-01

    Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved with the removal of unessential visual input functionally contributes toward number acuity. The visual estimation judgments of typically developed adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround masking results may help understanding of the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845

  11. Impact of language on development of auditory-visual speech perception.

    PubMed

    Sekiyama, Kaoru; Burnham, Denis

    2008-03-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information whereas no such inter-modal differences were found in the Japanese participants' reaction times.

  12. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    NASA Astrophysics Data System (ADS)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  13. Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.

    PubMed

    Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A

    2004-11-09

    Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. We tested the hypotheses that FXS individuals (1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and (2) have deficits in integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared to developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near normal perception for first-order form stimuli but not second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic function of the parietal lobe is affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli, with normal processing for simple (i.e., first-order) form stimuli, suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in the neurologic mechanisms necessary for integrating all early visual input.

  14. Auditory-Visual Speech Perception in Three- and Four-Year-Olds and Its Relationship to Perceptual Attunement and Receptive Vocabulary

    ERIC Educational Resources Information Center

    Erdener, Dogu; Burnham, Denis

    2018-01-01

    Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception--lip-reading and visual…

  15. Cognitive issues in searching images with visual queries

    NASA Astrophysics Data System (ADS)

    Yu, ByungGu; Evens, Martha W.

    1999-01-01

    In this paper, we propose our image indexing technique and visual query processing technique. Our mental images are different from the actual retinal images and many things, such as personal interests, personal experiences, perceptual context, the characteristics of spatial objects, and so on, affect our spatial perception. These private differences are propagated into our mental images and so our visual queries become different from the real images that we want to find. This is a hard problem and few people have tried to work on it. In this paper, we survey the human mental imagery system, the human spatial perception, and discuss several kinds of visual queries. Also, we propose our own approach to visual query interpretation and processing.

  16. Processing reafferent and exafferent visual information for action and perception.

    PubMed

    Reichenbach, Alexandra; Diedrichsen, Jörn

    2015-01-01

    A recent study suggests that reafferent hand-related visual information utilizes a privileged, attention-independent processing channel for motor control. This process was termed visuomotor binding to reflect its proposed function: linking visual reafferences to the corresponding motor control centers. Here, we ask whether the advantage of processing reafferent over exafferent visual information is a specific feature of the motor processing stream or whether the improved processing also benefits the perceptual processing stream. Human participants performed a bimanual reaching task in a cluttered visual display, and one of the visual hand cursors could be displaced laterally during the movement. We measured the rapid feedback responses of the motor system as well as matched perceptual judgments of which cursor was displaced. Perceptual judgments were either made by watching the visual scene without moving or made simultaneously to the reaching tasks, such that the perceptual processing stream could also profit from the specialized processing of reafferent information in the latter case. Our results demonstrate that perceptual judgments in the heavily cluttered visual environment were improved when performed based on reafferent information. Even in this case, however, the filtering capability of the perceptual processing stream suffered more from the increasing complexity of the visual scene than the motor processing stream. These findings suggest partly shared and partly segregated processing of reafferent information for vision for motor control versus vision for perception.

  17. Differentiation of Competence and Affect Self-Perceptions in Elementary School Students: Extending Empirical Evidence

    ERIC Educational Resources Information Center

    Arens, A. Katrin; Hasselhorn, Marcus

    2015-01-01

    This study aimed to address two underexplored research questions regarding support for the separation between competence and affect self-perceptions due to differential relations to outcome criteria. First, it is tested whether higher relations between affect self-perceptions and effort than between competence self-perceptions and effort can also…

  18. Predictions penetrate perception: Converging insights from brain, behaviour and disorder

    PubMed Central

    O’Callaghan, Claire; Kveraga, Kestutis; Shine, James M; Adams, Reginald B.; Bar, Moshe

    2018-01-01

    It is argued that during ongoing visual perception, the brain is generating top-down predictions to facilitate, guide and constrain the processing of incoming sensory input. Here we demonstrate that these predictions are drawn from a diverse range of cognitive processes, in order to generate the richest and most informative prediction signals. This is consistent with a central role for cognitive penetrability in visual perception. We review behavioural and mechanistic evidence that indicate a wide spectrum of domains—including object recognition, contextual associations, cognitive biases and affective state—that can directly influence visual perception. We combine these insights from the healthy brain with novel observations from neuropsychiatric disorders involving visual hallucinations, which highlight the consequences of imbalance between top-down signals and incoming sensory information. Together, these lines of evidence converge to indicate that predictive penetration, be it cognitive, social or emotional, should be considered a fundamental framework that supports visual perception. PMID:27222169

  19. Poor shape perception is the reason reaches-to-grasp are visually guided online.

    PubMed

    Lee, Young-Lim; Crabtree, Charles E; Norman, J Farley; Bingham, Geoffrey P

    2008-08-01

    Both judgment studies and studies of feedforward reaching have shown that the visual perception of object distance, size, and shape are inaccurate. However, feedback has been shown to calibrate feedfoward reaches-to-grasp to make them accurate with respect to object distance and size. We now investigate whether shape perception (in particular, the aspect ratio of object depth to width) can be calibrated in the context of reaches-to-grasp. We used cylindrical objects with elliptical cross-sections of varying eccentricity. Our participants reached to grasp the width or the depth of these objects with the index finger and thumb. The maximum grasp aperture and the terminal grasp aperture were used to evaluate perception. Both occur before the hand has contacted an object. In Experiments 1 and 2, we investigated whether perceived shape is recalibrated by distorted haptic feedback. Although somewhat equivocal, the results suggest that it is not. In Experiment 3, we tested the accuracy of feedforward grasping with respect to shape with haptic feedback to allow calibration. Grasping was inaccurate in ways comparable to findings in shape perception judgment studies. In Experiment 4, we hypothesized that online guidance is needed for accurate grasping. Participants reached to grasp either with or without vision of the hand. The result was that the former was accurate, whereas the latter was not. We conclude that shape perception is not calibrated by feedback from reaches-to-grasp and that online visual guidance is required for accurate grasping because shape perception is poor.

  20. From optics to attention: visual perception in barn owls.

    PubMed

    Harmening, Wolf M; Wagner, Hermann

    2011-11-01

    Barn owls are nocturnal predators which have evolved specific sensory and morphological adaptations to a life in dim light. Here, some of the most fundamental properties of spatial vision in barn owls are reviewed. The eye with its tubular shape is rigidly integrated in the skull so that eye movements are very much restricted. The eyes are oriented frontally, allowing for a large binocular overlap. Accommodation, but not pupil dilation, is coupled between the two eyes. The retina is rod dominated and lacks a visible fovea. Retinal ganglion cells form a marked region of highest density that extends to a horizontally oriented visual streak. Behavioural visual acuity and contrast sensitivity are poor, although the optical quality of the ocular media is excellent. A low f-number allows high image quality at low light levels. Vernier acuity was found to be a hyperacute percept. Owls have global stereopsis with hyperacute stereo acuity thresholds. Neurons of the visual Wulst are sensitive to binocular disparities. Orientation-based saliency was demonstrated in a visual-search experiment, and higher cognitive abilities were shown when the owls were able to use illusory contours for object discrimination.

  1. Dance and Music in “Gangnam Style”: How Dance Observation Affects Meter Perception

    PubMed Central

    Lee, Kyung Myun; Barrett, Karen Chan; Kim, Yeonhwa; Lim, Yeoeun; Lee, Kyogu

    2015-01-01

    Dance and music often co-occur, as evidenced when viewing choreographed dances or singers moving while performing. This study investigated how the viewing of dance motions shapes sound perception. Previous research has shown that dance reflects the temporal structure of its accompanying music, communicating musical meter (i.e. a hierarchical organization of beats) via coordinated movement patterns that indicate where strong and weak beats occur. Experiments here investigated the effects of dance cues on meter perception, hypothesizing that dance could embody the musical meter, thereby shaping participant reaction times (RTs) to sound targets occurring at different metrical positions. In Experiment 1, participants viewed a video with dance choreography indicating 4/4 meter (dance condition) or a series of color changes repeated in sequences of four to indicate 4/4 meter (picture condition). A sound track accompanied these videos and participants reacted to timbre targets at different metrical positions. Participants had the slowest RTs at the strongest beats in the dance condition only. In Experiment 2, participants viewed the choreography of the horse-riding dance from Psy's "Gangnam Style" in order to examine how a familiar dance might affect meter perception. Moreover, participants in this experiment were divided into a group with experience dancing this choreography and a group without experience. Results again showed slower RTs to stronger metrical positions, and the group with experience demonstrated a more refined perception of metrical hierarchy. Results likely stem from the temporally selective division of attention between auditory and visual domains. This study has implications for understanding: 1) the impact of splitting attention among different sensory modalities, and 2) the impact of embodiment, on perception of musical meter. Viewing dance may interfere with sound processing, particularly at critical metrical positions, but embodied

  2. 'I didn't see that coming': simulated visual fields and driving hazard perception test performance.

    PubMed

    Glen, Fiona C; Smith, Nicholas D; Jones, Lee; Crabb, David P

    2016-09-01

    Evidence is limited regarding specific types of visual field loss associated with unsafe driving. We use novel gaze-contingent software to examine the effect of simulated visual field loss on computer-based driving hazard detection with the specific aim of testing the impact of scotomata located to the right and left of fixation. The 'hazard perception test' is a component of the UK driving licence examination, which measures speed of detecting 15 different hazards in a series of real-life driving films. We have developed a novel eye-tracking and computer set up capable of generating a realistic gaze-contingent scotoma simulation (GazeSS) overlaid on film content. Thirty drivers with healthy vision completed three versions of the hazard perception test in a repeated measures experiment. In two versions, GazeSS simulated a scotoma in the binocular field of view to the left or right of fixation. A third version was unmodified to establish baseline performance. Participants' mean baseline hazard perception test score was 51 ± 7 (out of 75). This reduced to 46 ± 9 and 46 ± 11 when completing the task with a binocular visual field defect located to the left and right of fixation, respectively. While the main effect of simulated visual field loss on performance was statistically significant (p = 0.007), there were no average differences in the experimental conditions where a scotoma was located in the binocular visual field to the right or left of fixation. Simulated visual field loss impairs driving hazard detection on a computer-based test. There was no statistically significant difference in average performance when the simulated scotoma was located to the right or left of fixation of the binocular visual field, but certain types of hazard caused more difficulties than others. © 2016 Optometry Australia.
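
    The core geometry of a gaze-contingent scotoma simulation like the one described is simple: a mask region is anchored at a fixed offset from the current gaze position, and stimulus content falling inside it is hidden. The sketch below is not the authors' GazeSS software; it is an illustrative occlusion test with hypothetical names and units (degrees of visual angle).

    ```python
    def is_occluded(target_xy, gaze_xy, offset_deg, radius_deg):
        """Hypothetical circular scotoma centred `offset_deg` to the right of
        fixation (negative = left of fixation): True if the target falls inside."""
        sx = gaze_xy[0] + offset_deg   # scotoma centre tracks the gaze point
        sy = gaze_xy[1]
        dx = target_xy[0] - sx
        dy = target_xy[1] - sy
        return dx * dx + dy * dy <= radius_deg * radius_deg

    # Hazard 8 deg right of fixation, scotoma centred 8 deg right, 5 deg radius:
    print(is_occluded((8.0, 0.0), (0.0, 0.0), 8.0, 5.0))    # True: hazard hidden
    print(is_occluded((-8.0, 0.0), (0.0, 0.0), 8.0, 5.0))   # False: hazard visible
    ```

    In a real implementation this test runs every video frame against fresh eye-tracker samples, and the mask is drawn over the film content rather than merely evaluated, so rendering latency relative to gaze updates becomes the critical engineering constraint.
    
    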

  3. Emotional perception in eating disorders.

    PubMed

    Joos, Andreas A B; Cabrillac, Emmanuelle; Hartmann, Armin; Wirsching, Michael; Zeeck, Almut

    2009-05-01

    It remains an open question whether there are basic emotional perception and emotional processing deficits in eating disorders (ED). The aim of this study was to explore deficits in emotional perception in restrictive anorexia nervosa (AN-R) and bulimia nervosa (BN), using visual emotional stimuli. Thirty-four patients with ED (19 with BN and 15 with AN-R) were compared with 25 controls. Visual stimuli from the international affective picture system were used. Patients with AN-R showed increased fear when confronted with stimuli containing anger, whereas patients with BN showed a tendency towards decreased fear. There were no other fundamental differences in the emotional perception of fear, happiness, sadness, and anger. The finding of increased fear when exposed to the emotion of anger might be attributed to introversion and conflict avoidance of anorectic patients. No other basic deficiency of emotional perception was found.

  4. University Teachers' Perception of Inclusion of Visually Impaired in Ghanaian Universities

    ERIC Educational Resources Information Center

    Mamah, Vincent; Deku, Prosper; Darling, Sharon M.; Avoke, Selete K.

    2011-01-01

    This study was undertaken to examine the university teachers' perception of including students with Visual Impairment (VI) in the public universities of Ghana. The sample consisted of 110 teachers from the University of Cape Coast (UCC), the University of Education, Winneba, (UEW), and the University of Ghana (UG). Data were collected through…

  5. Lip movements affect infants' audiovisual speech perception.

    PubMed

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  6. [Visual perception of Kanji characters and complicated figures. Part 3. Visual P300 event-related potentials in patients with attention deficit/hyperactivity disorders].

    PubMed

    Shirane, Seiko; Inagaki, Masumi; Sata, Yoshimi; Kaga, Makiko

    2004-07-01

    In order to evaluate visual perception, P300 event-related potentials (ERPs) for visual oddball tasks were recorded in 11 patients with attention deficit/hyperactivity disorder (AD/HD), 12 with mental retardation (MR) and 14 age-matched healthy controls. With the aim of revealing trial-to-trial variability, which is neglected when only averaged ERPs are investigated, single-sweep P300s (ss-P300s) were assessed in addition to the averaged P300. There were no significant differences in averaged P300 latency or amplitude between controls and AD/HD patients. AD/HD patients showed increased variability in the amplitude of ss-P300s, while MR patients showed increased variability in latency. These findings suggest that in AD/HD patients general attention is impaired to a larger extent than selective attention and visual perception.

  7. Influential sources affecting Bangkok adolescent body image perceptions.

    PubMed

    Thianthai, Chulanee

    2006-01-01

    The study of body image-related problems in non-Western countries is still very limited. Thus, this study aims to identify the main influential sources and show how they affect the body image perceptions of Bangkok adolescents. The researcher recruited 400 Thai male and female adolescents in Bangkok, from high-school to university-freshman level and aged 16-19 years, to participate in this study. Survey questionnaires were distributed to every student, and follow-up interviews were conducted with 40 students. The findings showed eight main influential sources, ranked from most to least influential: magazines, television, peer group, family, fashion trends, the opposite gender, self-realization and health knowledge. As in studies conducted in Western countries, more than half of the total influence came from mass media and peer groups. Bangkok adolescents also internalized Western ideals of beauty through these mass media channels. As in Western studies, the process by which these sources affect adolescent body image perception was similar, with the exception of the familial source. In conclusion, identifying the main influential sources and understanding how they affect adolescent body image perceptions can help prevent adolescents from holding unhealthy views of their bodies and taking risky measures toward them. More studies in non-Western countries are needed to build culturally sensitive programs catering to the body image problems of adolescents in each particular society.

  8. Visual Perception Associated With Diabetes Mellitus

    NASA Astrophysics Data System (ADS)

    Suaste, Ernesto

    2004-09-01

    We designed and implemented an instrumental methodology for analysing the pupillary response to chromatic stimuli, in order to observe changes in pupillary area during contraction and dilation in diabetic patients. Visual stimuli in the visible spectrum (400 nm-650 nm) were used. Three different programs were tested to determine which stimulation yields the best-contrasted pupillary response for diagnosing the visual perception of colors. The stimulators PG0, PG12 and PG20 were designed in our laboratory. The test was carried out with 44 people (33 men, 10 women and a boy; ages 22-52 and 6 years): 12 with the stimulator PG0, 21 with PG12 and 17 with PG20; 7 subjects participated in more than one test. According to the Ishihara plates, 40 of the subjects have normal color vision, one has dichromacy (an inability to differentiate red and green), and three show deficiencies in perceiving the blue and red parts of the spectrum (these three have type II diabetes mellitus). With this instrumental methodology, we aim to obtain an indicator based on pupillary variability for the early diagnosis of diabetes mellitus, as well as a monitoring instrument for it.

  9. Linguistic experience and audio-visual perception of non-native fricatives.

    PubMed

    Wang, Yue; Behne, Dawn M; Jiang, Haisheng

    2008-09-01

    This study examined the effects of linguistic experience on audio-visual (AV) perception of non-native (L2) speech. Canadian English natives and Mandarin Chinese natives differing in degree of English exposure [long and short length of residence (LOR) in Canada] were presented with English fricatives of three visually distinct places of articulation: interdentals nonexistent in Mandarin and labiodentals and alveolars common in both languages. Stimuli were presented in quiet and in a cafe-noise background in four ways: audio only (A), visual only (V), congruent AV (AVc), and incongruent AV (AVi). Identification results showed that overall performance was better in the AVc than in the A or V condition and better in quiet than in cafe noise. While the Mandarin long LOR group approximated the native English patterns, the short LOR group showed poorer interdental identification, more reliance on visual information, and greater AV-fusion with the AVi materials, indicating the failure of L2 visual speech category formation with the short LOR non-natives and the positive effects of linguistic experience with the long LOR non-natives. These results point to an integrated network in AV speech processing as a function of linguistic background and provide evidence to extend auditory-based L2 speech learning theories to the visual domain.

  10. Method and simulation to study 3D crosstalk perception

    NASA Astrophysics Data System (ADS)

    Khaustova, Dar'ya; Blondé, Laurent; Huynh-Thu, Quan; Vienne, Cyril; Doyen, Didier

    2012-03-01

    To various degrees, all modern 3DTV displays suffer from crosstalk, which can lead to a decrease of both visual quality and visual comfort, and also affect perception of depth. In the absence of a perfect 3D display technology, crosstalk has to be taken into account when studying perception of 3D stereoscopic content. In order to improve 3D presentation systems and understand how to efficiently eliminate crosstalk, it is necessary to understand its impact on human perception. In this paper, we present a practical method to study the perception of crosstalk. The approach consists of four steps: (1) physical measurements of a 3DTV, (2) building of a crosstalk surface based on those measurements and representing specifically the behavior of that 3TV, (3) manipulation of the crosstalk function and application on reference images to produce test images degraded by crosstalk in various ways, and (4) psychophysical tests. Our approach allows both a realistic representation of the behavior of a 3DTV and the easy manipulation of its resulting crosstalk in order to conduct psycho-visual experiments. Our approach can be used in all studies requiring the understanding of how crosstalk affects perception of stereoscopic content and how it can be corrected efficiently.
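
    Step (3) above, degrading reference images with a controlled amount of crosstalk, can be sketched as a mixing of the two views. A real 3DTV's crosstalk is luminance-dependent (hence the measured crosstalk surface in step 2); the scalar, symmetric leakage coefficient below is a deliberate simplification:

    ```python
    import numpy as np

    def add_crosstalk(left, right, c):
        """Mix a fraction c of the unintended view into each eye's image
        (0 = no crosstalk). The paper's method drives this mixing from a
        measured crosstalk surface rather than a single constant."""
        left = left.astype(float)
        right = right.astype(float)
        left_out = (1 - c) * left + c * right
        right_out = (1 - c) * right + c * left
        return left_out, right_out

    L = np.full((2, 2), 200.0)   # bright left-eye patch
    R = np.full((2, 2), 100.0)   # darker right-eye patch
    L_deg, R_deg = add_crosstalk(L, R, 0.1)
    ```

    Varying `c` (or, more realistically, evaluating the measured crosstalk surface per pixel) produces the graded series of test images needed for the psychophysical comparisons.
    
    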

  11. Subjective visual perception: from local processing to emergent phenomena of brain activity.

    PubMed

    Panagiotaropoulos, Theofanis I; Kapoor, Vishal; Logothetis, Nikos K

    2014-05-05

    The combination of electrophysiological recordings with ambiguous visual stimulation made possible the detection of neurons that represent the content of subjective visual perception and perceptual suppression in multiple cortical and subcortical brain regions. These neuronal populations, commonly referred to as the neural correlates of consciousness, are more likely to be found in the temporal and prefrontal cortices as well as the pulvinar, indicating that the content of perceptual awareness is represented with higher fidelity in higher-order association areas of the cortical and thalamic hierarchy, reflecting the outcome of competitive interactions between conflicting sensory information resolved in earlier stages. However, despite the significant insights into conscious perception gained through monitoring the activities of single neurons and small, local populations, the immense functional complexity of the brain arising from correlations in the activity of its constituent parts suggests that local, microscopic activity could only partially reveal the mechanisms involved in perceptual awareness. Rather, the dynamics of functional connectivity patterns on a mesoscopic and macroscopic level could be critical for conscious perception. Understanding these emergent spatio-temporal patterns could be informative not only for the stability of subjective perception but also for spontaneous perceptual transitions suggested to depend either on the dynamics of antagonistic ensembles or on global intrinsic activity fluctuations that may act upon explicit neural representations of sensory stimuli and induce perceptual reorganization. Here, we review the most recent results from local activity recordings and discuss the potential role of effective, correlated interactions during perceptual awareness.

  12. Crossmodal Statistical Binding of Temporal Information and Stimuli Properties Recalibrates Perception of Visual Apparent Motion

    PubMed Central

    Zhang, Yi; Chen, Lihan

    2016-01-01

    Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. We here examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, composed of three sessions each: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) flanked by one leading and one lagging visual Ternus frame (VAAV), or dominantly had two visual Ternus frames inserted between them (AVVA). Participants were required to report which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair with temporal configurations similar to those in Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments indicate that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910

  13. Visual Images of Subjective Perception of Time in a Literary Text

    ERIC Educational Resources Information Center

    Nesterik, Ella V.; Issina, Gaukhar I.; Pecherskikh, Taliya F.; Belikova, Oxana V.

    2016-01-01

    The article is devoted to the subjective perception of time, or psychological time, as a text category and a literary image. It focuses on the visual images that are characteristic of different types of literary time--accelerated, decelerated and frozen (vanished). The research is based on the assumption that the category of subjective perception…

  14. Illusions of having small or large invisible bodies influence visual perception of object size

    PubMed Central

    van der Hoort, Björn; Ehrsson, H. Henrik

    2016-01-01

    The size of our body influences the perceived size of the world so that objects appear larger to children than to adults. The mechanisms underlying this effect remain unclear. It has been difficult to dissociate visual rescaling of the external environment based on an individual’s visible body from visual rescaling based on a central multisensory body representation. To differentiate these potential causal mechanisms, we manipulated body representation without a visible body by taking advantage of recent developments in body representation research. Participants experienced the illusion of having a small or large invisible body while object-size perception was tested. Our findings show that the perceived size of test-objects was determined by the size of the invisible body (inverse relation), and by the strength of the invisible body illusion. These findings demonstrate how central body representation directly influences visual size perception, without the need for a visible body, by rescaling the spatial representation of the environment. PMID:27708344

  15. Perception of linear horizontal self-motion induced by peripheral vision /linearvection/ - Basic characteristics and visual-vestibular interactions

    NASA Technical Reports Server (NTRS)

    Berthoz, A.; Pavard, B.; Young, L. R.

    1975-01-01

    The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 sec and short term adaptation has been shown. The dynamic range of the visual analyzer as judged by frequency analysis is lower than the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection a dominance of vision which supports the idea of an essential although not independent role of vision in self motion perception.

  16. Negative affect, interpersonal perception, and binge eating behavior: An experience sampling study.

    PubMed

    Ambwani, Suman; Roche, Michael J; Minnick, Alyssa M; Pincus, Aaron L

    2015-09-01

    Etiological and maintenance models for disordered eating highlight the salience of negative affect and interpersonal dysfunction. This study employed a 14-day experience sampling procedure to assess the impact of negative affect and interpersonal perceptions on binge eating behavior. Young adult women (N = 40) with recurrent binge eating and significant clinical impairment recorded their mood, interpersonal behavior, and eating behaviors at six stratified semirandom intervals daily through the use of personal digital assistants. Although momentary negative affect was associated with binge eating behavior, average levels of negative affect over the experience sampling period were not, and interpersonal problems moderated the relationship between negative affect and binge eating. Interpersonal problems also intensified the association between momentary interpersonal perceptions and binge eating behavior. Lagged analyses indicated that previous levels of negative affect and interpersonal style also influence binge eating. The study findings suggest there may be important differences in how dispositional versus momentary experiences of negative affect are associated with binge eating. Results also highlight the importance of interpersonal problems for understanding relationships among negative affect, interpersonal perception, and binge eating behavior. These results offer several possibilities for attending to affective and interpersonal functioning in clinical practice. © 2015 Wiley Periodicals, Inc.

  17. Human alteration of the rural landscape: Variations in visual perception

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cloquell-Ballester, Vicente-Agustin, E-mail: cloquell@dpi.upv.es; Carmen Torres-Sibille, Ana del; Cloquell-Ballester, Victor-Andres

    2012-01-15

    The objective of this investigation is to evaluate how visual perception varies as the rural landscape is altered by human interventions of varying character. An experiment is carried out using Semantic Differential Analysis to analyse the effect of the character and the type of the intervention on perception. Interventions are divided into 'elements of permanent industrial character', 'elements of permanent rural character' and 'elements of temporary character', and these categories are sub-divided into smaller groups according to the type of development. To increase the reliability of the results, the Intraclass Correlation Coefficient tool is applied to validate the semantic space of the perceptual responses and to determine the number of subjects required for a reliable evaluation of the scenes.
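
    The Intraclass Correlation Coefficient used to validate the ratings can be computed directly from a subjects × raters score matrix. Below is a minimal sketch of the common two-way random, single-rater form, ICC(2,1), following the standard Shrout & Fleiss formulation; the abstract does not specify which ICC variant the authors used:

    ```python
    import numpy as np

    def icc_2_1(ratings):
        """ICC(2,1) for an (n_subjects x k_raters) matrix of scores,
        from a two-way ANOVA decomposition of the score variance."""
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)     # per-subject means
        col_means = ratings.mean(axis=0)     # per-rater means
        ss_rows = k * ((row_means - grand) ** 2).sum()
        ss_cols = n * ((col_means - grand) ** 2).sum()
        ss_total = ((ratings - grand) ** 2).sum()
        ss_err = ss_total - ss_rows - ss_cols
        msr = ss_rows / (n - 1)              # between-subjects mean square
        msc = ss_cols / (k - 1)              # between-raters mean square
        mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    ```

    An ICC near 1 indicates that raters order and scale the scenes consistently; the required number of subjects can then be chosen by recomputing the ICC for subsets of raters.
    
    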

  18. Dissociation between the neural correlates of conscious face perception and visual attention.

    PubMed

    Navajas, Joaquin; Nitka, Aleksander W; Quian Quiroga, Rodrigo

    2017-08-01

    Given the higher chance to recognize attended compared to unattended stimuli, the specific neural correlates of these two processes, attention and awareness, tend to be intermingled in experimental designs. In this study, we dissociated the neural correlates of conscious face perception from the effects of visual attention. To do this, we presented faces at the threshold of awareness and manipulated attention through the use of exogenous prestimulus cues. We show that the N170 component, a scalp EEG marker of face perception, was modulated independently by attention and by awareness. An earlier P1 component was not modulated by either of the two effects and a later P3 component was indicative of awareness but not of attention. These claims are supported by converging evidence from (a) modulations observed in the average evoked potentials, (b) correlations between neural and behavioral data at the single-subject level, and (c) single-trial analyses. Overall, our results show a clear dissociation between the neural substrates of attention and awareness. Based on these results, we argue that conscious face perception is triggered by a boost in face-selective cortical ensembles that can be modulated by, but are still independent from, visual attention. © 2017 Society for Psychophysiological Research.

  19. [The impact of psilocybin on visual perception and spatial orientation--neuropsychological approach].

    PubMed

    Jastrzebski, Mikolaj; Bala, Aleksandra

    2013-01-01

    Psilocybin is a substance of natural origin occurring in hallucinogenic mushrooms (most commonly of the Psilocybe genus). After its synthesis in 1958, research began on its psychoactive properties, particularly its strong effects on visual perception and spatial orientation. Because of psilocybin's very broad spectrum of effects, research also began on the different ranges of its actions, including its effect on physiological processes (such as saccadic eye movements). Neuroimaging and neurophysiological research (positron emission tomography, PET, and electroencephalography, EEG) indicates changes in the rate of brain metabolism and desynchronization of the cerebral hemispheres. Experimental studies show psilocybin-induced changes in visual perception and distortions in the handwriting of examined patients. Subjective experiences reported by the subjects are widely described. There are also efforts to administer questionnaires to people under the influence of psilocybin, in the context of the similarity of the psilocybin-induced state to the initial stages of schizophrenia, as well as research aimed at creating an 'artificial' model of the disease.

  20. The perception of naturalness correlates with low-level visual features of environmental scenes.

    PubMed

    Berman, Marc G; Hout, Michael C; Kardan, Omid; Hunter, MaryCarol R; Yourganov, Grigori; Henderson, John M; Hanayik, Taylor; Karimi, Hossein; Jonides, John

    2014-01-01

    Previous research has shown that interacting with natural environments vs. more urban or built environments can have salubrious psychological effects, such as improvements in attention and memory. Even viewing pictures of nature vs. pictures of built environments can produce similar effects. A major question is: What is it about natural environments that produces these benefits? Problematically, there are many differing qualities between natural and urban environments, making it difficult to narrow down the dimensions of nature that may lead to these benefits. In this study, we set out to uncover visual features that related to individuals' perceptions of naturalness in images. We quantified naturalness in two ways: first, implicitly using a multidimensional scaling analysis and second, explicitly with direct naturalness ratings. Features that seemed most related to perceptions of naturalness were related to the density of contrast changes in the scene, the density of straight lines in the scene, the average color saturation in the scene and the average hue diversity in the scene. We then trained a machine-learning algorithm to predict whether a scene was perceived as being natural or not based on these low-level visual features and we could do so with 81% accuracy. As such we were able to reliably predict subjective perceptions of naturalness with objective low-level visual features. Our results can be used in future studies to determine if these features, which are related to naturalness, may also lead to the benefits attained from interacting with nature.
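
    The pipeline described above, extracting low-level image features and feeding them to a trained classifier, can be sketched as follows. The feature definitions and the nearest-centroid stand-in are illustrative simplifications for exposition, not the authors' feature set or their 81%-accurate model:

    ```python
    import numpy as np

    def scene_features(rgb):
        """Two low-level features related to those the study found predictive:
        density of strong contrast changes (edge density) and mean colour
        saturation. The definitions here are simplified stand-ins."""
        rgb = rgb.astype(float) / 255.0
        mx, mn = rgb.max(axis=2), rgb.min(axis=2)
        saturation = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
        gray = rgb.mean(axis=2)
        gy, gx = np.gradient(gray)
        edge_density = (np.hypot(gx, gy) > 0.1).mean()  # fraction of strong gradients
        return np.array([edge_density, saturation.mean()])

    def nearest_centroid(train_X, train_y, x):
        """Toy classifier: assign x to the class with the closest mean
        feature vector (a stand-in for the paper's learned model)."""
        cents = {c: train_X[train_y == c].mean(axis=0) for c in np.unique(train_y)}
        return min(cents, key=lambda c: np.linalg.norm(x - cents[c]))
    ```

    With features of this kind computed for a labelled image set, "perceived as natural" vs. "not natural" becomes an ordinary supervised-learning problem, which is what allows subjective naturalness ratings to be predicted from objective image statistics.
    
    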

  1. Perception of Elementary Students of Visuals on the Web.

    ERIC Educational Resources Information Center

    El-Tigi, Manal A.; And Others

    The way information is visually designed and synthesized greatly affects how people understand and use that information. Increased use of the World Wide Web as a teaching tool makes it imperative to question how visual/verbal information presented via the Web can increase or restrict understanding. The purpose of this study was to examine…

  2. An empirical investigation of the visual rightness theory of picture perception.

    PubMed

    Locher, Paul J

    2003-10-01

    This research subjected the visual rightness theory of picture perception to experimental scrutiny. It investigated the ability of adults untrained in the visual arts to discriminate between reproductions of original abstract and representational paintings by renowned artists from two experimentally manipulated less well-organized versions of each art stimulus. Perturbed stimuli contained either minor or major disruptions in the originals' principal structural networks. It was found that participants were significantly more successful in discriminating between originals and their highly altered, but not slightly altered, perturbation than expected by chance. Accuracy of detection was found to be a function of style of painting and a viewer's way of thinking about a work as determined from their verbal reactions to it. Specifically, hit rates for originals were highest for abstract works when participants focused on their compositional style and form and highest for representational works when their content and realism were the focus of attention. Findings support the view that visually right (i.e., "good") compositions have efficient structural organizations that are visually salient to viewers who lack formal training in the visual arts.

  3. To See or Not to See: Analyzing Difficulties in Geometry from the Perspective of Visual Perception

    ERIC Educational Resources Information Center

    Gal, Hagar; Linchevski, Liora

    2010-01-01

    In this paper, we consider theories about processes of visual perception and perception-based knowledge representation (VPR) in order to explain difficulties encountered in figural processing in junior high school geometry tasks. In order to analyze such difficulties, we take advantage of the following perspectives of VPR: (1) Perceptual…

  4. The contribution of dynamic visual cues to audiovisual speech perception.

    PubMed

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas, some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear

    PubMed Central

    Clevis, Krien; Hagoort, Peter

    2011-01-01

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information. PMID:20530540

  6. Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear.

    PubMed

    Willems, Roel M; Clevis, Krien; Hagoort, Peter

    2011-09-01

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants' brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information.

  7. Seeing Is the Hardest Thing to See: Using Illusions to Teach Visual Perception

    ERIC Educational Resources Information Center

    Riener, Cedar

    2015-01-01

    This chapter describes three examples of using illusions to teach visual perception. The illusions present ways for students to change their perspective regarding how their eyes work and also offer opportunities to question assumptions regarding their approach to knowledge.

  8. The representation of visual depth perception based on the plenoptic function in the retina and its neural computation in visual cortex V1.

    PubMed

    Songnian, Zhao; Qi, Zou; Chang, Liu; Xuemin, Liu; Shousi, Sun; Jun, Qiu

    2014-04-23

    How it is possible to "faithfully" represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. The results of this study show that the use of plenoptic (or all-optical) functions and their dual plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway's optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. 1. We introduce two different mathematical expressions of the plenoptic functions, Pw and Pv, that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene. 2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex. 3. We propose that size constancy, the vanishing point, and vanishing line form the basis of visual perception of the outside world, and that the introduction of a three

  9. The representation of visual depth perception based on the plenoptic function in the retina and its neural computation in visual cortex V1

    PubMed Central

    2014-01-01

    Background How it is possible to “faithfully” represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. Results The results of this study show that the use of plenoptic (or all-optical) functions and their dual plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway’s optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. Conclusions 1. We introduce two different mathematical expressions of the plenoptic functions, Pw and Pv, that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene. 2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex. 3. We propose that size constancy, the vanishing point, and vanishing line form the basis of visual perception of the outside world, and
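    The dual-plane parameterization invoked in this record identifies each light ray by its intersections with two parallel reference planes, which is how light-field representations reduce the plenoptic function to four coordinates. The sketch below illustrates only that geometric idea; the plane separation `d` and all coordinate values are assumptions for illustration, not quantities from the study.

```python
import numpy as np

def ray_from_two_planes(u, v, s, t, d=1.0):
    """Two-plane (light-field) parameterization: a ray is identified by
    its intersection (u, v) with the plane z = 0 and its intersection
    (s, t) with the parallel plane z = d. Returns the ray's origin and
    a unit direction vector."""
    origin = np.array([u, v, 0.0])
    direction = np.array([s - u, t - v, d])
    return origin, direction / np.linalg.norm(direction)

# A ray that crosses both planes at the same (x, y) runs parallel to z.
origin, direction = ray_from_two_planes(0.0, 0.0, 0.0, 0.0)
print(direction)  # [0. 0. 1.]
```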

  10. Quarrelsome behavior in borderline personality disorder: influence of behavioral and affective reactivity to perceptions of others.

    PubMed

    Sadikaj, Gentiana; Moskowitz, D S; Russell, Jennifer J; Zuroff, David C; Paris, Joel

    2013-02-01

    We examined how the amplification of 3 within-person processes (behavioral reactivity to interpersonal perceptions, affect reactivity to interpersonal perceptions, and behavioral reactivity to a person's own affect) accounts for greater quarrelsome behavior among individuals with borderline personality disorder (BPD). Using an event-contingent recording (ECR) methodology, individuals with BPD (N = 38) and community controls (N = 31) reported on their negative affect, quarrelsome behavior, and perceptions of the interaction partner's agreeable-quarrelsome behavior in interpersonal events during a 20-day period. Behavioral reactivity to negative affect was similar in both groups. However, behavioral reactivity and affect reactivity to interpersonal perceptions were elevated in individuals with BPD relative to community controls; specifically, individuals with BPD reported more quarrelsome behavior and more negative affect during interactions in which they perceived others as more cold-quarrelsome. Greater negative affect reactivity to perceptions of others' cold-quarrelsome behavior partly accounted for the increased quarrelsome behavior reported by individuals with BPD during these interactions. This pattern of results suggests a cycle in which the perception of cold-quarrelsome behavior in others triggers elevated negative affect and quarrelsome behavior in individuals with BPD, which subsequently leads to more quarrelsome behavior from their interaction partners, which in turn leads to perceptions of others as cold-quarrelsome, beginning the cycle anew. 2013 APA, all rights reserved

  11. Visual Memories Bypass Normalization.

    PubMed

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores, neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
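    The canonical normalization computation this record refers to is commonly modeled as divisive: each unit's response is divided by a pooled sum of population activity plus a semi-saturation constant. A minimal sketch of that standard form follows; the parameters `sigma` and `n` are illustrative assumptions, not values from the study.

```python
import numpy as np

def divisive_normalization(drives, sigma=1.0, n=2.0):
    """Canonical divisive normalization: each unit's driving input is
    raised to a power n and divided by the pooled sum of all units'
    powered inputs plus a semi-saturation constant sigma."""
    drives = np.asarray(drives, dtype=float)
    pooled = sigma ** n + np.sum(drives ** n)
    return drives ** n / pooled

# Pitting two stimuli against each other: the same stimulus evokes a
# smaller normalized response when a stronger competitor is present.
alone = divisive_normalization([1.0])
paired = divisive_normalization([1.0, 2.0])
assert paired[0] < alone[0]
```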

  12. The Role of Affective and Cognitive Individual Differences in Social Perception.

    PubMed

    Aquino, Antonio; Haddock, Geoffrey; Maio, Gregory R; Wolf, Lukas J; Alparone, Francesca R

    2016-06-01

    Three studies explored the connection between social perception processes and individual differences in the use of affective and cognitive information in relation to attitudes. Study 1 revealed that individuals high in need for affect (NFA) accentuated differences in evaluations of warm and cold traits, whereas individuals high in need for cognition (NFC) accentuated differences in evaluations of competent and incompetent traits. Study 2 revealed that individual differences in NFA predicted liking of warm or cold targets, whereas individual differences in NFC predicted perceptions of competent or incompetent targets. Furthermore, the effects of NFA and NFC were independent of structural bases and meta-bases of attitudes. Study 3 revealed that differences in the evaluation of warm and cold traits mediated the effects of NFA and NFC on liking of targets. The implications for social perception processes and for individual differences in affect-cognition are discussed. © 2016 by the Society for Personality and Social Psychology, Inc.

  13. Confinement has no effect on visual space perception: The results of the Mars-500 experiment.

    PubMed

    Sikl, Radovan; Simeček, Michal

    2014-02-01

    People confined to a closed space live in a visual environment that differs from a natural open-space environment in several respects. The view is restricted to no more than a few meters, and nearby objects cannot be perceived relative to the position of a horizon. Thus, one might expect to find changes in visual space perception as a consequence of the prolonged experience of confinement. The subjects in our experimental study were participants of the Mars-500 project and spent nearly a year and a half isolated from the outside world during a simulated mission to Mars. The participants were presented with a battery of computer-based psychophysical tests examining their performance on various 3-D perception tasks, and we monitored changes in their perceptual performance throughout their confinement. Contrary to our expectations, no serious effect of the confinement on the crewmembers' 3-D perception was observed in any experiment. Several interpretations of these findings are discussed, including the possibilities that (1) the crewmembers' 3-D perception really did not change significantly, (2) changes in 3-D perception were manifested in the precision rather than the accuracy of perceptual judgments, and/or (3) the experimental conditions and the group sample were problematic.

  14. Ventral and dorsal streams processing visual motion perception (FDG-PET study)

    PubMed Central

    2012-01-01

    Background Earlier functional imaging studies on visually induced self-motion perception (vection) disclosed a bilateral network of activations within primary and secondary visual cortex areas which was combined with signal decreases, i.e., deactivations, in multisensory vestibular cortex areas. This finding led to the concept of a reciprocal inhibitory interaction between the visual and vestibular systems. In order to define areas involved in special aspects of self-motion perception such as intensity and duration of the perceived circular vection (CV) or the amount of head tilt, correlation analyses of the regional cerebral glucose metabolism, rCGM (measured by fluorodeoxyglucose positron-emission tomography, FDG-PET) and these perceptual covariates were performed in 14 healthy volunteers. For analyses of the visual-vestibular interaction, the CV data were compared to a random dot motion stimulation condition (not inducing vection) and a control group at rest (no stimulation at all). Results Group subtraction analyses showed that the visual-vestibular interaction was modified during CV, i.e., the activations within the cerebellar vermis and parieto-occipital areas were enhanced. The correlation analysis between the rCGM and the intensity of visually induced vection, experienced as body tilt, showed a relationship for areas of the multisensory vestibular cortical network (inferior parietal lobule bilaterally, anterior cingulate gyrus), the medial parieto-occipital cortex, the frontal eye fields and the cerebellar vermis. The “earlier” multisensory vestibular areas like the parieto-insular vestibular cortex and the superior temporal gyrus did not appear in the latter analysis. The duration of perceived vection after stimulus stop was positively correlated with rCGM in medial temporal lobe areas bilaterally, which included the (para-)hippocampus, known to be involved in various aspects of memory processing. The amount of head tilt was found to be positively

  15. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy.

    PubMed

    Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W Tecumseh

    2012-07-19

    Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups.

  16. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy

    PubMed Central

    Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W. Tecumseh

    2012-01-01

    Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups. PMID:22688636

  17. Is the Lateralized Categorical Perception of Color a Situational Effect of Language on Color Perception?

    PubMed

    Zhong, Weifang; Li, You; Huang, Yulan; Li, He; Mo, Lei

    2018-01-01

    This study investigated whether and how a person's varied series of lexical categories corresponding to different discriminatory characteristics of the same colors affect his or her perception of colors. In three experiments, Chinese participants were primed to categorize four graduated colors (dark green, light green, light blue, and dark blue) into green and blue; light color and dark color; and dark green, light green, light blue, and dark blue. The participants were then required to complete a visual search task. Reaction times in the visual search task indicated that different lateralized categorical perceptions (CPs) of color corresponded to the various priming situations. These results suggest that all of the lexical categories corresponding to different discriminatory characteristics of the same colors can influence people's perceptions of colors and that color perceptions can be influenced differently by distinct types of lexical categories depending on the context. Copyright © 2017 Cognitive Science Society, Inc.

  18. [Visual perception of Japanese characters and complicated figures: developmental changes of visual P300 event-related potentials].

    PubMed

    Sata, Yoshimi; Inagaki, Masumi; Shirane, Seiko; Kaga, Makiko

    2002-07-01

    In order to evaluate developmental changes in visual perception, P300 event-related potentials (ERPs) during a visual oddball task were recorded in 34 healthy volunteers ranging from 7 to 37 years of age. The latency and amplitude of visual P300 in response to the Japanese ideogram stimuli (a pair of familiar Kanji characters or unfamiliar Kanji characters) and a pair of meaningless complicated figures were measured. Visual P300 was dominant at the parietal area in almost all subjects. There was a significant difference in P300 latency among the three tasks. Reaction times to both kinds of Kanji tasks were significantly shorter than those to the complicated figure task. P300 latencies to the familiar Kanji, unfamiliar Kanji and figure stimuli decreased until 25.8, 26.9 and 29.4 years of age, respectively, and regression analysis revealed that a positive quadratic function could be fitted to the data. Around 9 years of age, the P300 latency/age slope was largest in the unfamiliar Kanji task. These findings suggest that visual P300 development depends on both the complexity of the tasks and the specificity of the stimuli, which might reflect variations in visual information processing.
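    The age at which latency stops decreasing can be read off a positive quadratic fit as the vertex of the parabola, which is how endpoints such as 25.8 or 29.4 years arise from latency-versus-age data. A sketch of that computation with synthetic data follows; the numbers below are made up for illustration and are not the study's measurements.

```python
import numpy as np

# Hypothetical (age, latency-ms) data following a U-shaped trend;
# the minimum is placed at age 26 by construction.
ages = np.array([7, 10, 14, 18, 22, 26, 30, 34, 37], dtype=float)
latencies = 400 + 0.5 * (ages - 26) ** 2

# Fit a positive quadratic: latency = a*age^2 + b*age + c.
a, b, c = np.polyfit(ages, latencies, deg=2)

# The latency minimum (end of the developmental decrease) is the
# parabola's vertex, at age = -b / (2a).
age_of_minimum = -b / (2 * a)
print(round(age_of_minimum, 1))  # 26.0
```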

  19. From elements to perception: local and global processing in visual neurons.

    PubMed

    Spillmann, L

    1999-01-01

    Gestalt psychologists in the early part of the century challenged psychophysical notions that perceptual phenomena can be understood from a punctate (atomistic) analysis of the elements present in the stimulus. Their ideas slowed later attempts to explain vision in terms of single-cell recordings from individual neurons. A rapprochement between Gestalt phenomenology and neurophysiology seemed unlikely when the first ECVP was held in Marburg, Germany, in 1978. Since that time, response properties of neurons have been discovered that invite an interpretation of visual phenomena (including illusions) in terms of neuronal processing by long-range interactions, as first proposed by Mach and Hering in the last century. This article traces a personal journey into the early days of neurophysiological vision research to illustrate the progress that has taken place from the first attempts to correlate single-cell responses with visual perceptions. Whereas initially the receptive-field properties of individual classes of cells (e.g., contrast, wavelength, orientation, motion, disparity, and spatial-frequency detectors) were used to account for relatively simple visual phenomena, nowadays complex perceptions are interpreted in terms of long-range interactions, involving many neurons. This change in paradigm from local to global processing was made possible by recent findings, in the cortex, on horizontal interactions and backward propagation (feedback loops) in addition to classical feedforward processing. These mechanisms are exemplified by studies of the tilt effect and tilt aftereffect, direction-specific motion adaptation, illusory contours, filling-in and fading, figure-ground segregation by orientation and motion contrast, and pop-out in dynamic visual-noise patterns. Major questions for future research and a discussion of their epistemological implications conclude the article.

  20. Steady-state visually evoked potential correlates of human body perception.

    PubMed

    Giabbiconi, Claire-Marie; Jurilj, Verena; Gruber, Thomas; Vocks, Silja

    2016-11-01

    In cognitive neuroscience, interest in the neuronal basis underlying the processing of human bodies is steadily increasing. Based on functional magnetic resonance imaging studies, it is assumed that the processing of pictures of human bodies is anchored in a network of specialized brain areas comprising the extrastriate and the fusiform body area (EBA, FBA). An alternative way to examine the dynamics within these networks is electroencephalography, more specifically so-called steady-state visually evoked potentials (SSVEPs). In SSVEP tasks, a visual stimulus is presented repetitively at a predefined flickering rate and typically elicits a continuous oscillatory brain response at this frequency. This brain response is characterized by an excellent signal-to-noise ratio, a major advantage for source reconstructions. The main goal of the present study was to demonstrate the feasibility of this method for studying human body perception. To that end, we presented pictures of bodies and contrasted the resulting SSVEPs to two control conditions, i.e., non-objects and pictures of everyday objects (chairs). We found specific SSVEP amplitude differences between bodies and both control conditions. Source reconstructions localized the SSVEP generators to a network of temporal, occipital and parietal areas. Interestingly, only body perception resulted in activity differences in middle temporal and lateral occipitotemporal areas, most likely reflecting the EBA/FBA.

  1. Egocentric Direction and Position Perceptions are Dissociable Based on Only Static Lane Edge Information

    PubMed Central

    Nakashima, Ryoichi; Iwai, Ritsuko; Ueda, Sayako; Kumada, Takatsune

    2015-01-01

    When observers perceive several objects in a space, they must at the same time effectively perceive their own position as a viewpoint. However, little is known about observers’ percepts of their own spatial location based on the visual scene information viewed from that position. Previous studies indicate that two distinct visual spatial processes exist in locomotion situations: egocentric position perception and egocentric direction perception. Those studies examined such perceptions in information-rich visual environments, where much dynamic and static visual information was available. This study examined these two perceptions in information-impoverished environments containing only static lane edge information. We investigated the visual factors associated with static lane edge information that may affect these perceptions, focusing on the effects of two factors on egocentric direction and position perceptions. One is the “uprightness factor”: “far” visual information appears higher in the visual field than “near” visual information. The other is the “central vision factor”: observers usually view “far” visual information with central (i.e., foveal) vision and “near” visual information with peripheral vision. Experiment 1 examined the effect of the “uprightness factor” using normal and inverted road images. Experiment 2 examined the effect of the “central vision factor” using normal and transposed road images, in which the upper half of the normal image was presented under the lower half. Experiment 3 aimed to replicate the results of Experiments 1 and 2. Results showed that egocentric direction perception is impaired by image inversion or image transposition, whereas egocentric position perception is robust against these image transformations. That is, both “uprightness” and “central vision” factors are important for egocentric direction perception, but not

  2. To what extent do Gestalt grouping principles influence tactile perception?

    PubMed

    Gallace, Alberto; Spence, Charles

    2011-07-01

    Since their formulation by the Gestalt movement more than a century ago, the principles of perceptual grouping have primarily been investigated in the visual modality and, to a lesser extent, in the auditory modality. The present review addresses the question of whether the same grouping principles also affect the perception of tactile stimuli. Although, to date, only a few studies have explicitly investigated the existence of Gestalt grouping principles in the tactile modality, we argue that many more studies have indirectly provided evidence relevant to this topic. Reviewing this body of research, we argue that similar principles to those reported previously in visual and auditory studies also govern the perceptual grouping of tactile stimuli. In particular, we highlight evidence showing that the principles of proximity, similarity, common fate, good continuation, and closure affect tactile perception in both unimodal and crossmodal settings. We also highlight that the grouping of tactile stimuli is often affected by visual and auditory information that happens to be presented simultaneously. Finally, we discuss the theoretical and applied benefits that might pertain to the further study of Gestalt principles operating in both unisensory and multisensory tactile perception.

  3. Subjective visual perception: from local processing to emergent phenomena of brain activity

    PubMed Central

    Panagiotaropoulos, Theofanis I.; Kapoor, Vishal; Logothetis, Nikos K.

    2014-01-01

    The combination of electrophysiological recordings with ambiguous visual stimulation made possible the detection of neurons that represent the content of subjective visual perception and perceptual suppression in multiple cortical and subcortical brain regions. These neuronal populations, commonly referred to as the neural correlates of consciousness, are more likely to be found in the temporal and prefrontal cortices as well as the pulvinar, indicating that the content of perceptual awareness is represented with higher fidelity in higher-order association areas of the cortical and thalamic hierarchy, reflecting the outcome of competitive interactions between conflicting sensory information resolved in earlier stages. However, despite the significant insights into conscious perception gained through monitoring the activities of single neurons and small, local populations, the immense functional complexity of the brain arising from correlations in the activity of its constituent parts suggests that local, microscopic activity could only partially reveal the mechanisms involved in perceptual awareness. Rather, the dynamics of functional connectivity patterns on a mesoscopic and macroscopic level could be critical for conscious perception. Understanding these emergent spatio-temporal patterns could be informative not only for the stability of subjective perception but also for spontaneous perceptual transitions suggested to depend either on the dynamics of antagonistic ensembles or on global intrinsic activity fluctuations that may act upon explicit neural representations of sensory stimuli and induce perceptual reorganization. Here, we review the most recent results from local activity recordings and discuss the potential role of effective, correlated interactions during perceptual awareness. PMID:24639588

  4. Cortical Double-Opponent Cells in Color Perception: Perceptual Scaling and Chromatic Visual Evoked Potentials.

    PubMed

    Nunez, Valerie; Shapley, Robert M; Gordon, James

    2018-01-01

    In the early visual cortex V1, there are currently only two known neural substrates for color perception: single-opponent and double-opponent cells. Our aim was to explore the relative contributions of these neurons to color perception. We measured the perceptual scaling of color saturation for equiluminant color checkerboard patterns (designed to stimulate double-opponent neurons preferentially) and uniformly colored squares (designed to stimulate only single-opponent neurons) at several cone contrasts. The spatially integrative responses of single-opponent neurons would produce the same response magnitude for checkerboards as for uniform squares of the same space-averaged cone contrast. However, perceived saturation of color checkerboards was higher than for the corresponding squares. The perceptual results therefore imply that double-opponent cells are involved in color perception of patterns. We also measured the chromatic visual evoked potential (cVEP) produced by the same stimuli; checkerboard cVEPs were much larger than those for corresponding squares, implying that double-opponent cells also contribute to the cVEP response. The total Fourier power of the cVEP grew sublinearly with cone contrast. However, the 6-Hz Fourier component's power grew linearly with contrast, like saturation perception. This may also indicate that cortical coding of color depends on response dynamics.
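    Isolating the power of a single Fourier component, such as the 6-Hz component analyzed here, amounts to picking one bin of a discrete Fourier transform. The sketch below illustrates that step only; the sampling rate and the synthetic signal are assumptions for illustration, not the study's recordings or analysis pipeline.

```python
import numpy as np

# Synthetic EEG-like signal: a dominant 6-Hz component plus a weaker
# 11-Hz component, sampled at an assumed rate of 500 Hz for 2 s.
fs = 500.0
t = np.arange(0, 2.0, 1 / fs)
signal = 3.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 11 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

idx_6hz = np.argmin(np.abs(freqs - 6.0))      # bin closest to 6 Hz
power_6hz = np.abs(spectrum[idx_6hz]) ** 2    # power of that component
total_power = np.sum(np.abs(spectrum) ** 2)   # total Fourier power

# The 6-Hz component dominates this synthetic signal.
assert power_6hz / total_power > 0.9
```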

  5. Cortical Double-Opponent Cells in Color Perception: Perceptual Scaling and Chromatic Visual Evoked Potentials

    PubMed Central

    Shapley, Robert M.; Gordon, James

    2018-01-01

    In the early visual cortex V1, there are currently only two known neural substrates for color perception: single-opponent and double-opponent cells. Our aim was to explore the relative contributions of these neurons to color perception. We measured the perceptual scaling of color saturation for equiluminant color checkerboard patterns (designed to stimulate double-opponent neurons preferentially) and uniformly colored squares (designed to stimulate only single-opponent neurons) at several cone contrasts. The spatially integrative responses of single-opponent neurons would produce the same response magnitude for checkerboards as for uniform squares of the same space-averaged cone contrast. However, perceived saturation of color checkerboards was higher than for the corresponding squares. The perceptual results therefore imply that double-opponent cells are involved in color perception of patterns. We also measured the chromatic visual evoked potential (cVEP) produced by the same stimuli; checkerboard cVEPs were much larger than those for corresponding squares, implying that double-opponent cells also contribute to the cVEP response. The total Fourier power of the cVEP grew sublinearly with cone contrast. However, the 6-Hz Fourier component’s power grew linearly with contrast, like saturation perception. This may also indicate that cortical coding of color depends on response dynamics. PMID:29375753

  6. Psychopathic traits affect the visual exploration of facial expressions.

    PubMed

    Boll, Sabrina; Gamer, Matthias

    2016-05-01

    Deficits in emotional reactivity and recognition have been reported in psychopathy. Impaired attention to the eyes along with amygdala malfunctions may underlie these problems. Here, we investigated how different facets of psychopathy modulate the visual exploration of facial expressions by assessing personality traits in a sample of healthy young adults using an eye-tracking based face perception task. Fearless Dominance (the interpersonal-emotional facet of psychopathy) and Coldheartedness scores predicted reduced face exploration consistent with findings on lowered emotional reactivity in psychopathy. Moreover, participants high on the social deviance facet of psychopathy ('Self-Centered Impulsivity') showed a reduced bias to shift attention towards the eyes. Our data suggest that facets of psychopathy modulate face processing in healthy individuals and reveal possible attentional mechanisms which might be responsible for the severe impairments of social perception and behavior observed in psychopathy. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. [Social behavior, musicality and visual perception in mongoloid children (author's transl)].

    PubMed

    Rabensteiner, B

    1975-01-01

    Forty-nine mongoloid and 48 non-mongol test persons of equivalent age and intelligence were selected and studied with respect to social behavior, speech disorders (observation of behavior), musicality and visual perception. There were significant differences in favor of the mongols with respect to social adaption. Speech disorders of all kinds occurred significantly more frequently in mongol children; stuttering was significantly more frequent in the boys. The mongol group did significantly better in the musicality test; the difference in the rhythmical part was highly significant. The average differences in the capacity for visual discrimination of colors, geometrical forms and the spatial relationship of geometrical forms were not significant.

  8. Categorical Perception of Affective and Linguistic Facial Expressions

    ERIC Educational Resources Information Center

    McCullough, Stephen; Emmorey, Karen

    2009-01-01

    Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX…

  9. Visual Memories Bypass Normalization

    PubMed Central

    Bloem, Ilona M.; Watanabe, Yurika L.; Kibbe, Melissa M.; Ling, Sam

    2018-01-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores—neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation. PMID:29596038

  10. Playing the electric light orchestra—how electrical stimulation of visual cortex elucidates the neural basis of perception

    PubMed Central

    Cicmil, Nela; Krug, Kristine

    2015-01-01

    Vision research has the potential to reveal fundamental mechanisms underlying sensory experience. Causal experimental approaches, such as electrical microstimulation, provide a unique opportunity to test the direct contributions of visual cortical neurons to perception and behaviour. But in spite of their importance, causal methods constitute a minority of the experiments used to investigate the visual cortex to date. We reconsider the function and organization of visual cortex according to results obtained from stimulation techniques, with a special emphasis on electrical stimulation of small groups of cells in awake subjects who can report their visual experience. We compare findings from humans and monkeys, striate and extrastriate cortex, and superficial versus deep cortical layers, and identify a number of revealing gaps in the ‘causal map’ of visual cortex. Integrating results from different methods and species, we provide a critical overview of the ways in which causal approaches have been used to further our understanding of circuitry, plasticity and information integration in visual cortex. Electrical stimulation not only elucidates the contributions of different visual areas to perception, but also contributes to our understanding of neuronal mechanisms underlying memory, attention and decision-making. PMID:26240421

  11. Perception, memory and aesthetics of indeterminate art.

    PubMed

    Ishai, Alumit; Fairhall, Scott L; Pepperell, Robert

    2007-07-12

    Indeterminate art, in which familiar objects are only suggestive, invokes a perceptual conundrum as apparently detailed and vivid images resist identification. We hypothesized that compared with paintings that depict meaningful content, object recognition in indeterminate images would be delayed, and tested whether aesthetic affect depends on meaningful content. Subjects performed object recognition and judgment of aesthetic affect tasks. Response latencies were significantly longer for indeterminate images and subjects perceived recognizable objects in 24% of these paintings. Although the aesthetic affect rating of all paintings was similar, judgment latencies for the indeterminate paintings were significantly longer. A surprise memory test revealed that more representational than indeterminate paintings were remembered and that affective strength increased the probability of subsequent recall. Our results suggest that perception and memory of art depend on semantic aspects, whereas aesthetic affect depends on formal visual features. The longer latencies associated with indeterminate paintings reflect the underlying cognitive processes that mediate object resolution. Indeterminate art works therefore comprise a rich set of stimuli with which the neural correlates of visual perception can be investigated.

  12. Stochastic sensitivity of a bistable energy model for visual perception

    NASA Astrophysics Data System (ADS)

    Pisarchik, Alexander N.; Bashkirtseva, Irina; Ryashko, Lev

    2017-01-01

    Modern trends in physiology, psychology and cognitive neuroscience suggest that noise is an essential component of brain functionality and self-organization. With adequate noise the brain as a complex dynamical system can easily access different ordered states and improve signal detection for decision-making by preventing deadlocks. Using a stochastic sensitivity function approach, we analyze how sensitive equilibrium points are to Gaussian noise in a bistable energy model often used for qualitative description of visual perception. The probability distribution of noise-induced transitions between two coexisting percepts is calculated at different noise intensity and system stability. Stochastic squeezing of the hysteresis range and its transition from positive (bistable regime) to negative (intermittency regime) are demonstrated as the noise intensity increases. The hysteresis is more sensitive to noise in the system with higher stability.
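A bistable energy model of the kind described can be sketched as an overdamped particle in a double-well potential driven by Gaussian noise. This is a minimal generic stand-in under assumed parameters; the paper's actual potential, noise model, and stability analysis are not reproduced here.

```python
import random

def simulate_bistable(x0=1.0, noise=0.3, dt=0.01, steps=20000, seed=1):
    """Euler-Maruyama simulation of an overdamped particle in the
    double-well potential V(x) = x**4/4 - x**2/2, a generic stand-in
    for a bistable energy model of perception (the paper's actual
    potential and parameters are not given here). The wells at x = -1
    and x = +1 play the role of the two competing percepts; Gaussian
    noise drives switches between them. Returns the switch count."""
    rng = random.Random(seed)
    x = x0
    side = 1 if x >= 0 else -1
    switches = 0
    for _ in range(steps):
        drift = x - x ** 3                      # -dV/dx
        x += drift * dt + noise * dt ** 0.5 * rng.gauss(0.0, 1.0)
        if (x >= 0) != (side > 0):              # crossed the barrier
            switches += 1
            side = -side
    return switches
```

Raising the noise intensity shifts the system from a stable percept toward intermittency, mirroring the noise-induced transitions described above: `simulate_bistable(noise=0.8)` produces far more percept switches than `simulate_bistable(noise=0.1)`.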

  13. Multimodal indices to Japanese and French prosodically expressed social affects.

    PubMed

    Rilliard, Albert; Shochi, Takaaki; Martin, Jean-Claude; Erickson, Donna; Aubergé, Véronique

    2009-01-01

    Whereas several studies have explored the expression of emotions, little is known on how the visual and audio channels are combined during production of what we call the more controlled social affects, for example, "attitudinal" expressions. This article presents a perception study of the audiovisual expression of 12 Japanese and 6 French attitudes in order to understand the contribution of audio and visual modalities for affective communication. The relative importance of each modality in the perceptual decoding of the expressions of four speakers is analyzed as a first step towards a deeper comprehension of their influence on the expression of social affects. Then, the audiovisual productions of two speakers (one for each language) are acoustically (F0, duration and intensity) and visually (in terms of Action Units) analyzed, in order to match the relation between objective parameters and listeners' perception of these social affects. The most pertinent objective features, either acoustic or visual, are then discussed, in a bilingual perspective: for example, the relative influence of fundamental frequency for attitudinal expression in both languages is discussed, and the importance of a certain aspect of the voice quality dimension in Japanese is underlined.

  14. Steady-state visual evoked potentials as a research tool in social affective neuroscience

    PubMed Central

    Wieser, Matthias J.; Miskovic, Vladimir; Keil, Andreas

    2017-01-01

    Like many other primates, humans place a high premium on social information transmission and processing. One important aspect of this information concerns the emotional state of other individuals, conveyed by distinct visual cues such as facial expressions, overt actions, or by cues extracted from the situational context. A rich body of theoretical and empirical work has demonstrated that these socio-emotional cues are processed by the human visual system in a prioritized fashion, in the service of optimizing social behavior. Furthermore, socio-emotional perception is highly dependent on situational contexts and previous experience. Here, we review current issues in this area of research and discuss the utility of the steady-state visual evoked potential (ssVEP) technique for addressing key empirical questions. Methodological advantages and caveats are discussed with particular regard to quantifying time-varying competition among multiple perceptual objects, trial-by-trial analysis of visual cortical activation, functional connectivity, and the control of low-level stimulus features. Studies on facial expression and emotional scene processing are summarized, with an emphasis on viewing faces and other social cues in emotional contexts, or when competing with each other. Further, because the ssVEP technique can be readily accommodated to studying the viewing of complex scenes with multiple elements, it enables researchers to advance theoretical models of socio-emotional perception, based on complex, quasi-naturalistic viewing situations. PMID:27699794

  15. Color names, color categories, and color-cued visual search: Sometimes, color perception is not categorical

    PubMed Central

    Brown, Angela M; Lindsey, Delwin T; Guckes, Kevin M

    2011-01-01

    The relation between colors and their names is a classic case-study for investigating the Sapir-Whorf hypothesis that categorical perception is imposed on perception by language. Here, we investigate the Sapir-Whorf prediction that visual search for a green target presented among blue distractors (or vice versa) should be faster than search for a green target presented among distractors of a different color of green (or for a blue target among different blue distractors). Gilbert, Regier, Kay & Ivry (2006) reported that this Sapir-Whorf effect is restricted to the right visual field (RVF), because the major brain language centers are in the left cerebral hemisphere. We found no categorical effect at the Green|Blue color boundary, and no categorical effect restricted to the RVF. Scaling of perceived color differences by Maximum Likelihood Difference Scaling (MLDS) also showed no categorical effect, including no effect specific to the RVF. Two models fit the data: a color difference model based on MLDS and a standard opponent-colors model of color discrimination based on the spectral sensitivities of the cones. Neither of these models, nor any of our data, suggested categorical perception of colors at the Green|Blue boundary, in either visual field. PMID:21980188

  16. Factors affecting patient's perception of anticancer treatments side-effects: an observational study.

    PubMed

    Russo, Stefania; Cinausero, Marika; Gerratana, Lorenzo; Bozza, Claudia; Iacono, Donatella; Driol, Pamela; Deroma, Laura; Sottile, Roberta; Fasola, Gianpiero; Puglisi, Fabio

    2014-02-01

    Analysis of the relative importance of side effects of anticancer therapy is extremely useful in the process of clinical decision making. There is evidence that patients' perception of the side effects of anticancer treatments changes over time. The aim of this study was to evaluate cancer patients' perceptions of physical and non-physical side effects of contemporary anticancer therapy. Four hundred and sixty-four patients entered the study (153 men and 311 women). Participants were asked to rank their side effects in order of distress by using two sets of cards naming physical and non-physical effects, respectively. Influencing factors, including treatment and patient characteristics, were also analysed. Patients ranked the non-physical side effect 'Affects my family or partner' first. 'Constantly tired' and 'Loss of hair' were ranked second and third, respectively. Significant differences from previous studies on this topic emerged. In particular, 'Vomiting', a predominant concern in previous studies, almost disappeared, whereas 'Nausea' and 'Loss of hair' remained important side effects in the patients' perception. Interestingly, marital status was predominant in driving patients' perception, being associated with several side effects ('Constantly tired', 'Loss of appetite', 'Affects my work/Home duties', 'Affects my social activities', 'Infertility'). Other significant factors influencing patients' perception of side effects included age, disease characteristics and ongoing anticancer therapy. This study provided information on the current status of patients' perceptions of side effects of anticancer treatment. These results could be used in pre-treatment patient education and counselling.

  17. A new look at emotion perception: Concepts speed and shape facial emotion recognition.

    PubMed

    Nook, Erik C; Lindquist, Kristen A; Zaki, Jamil

    2015-10-01

    Decades ago, the "New Look" movement challenged how scientists thought about vision by suggesting that conceptual processes shape visual perceptions. Currently, affective scientists are likewise debating the role of concepts in emotion perception. Here, we utilized a repetition-priming paradigm in conjunction with signal detection and individual difference analyses to examine how providing emotion labels, which correspond to discrete emotion concepts, affects emotion recognition. In Study 1, pairing emotional faces with emotion labels (e.g., "sad") increased individuals' speed and sensitivity in recognizing emotions. Additionally, individuals with alexithymia, who have difficulty labeling their own emotions, struggled to recognize emotions based on visual cues alone, but not when emotion labels were provided. Study 2 replicated these findings and further demonstrated that emotion concepts can shape perceptions of facial expressions. Together, these results suggest that emotion perception involves conceptual processing. We discuss the implications of these findings for affective, social, and clinical psychology. (c) 2015 APA, all rights reserved.
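The signal detection analysis this study relies on can be illustrated with the standard sensitivity index d'. The rates below are hypothetical, and the clipping correction is a common convention rather than the authors' stated procedure.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' from signal detection theory, the kind of
    analysis the study applies to emotion recognition:
    d' = z(hit rate) - z(false alarm rate). Rates are clipped away
    from 0 and 1 to keep z finite (a common correction; the authors'
    exact procedure is not specified here)."""
    def clip(p):
        return min(max(p, 0.01), 0.99)
    z = NormalDist().inv_cdf
    return z(clip(hit_rate)) - z(clip(false_alarm_rate))

# Hypothetical rates: a label condition that raises hits and lowers
# false alarms shows up as a larger d' (greater sensitivity):
without_label = d_prime(0.75, 0.30)
with_label = d_prime(0.85, 0.25)
```

An increase in d' under labeling, rather than a shift in response bias alone, is what would support the claim that labels sharpen recognition itself.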

  18. On the relationship between personal experience, affect and risk perception: The case of climate change

    PubMed Central

    van der Linden, Sander

    2014-01-01

    Examining the conceptual relationship between personal experience, affect, and risk perception is crucial in improving our understanding of how emotional and cognitive process mechanisms shape public perceptions of climate change. This study is the first to investigate the interrelated nature of these variables by contrasting three prominent social-psychological theories. In the first model, affect is viewed as a fast and associative information processing heuristic that guides perceptions of risk. In the second model, affect is seen as flowing from cognitive appraisals (i.e., affect is thought of as a post-cognitive process). Lastly, a third, dual-process model is advanced that integrates aspects from both theoretical perspectives. Four structural equation models were tested on a national sample (N = 808) of British respondents. Results initially provide support for the “cognitive” model, where personal experience with extreme weather is best conceptualized as a predictor of climate change risk perception and, in turn, risk perception a predictor of affect. Yet, closer examination strongly indicates that at the same time, risk perception and affect reciprocally influence each other in a stable feedback system. It is therefore concluded that both theoretical claims are valid and that a dual-process perspective provides a superior fit to the data. Implications for theory and risk communication are discussed. © 2014 The Authors. European Journal of Social Psychology published by John Wiley & Sons, Ltd. PMID:25678723

  19. Affective associative learning modifies the sensory perception of nociceptive stimuli without participant's awareness.

    PubMed

    Wunsch, Annabel; Philippot, Pierre; Plaghki, Léon

    2003-03-01

    The present experiment examined the possibility to change the sensory and/or the affective perception of thermal stimuli by an emotional associative learning procedure known to operate without participants' awareness (evaluative conditioning). In a mixed design, an aversive conditioning procedure was compared between subjects to an appetitive conditioning procedure. Both groups were also compared within-subject to a control condition (neutral conditioning). The aversive conditioning was induced by associating non-painful and painful thermal stimuli - delivered on the right forearm - with unpleasant slides. The appetitive conditioning consisted of an association between thermal stimuli - also delivered on the right forearm - and pleasant slides. The control condition consisted of an association between thermal stimuli - delivered for all participants on the left forearm - and neutral slides. The effects of the conditioning procedures on the sensory and affective dimensions were evaluated with visual analogue scale (VAS)-intensity and VAS-unpleasantness. Startle reflex was used as a physiological index of emotional valence disposition. Results confirmed that no participants were aware of the conditioning procedure. After unpleasant slides (aversive conditioning), non-painful and painful thermal stimuli were judged more intense and more unpleasant than when preceded by neutral slides (control condition) or pleasant slides (appetitive conditioning). Despite a strong correlation between the intensity and the unpleasantness scales, effects were weaker for the affective scale and became statistically non-significant when VAS-intensity was used as covariate. This experiment shows that it is possible to modify the perception of intensity of thermal stimuli by a non-conscious learning procedure based on the transfer of the valence of the unconditioned stimuli (pleasant or unpleasant slides) towards the conditioned stimuli (non-painful and painful thermal stimuli). These

  20. Healthy children show gender differences in correlations between nonverbal cognitive ability and brain activation during visual perception.

    PubMed

    Asano, Kohei; Taki, Yasuyuki; Hashizume, Hiroshi; Sassa, Yuko; Thyreau, Benjamin; Asano, Michiko; Takeuchi, Hikaru; Kawashima, Ryuta

    2014-08-08

    Humans perceive textual and nontextual information in visual perception, and both depend on language. In childhood education, students exhibit diverse perceptual abilities, such that some students process textual information better and some process nontextual information better. These predispositions involve many factors, including cognitive ability and learning preference. However, the relationship between verbal and nonverbal cognitive abilities and brain activation during visual perception has not yet been examined in children. We used functional magnetic resonance imaging to examine the relationship between nonverbal and verbal cognitive abilities and brain activation during nontextual visual perception in large numbers of children. A significant positive correlation was found between nonverbal cognitive abilities and brain activation in the right temporoparietal junction, which is thought to be related to attention reorienting. This significant positive correlation existed only in boys. These findings suggested that male brain activation differed from female brain activation, and that this depended on individual cognitive processes, even if there was no gender difference in behavioral performance. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  1. Optical Phonetics and Visual Perception of Lexical and Phrasal Stress in English

    ERIC Educational Resources Information Center

    Scarborough, Rebecca; Keating, Patricia; Mattys, Sven L.; Cho, Taehong; Alwan, Abeer

    2009-01-01

    In a study of optical cues to the visual perception of stress, three American English talkers spoke words that differed in lexical stress and sentences that differed in phrasal stress, while video and movements of the face were recorded. The production of stressed and unstressed syllables from these utterances was analyzed along many measures of…

  2. The role of prestimulus activity in visual extinction

    PubMed Central

    Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl

    2013-01-01

    Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. PMID:23680398

  3. Comparison of spatiotemporal cortical activation pattern during visual perception of Korean, English, Chinese words: an event-related potential study.

    PubMed

    Kim, Kyung Hwan; Kim, Ja Hyun

    2006-02-20

    The aim of this study was to compare spatiotemporal cortical activation patterns during the visual perception of Korean, English, and Chinese words. The comparison of these three languages offers an opportunity to study the effect of written forms on cortical processing of visually presented words, because of partial similarity/difference among words of these languages, and the familiarity of native Koreans with these three languages at the word level. Single-character words and pictograms were excluded from the stimuli in order to activate neuronal circuitries that are involved only in word perception. Since a variety of cerebral processes are sequentially evoked during visual word perception, a high-temporal resolution is required and thus we utilized event-related potential (ERP) obtained from high-density electroencephalograms. The differences and similarities observed from statistical analyses of ERP amplitudes, the correlation between ERP amplitudes and response times, and the patterns of current source density, appear to be in line with demands of visual and semantic analysis resulting from the characteristics of each language, and the expected task difficulties for native Korean subjects.

  4. Accuracy and Tuning of Flow Parsing for Visual Perception of Object Motion During Self-Motion

    PubMed Central

    Niehorster, Diederick C.

    2017-01-01

    How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments; and that increasing self-motion and object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing. PMID:28567272
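The flow parsing gain measured by the retinal-motion-nulling method can be made concrete with a small sketch. The numbers and function names are illustrative assumptions, not the authors' data or formulas.

```python
def perceived_object_motion(retinal_motion, flow_component, gain):
    """Perceived scene-relative motion under flow parsing: the visual
    system subtracts a fraction (the flow parsing gain) of the optic
    flow component from the object's retinal motion. A gain of 1
    would be complete, accurate parsing; the study reports gains
    below 1."""
    return retinal_motion - gain * flow_component

def nulling_gain(nulling_retinal_motion, flow_component):
    """Gain implied by the retinal-motion-nulling method: the retinal
    motion at which the object appears world-stationary reveals how
    much of the flow component was subtracted (an illustrative
    reconstruction, not the authors' exact formula)."""
    return nulling_retinal_motion / flow_component

# Hypothetical numbers: if the object must move at 1.4 deg/s on the
# retina (in the flow direction) to appear world-stationary while the
# self-motion flow component is 2.0 deg/s, the implied gain is 0.7,
# i.e. flow parsing is incomplete:
gain = nulling_gain(1.4, 2.0)
```

A gain below unity, as in this sketch, means a world-stationary object would be perceived as moving slightly against the flow direction, which is the study's evidence that visual information alone does not suffice.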

  5. Perception and psychological evaluation for visual and auditory environment based on the correlation mechanisms

    NASA Astrophysics Data System (ADS)

    Fujii, Kenji

    2002-06-01

    In this dissertation, the correlation mechanism is introduced for modeling processes in visual perception. It has been well described that the correlation mechanism is effective for describing subjective attributes in auditory perception. The main result is that it is possible to apply the correlation mechanism to processes in temporal vision and spatial vision, as well as in audition. (1) The psychophysical experiment was performed on subjective flicker rates for complex waveforms. A remarkable result is that the phenomenon of missing fundamental is found in temporal vision, analogous to auditory pitch perception. This implies the existence of a correlation mechanism in the visual system. (2) For spatial vision, the autocorrelation analysis provides useful measures for describing three primary perceptual properties of visual texture: contrast, coarseness, and regularity. Another experiment showed that the degree of regularity is a salient cue for texture preference judgment. (3) In addition, the autocorrelation function (ACF) and inter-aural cross-correlation function (IACF) were applied for analysis of the temporal and spatial properties of environmental noise. It was confirmed that the acoustical properties of aircraft noise and traffic noise are well described. These analyses provided useful parameters extracted from the ACF and IACF in assessing the subjective annoyance for noise. Thesis advisor: Yoichi Ando. Copies of this thesis written in English can be obtained from Junko Atagi, 6813 Mosonou, Saijo-cho, Higashi-Hiroshima 739-0024, Japan. E-mail address: atagi@urban.ne.jp.
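The autocorrelation analysis underlying these results can be sketched for a 1-D signal. This is a simplified illustration (the dissertation's texture analyses are 2-D, and its regularity measure is not specified here); the periodic test pattern is an assumption.

```python
def autocorrelation(signal):
    """Normalized autocorrelation function (ACF) of a 1-D signal, the
    basic tool the dissertation applies to temporal and spatial
    vision. Peaks at nonzero lags indicate periodic structure; the
    height of such a peak is one simple regularity measure.
    (1-D sketch only; the texture analyses above are 2-D.)"""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    var = sum(c * c for c in centered)
    acf = []
    for lag in range(n):
        num = sum(centered[i] * centered[i + lag] for i in range(n - lag))
        acf.append(num / var)
    return acf

# A strictly periodic pattern (period 3) produces a strong ACF peak
# at lag 3, whereas an irregular pattern's ACF decays without peaks:
periodic = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
acf = autocorrelation(periodic)
```

In this sketch `acf[3]` is large and positive while `acf[1]` is negative, which is how a regularity cue like the one described above could be read off the ACF.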

  6. Professors' Facebook content affects students' perceptions and expectations.

    PubMed

    Sleigh, Merry J; Smith, Aimee W; Laboe, Jason

    2013-07-01

    Facebook users must make choices about level of self-disclosure, and this self-disclosure can influence perceptions of the profile's author. We examined whether the specific type of self-disclosure on a professor's profile would affect students' perceptions of the professor and expectations of his classroom. We created six Facebook profiles for a fictitious male professor, each with a specific emphasis: politically conservative, politically liberal, religious, family oriented, socially oriented, or professional. Undergraduate students randomly viewed one profile and responded to questions that assessed their perceptions and expectations. The social professor was perceived as less skilled but more popular, while his profile was perceived as inappropriate and entertaining. Students reacted more strongly and negatively to the politically focused profiles in comparison to the religious, family, and professional profiles. Students reported being most interested in professional information on a professor's Facebook profile, yet they reported being least influenced by the professional profile. In general, students expressed neutrality about their interest in finding and friending professors on Facebook. These findings suggest that students have the potential to form perceptions about the classroom environment and about their professors based on the specific details disclosed in professors' Facebook profiles.

  7. Audio-visual speech perception in adult readers with dyslexia: an fMRI study.

    PubMed

    Rüsseler, Jascha; Ye, Zheng; Gerth, Ivonne; Szycik, Gregor R; Münte, Thomas F

    2018-04-01

    Developmental dyslexia is a specific deficit in reading and spelling that often persists into adulthood. In the present study, we used slow event-related fMRI and independent component analysis to identify brain networks involved in perception of audio-visual speech in a group of adult readers with dyslexia (RD) and a group of fluent readers (FR). Participants saw a video of a female speaker saying a disyllabic word. In the congruent condition, audio and video input were identical whereas in the incongruent condition, the two inputs differed. Participants had to respond to occasionally occurring animal names. The independent components analysis (ICA) identified several components that were differently modulated in FR and RD. Two of these components including fusiform gyrus and occipital gyrus showed less activation in RD compared to FR, possibly indicating a deficit in extracting the face information needed to integrate auditory and visual information in natural speech perception. A further component centered on the superior temporal sulcus (STS) also exhibited less activation in RD compared to FR. This finding is corroborated in the univariate analysis that shows less activation in STS for RD compared to FR. These findings suggest a general impairment in recruitment of audiovisual processing areas in dyslexia during the perception of natural speech.

  8. Does the Perception that Stress Affects Health Matter? The Association with Health and Mortality

    PubMed Central

    Keller, Abiola; Litzelman, Kristin; Wisk, Lauren E.; Maddox, Torsheika; Cheng, Erika Rose; Creswell, Paul D.; Witt, Whitney P.

    2012-01-01

    Objective: This study sought to examine the relationship among the amount of stress, the perception that stress affects health, and health and mortality outcomes in a nationally-representative sample of U.S. adults. Methods: Data from the 1998 National Health Interview Survey were linked to prospective National Death Index mortality data through 2006. Separate logistic regression models were used to examine the factors associated with current health status and psychological distress. Cox proportional hazard models were used to determine the impact of perceiving that stress affects health on all-cause mortality. Each model specifically examined the interaction between the amount of stress and the perception that stress affects health, controlling for sociodemographic, health behavior, and access to healthcare factors. Results: 33.7% of nearly 186 million (n=28,753) U.S. adults perceived that stress affected their health a lot or to some extent. Both higher levels of reported stress and the perception that stress affects health were independently associated with an increased likelihood of worse health and mental health outcomes. The amount of stress and the perception that stress affects health interacted such that those who reported a lot of stress and that stress impacted their health a lot had a 43% increased risk of premature death (HR = 1.43, 95% CI [1.20, 1.71]). Conclusions: High amounts of stress and the perception that stress impacts health are each associated with poor health and mental health. Individuals who perceived that stress affects their health and reported a large amount of stress had an increased risk of premature death. PMID:22201278

  9. Perceptibility curve test for digital radiographs before and after correction for attenuation and correction for attenuation and visual response.

    PubMed

    Li, G; Welander, U; Yoshiura, K; Shi, X-Q; McDavid, W D

    2003-11-01

    Two digital image processing methods, correction for X-ray attenuation and correction for attenuation and visual response, have been developed. The aim of the present study was to compare digital radiographs before and after correction for attenuation and correction for attenuation and visual response by means of a perceptibility curve test. Radiographs were exposed of an aluminium test object containing holes ranging from 0.03 mm to 0.30 mm with increments of 0.03 mm. Fourteen radiographs were exposed with the Dixi system (Planmeca Oy, Helsinki, Finland) and twelve radiographs were exposed with the F1 iOX system (Fimet Oy, Monninkylä, Finland) from low to high exposures covering the full exposure ranges of the systems. Radiographs obtained from the Dixi and F1 iOX systems were 12 bit and 8 bit images, respectively. Original radiographs were then processed for correction for attenuation and correction for attenuation and visual response. Thus, two series of radiographs were created. Ten viewers evaluated all the radiographs in the same random order under the same viewing conditions. The object detail having the lowest perceptible contrast was recorded for each observer. Perceptibility curves were plotted according to the mean of observer data. The perceptibility curves for processed radiographs obtained with the F1 iOX system are higher than those for originals in the exposure range up to the peak, where the curves are basically the same. For radiographs exposed with the Dixi system, perceptibility curves for processed radiographs are higher than those for originals for all exposures. Perceptibility curves show that for 8 bit radiographs obtained from the F1 iOX system, the contrast threshold was increased in processed radiographs up to the peak, while for 12 bit radiographs obtained with the Dixi system, the contrast threshold was increased in processed radiographs for all exposures. When comparisons were made between radiographs corrected for attenuation and

  10. Visual dot interaction with short-term memory.

    PubMed

    Etindele Sosso, Faustin Armel

    2017-06-01

    Many neurodegenerative diseases have a memory component. Brain structures related to memory are affected by environmental stimuli, and it is difficult to dissociate the effects of all neuronal behaviour. Here, the visual cortex of mice was stimulated with gratings and a dot, and neuronal activity was observed before and after stimulation. Bandwidth, firing rate, and orientation selectivity index were evaluated. A primary communication between primary visual cortex and short-term memory appeared to offer an interesting path for training cognitive circuitry and investigating the basic mechanisms of neuronal learning. The findings also suggested an interplay between primary visual cortex and short-term plasticity. The properties inside a visual target shape the perception and affect the basic encoding. Using the visual cortex, it may be possible to train memory and improve the recovery of people with cognitive disabilities or memory deficits.

  11. Affective and Cognitive Orientations in Intergroup Perception

    PubMed Central

    Wolf, Lukas J.; von Hecker, Ulrich; Maio, Gregory R.

    2017-01-01

    Three studies examined the role of need for affect (NFA) and need for cognition (NFC) in intergroup perception. We hypothesized that NFA predicts a preference for stereotypically warm groups over stereotypically cold groups, whereas NFC predicts a preference for stereotypically competent groups over stereotypically incompetent groups. Study 1 supported these hypotheses for attitudes toward stereotypically ambivalent groups, which are stereotyped as high on one of the trait dimensions (e.g., high warmth) and low on the other (e.g., low competence), but not for stereotypically univalent groups, which are seen as high or low on both dimensions. Studies 2 and 3 replicated this pattern for stereotypically ambivalent groups, and yielded provocative evidence regarding several putative mechanisms underlying these associations. Together, these findings help integrate and extend past evidence on attitude-relevant individual differences with research on intergroup perception. PMID:28903673

  12. Location perception: the X-Files parable.

    PubMed

    Prinzmetal, William

    2005-01-01

    Three aspects of visual object location were investigated: (1) how the visual system integrates information for locating objects, (2) how attention operates to affect location perception, and (3) how the visual system deals with locating an object when multiple objects are present. The theories were described in terms of a parable (the X-Files parable). Then, computer simulations were developed. Finally, predictions derived from the simulations were tested. In the scenario described in the parable, we ask how a system of detectors might locate an alien spaceship, how attention might be implemented in such a spaceship detection system, and how the presence of one spaceship might influence the location perception of another alien spaceship. Experiment 1 demonstrated that location information is integrated with a spatial average rule. In Experiment 2, this rule was applied to a more-samples theory of attention. Experiment 3 demonstrated how the integration rule could account for various visual illusions.
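    The spatial-average integration rule and the "more-samples" account of attention lend themselves to a small simulation. This is an illustrative sketch, not the authors' code: the noise level and sample counts below are hypothetical, and the only claim is the qualitative pattern that averaging more position samples yields a more precise location estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
true_pos = np.array([3.0, 5.0])   # true target location (arbitrary units)
noise_sd = 1.0                    # positional noise of each detector sample
n_trials = 10_000

def perceived_location(n_samples):
    # Spatial-average rule: the percept is the mean of noisy position samples.
    samples = true_pos + rng.normal(0.0, noise_sd, size=(n_trials, n_samples, 2))
    return samples.mean(axis=1)

unattended = perceived_location(n_samples=1)  # few samples without attention
attended = perceived_location(n_samples=9)    # "more samples" with attention

# Attention shrinks positional scatter roughly by 1/sqrt(n_samples),
# while the average perceived location stays unbiased.
print(unattended.std(axis=0), attended.std(axis=0))
```

    On this account, attention does not shift where an object is seen; it only reduces the variability of the location estimate, which is the pattern a more-samples theory predicts.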

  13. Conscious visual memory with minimal attention.

    PubMed

    Pinto, Yair; Vandenbroucke, Annelinde R; Otten, Marte; Sligte, Ilja G; Seth, Anil K; Lamme, Victor A F

    2017-02-01

    Is conscious visual perception limited to the locations that a person attends? The remarkable phenomenon of change blindness, which shows that people miss nearly all unattended changes in a visual scene, suggests the answer is yes. However, change blindness is found after visual interference (a mask or a new scene), so that subjects have to rely on working memory (WM), which has limited capacity, to detect the change. Before such interference, however, a much larger capacity store, called fragile memory (FM), which is easily overwritten by newly presented visual information, is present. Whether these different stores depend equally on spatial attention is central to the debate on the role of attention in conscious vision. In 2 experiments, we found that minimizing spatial attention almost entirely erases visual WM, as expected. Critically, FM remains largely intact. Moreover, minimally attended FM responses yield accurate metacognition, suggesting that conscious memory persists with limited spatial attention. Together, our findings help resolve the fundamental issue of how attention affects perception: Both visual consciousness and memory can be supported by only minimal attention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
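    Capacity in change-detection paradigms of this kind is commonly summarized with Cowan's K. The formula is standard in this literature, but the set size, hit rates, and false-alarm rates below are invented for illustration; they are not the study's data.

```python
# Cowan's K: a standard capacity estimate for change-detection tasks.
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Estimated number of items held in memory."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical pattern: fragile memory (FM) yields a larger capacity
# estimate than working memory (WM) at the same set size.
k_fm = cowan_k(set_size=8, hit_rate=0.85, false_alarm_rate=0.10)  # ≈ 6 items
k_wm = cowan_k(set_size=8, hit_rate=0.60, false_alarm_rate=0.15)  # ≈ 3.6 items
```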

  14. Cultural Differences in Affect Intensity Perception in the Context of Advertising

    PubMed Central

    Pogosyan, Marianna; Engelmann, Jan B.

    2011-01-01

    Cultural differences in the perception of positive affect intensity within an advertising context were investigated among American, Japanese, and Russian participants. Participants were asked to rate the intensity of facial expressions of positive emotions, which displayed either subtle, low intensity, or salient, high intensity expressions of positive affect. In agreement with previous findings from cross-cultural psychological research, current results demonstrate both cross-cultural agreement and differences in the perception of positive affect intensity across the three cultures. Specifically, American participants perceived high arousal (HA) images as significantly less calm than participants from the other two cultures, while the Japanese participants perceived low arousal (LA) images as significantly more excited than participants from the other cultures. The underlying mechanisms of these cultural differences were further investigated through difference scores that probed for cultural differences in perception and categorization of positive emotions. Findings indicate that rating differences are due to (1) perceptual differences in the extent to which HA images were discriminated from LA images, and (2) categorization differences in the extent to which facial expressions were grouped into affect intensity categories. Specifically, American participants revealed significantly higher perceptual differentiation between arousal levels of facial expressions in high and intermediate intensity categories. Japanese participants, on the other hand, did not discriminate between high and low arousal affect categories to the same extent as did the American and Russian participants. These findings indicate the presence of cultural differences in underlying decoding mechanisms of facial expressions of positive affect intensity. Implications of these results for global advertising are discussed. PMID:22084635

  16. Transcranial electrical stimulation of the occipital cortex during visual perception modifies the magnitude of BOLD activity: A combined tES-fMRI approach.

    PubMed

    Alekseichuk, Ivan; Diers, Kersten; Paulus, Walter; Antal, Andrea

    2016-10-15

    The aim of this study was to investigate whether the blood oxygenation level-dependent (BOLD) changes in the visual cortex can be used as biomarkers reflecting the online and offline effects of transcranial electrical stimulation (tES). Anodal transcranial direct current stimulation (tDCS) and 10 Hz transcranial alternating current stimulation (tACS) were applied for 10 min over the occipital cortex of healthy adults during the presentation of different visual stimuli, using a crossover, double-blinded design. Control experiments were also performed, in which sham stimulation as well as another electrode montage were used. Anodal tDCS over the visual cortex induced a small but significant further increase in the BOLD response evoked by a visual stimulus; however, no aftereffect was observed. The 10 Hz tACS did not result in an online effect, but in a widespread offline BOLD decrease over the occipital, temporal, and frontal areas. These findings demonstrate that tES during visual perception affects neuronal metabolism, which can be detected with functional magnetic resonance imaging (fMRI). Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Borderline personality disorder symptoms and affective responding to perceptions of rejection and acceptance from romantic versus nonromantic partners.

    PubMed

    Lazarus, Sophie A; Scott, Lori N; Beeney, Joseph E; Wright, Aidan G C; Stepp, Stephanie D; Pilkonis, Paul A

    2018-05-01

    We examined event-contingent recording of daily interpersonal interactions in a diagnostically diverse sample of 101 psychiatric outpatients who were involved in a romantic relationship. We tested whether the unique effect of borderline personality disorder (BPD) symptoms on affective responses (i.e., hostility, sadness, guilt, fear, and positive affect) to perceptions of rejection or acceptance differed with one's romantic partner compared with nonromantic partners. BPD symptoms were associated with more frequent perceptions of rejection and less frequent perceptions of acceptance across the study. For all participants, perceptions of rejecting behavior were associated with higher within-person negative affect and lower within-person positive affect. As predicted, in interactions with romantic partners only, those with high BPD symptoms reported heightened hostility and, to a lesser extent, attenuated sadness in response to perceptions of rejection. BPD symptoms did not moderate associations between perceptions of rejection and guilt, fear, or positive affect across romantic and nonromantic partners. For all participants, perceived acceptance was associated with lower within-person negative affect and higher within-person positive affect. However, BPD symptoms were associated with attenuated positive affect in response to perceptions of accepting behavior in interactions with romantic partners only. BPD symptoms did not moderate associations between perceptions of acceptance and any of the negative affects across romantic and nonromantic partners. This study highlights the specificity of affective responses characteristic of BPD when comparisons are made with patients with other personality and psychiatric disorders. Implications for romantic relationship dysfunction are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. Early Binocular Input Is Critical for Development of Audiovisual but Not Visuotactile Simultaneity Perception.

    PubMed

    Chen, Yi-Chuan; Lewis, Terri L; Shore, David I; Maurer, Daphne

    2017-02-20

    Temporal simultaneity provides an essential cue for integrating multisensory signals into a unified perception. Early visual deprivation, in both animals and humans, leads to abnormal neural responses to audiovisual signals in subcortical and cortical areas [1-5]. Behavioral deficits in integrating complex audiovisual stimuli in humans are also observed [6, 7]. It remains unclear whether early visual deprivation affects visuotactile perception similarly to audiovisual perception and whether the consequences for either pairing differ after monocular versus binocular deprivation [8-11]. Here, we evaluated the impact of early visual deprivation on the perception of simultaneity for audiovisual and visuotactile stimuli in humans. We tested patients born with dense cataracts in one or both eyes that blocked all patterned visual input until the cataractous lenses were removed and the affected eyes fitted with compensatory contact lenses (mean duration of deprivation = 4.4 months; range = 0.3-28.8 months). Both monocularly and binocularly deprived patients demonstrated lower precision in judging audiovisual simultaneity. However, qualitatively different outcomes were observed for the two patient groups: the performance of monocularly deprived patients matched that of young children at immature stages, whereas that of binocularly deprived patients did not match any stage in typical development. Surprisingly, patients performed normally in judging visuotactile simultaneity after either monocular or binocular deprivation. Therefore, early binocular input is necessary to develop normal neural substrates for simultaneity perception of visual and auditory events but not visual and tactile events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Does the perception that stress affects health matter? The association with health and mortality.

    PubMed

    Keller, Abiola; Litzelman, Kristin; Wisk, Lauren E; Maddox, Torsheika; Cheng, Erika Rose; Creswell, Paul D; Witt, Whitney P

    2012-09-01

    This study sought to examine the relationship among the amount of stress, the perception that stress affects health, and health and mortality outcomes in a nationally representative sample of U.S. adults. Data from the 1998 National Health Interview Survey were linked to prospective National Death Index mortality data through 2006. Separate logistic regression models were used to examine the factors associated with current health status and psychological distress. Cox proportional hazard models were used to determine the impact of perceiving that stress affects health on all-cause mortality. Each model specifically examined the interaction between the amount of stress and the perception that stress affects health, controlling for sociodemographic, health behavior, and access to health care factors. 33.7% of nearly 186 million (unweighted n = 28,753) U.S. adults perceived that stress affected their health a lot or to some extent. Both higher levels of reported stress and the perception that stress affects health were independently associated with an increased likelihood of worse health and mental health outcomes. The amount of stress and the perception that stress affects health interacted such that those who reported a lot of stress and that stress impacted their health a lot had a 43% increased risk of premature death (HR = 1.43, 95% CI [1.2, 1.7]). High amounts of stress and the perception that stress impacts health are each associated with poor health and mental health. Individuals who perceived that stress affects their health and reported a large amount of stress had an increased risk of premature death. PsycINFO Database Record (c) 2012 APA, all rights reserved.
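    The headline numbers can be unpacked with the standard Cox-model relationship HR = exp(β). A minimal sketch using only the hazard ratio and confidence interval reported above (the mapping is standard; nothing here is fitted to the study's data):

```python
import math

hr = 1.43                            # reported hazard ratio
beta = math.log(hr)                  # Cox coefficient on the log-hazard scale
increased_risk_pct = (hr - 1) * 100  # 43% increased risk of premature death

# The 95% CI of [1.2, 1.7] on the HR scale maps back to the beta scale:
ci_beta = (math.log(1.2), math.log(1.7))

print(f"beta = {beta:.3f}, risk increase = {increased_risk_pct:.0f}%")
```

    Because the CI on the hazard-ratio scale excludes 1 (equivalently, the CI on the beta scale excludes 0), the interaction effect is statistically reliable at the 95% level.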

  20. Using Japanese Onomatopoeias to Explore Perceptual Dimensions in Visual Material Perception.

    PubMed

    Hanada, Mitsuhiko

    2016-01-28

    This study examined the perceptual dimensions of visual material properties. Photographs of 50 objects were presented to the participants, and they reported a suitable onomatopoeia (mimetic word) for describing the material of the object in each photograph, based on visual appearance. The participants' responses were collated into a contingency table of photographs × onomatopoeias. After removing some items from the table, correspondence analysis was applied to the contingency table, and a six-dimensional biplot was obtained. By rotating the axes to maximize sparseness of the coordinates for the items in the biplot, three meaningful perceptual dimensions were derived: wetness/stickiness, fluffiness/softness, and smoothness-roughness/gloss-dullness. Two additional possible dimensions were obtained: crumbliness and coldness. These dimensions, except gloss-dullness, were paid little attention to in vision science, though they were suggested as perceptual dimensions of tactile texture. This suggests that the perceptual dimensions that are considered to be primarily related to haptics are also important in visual material perception. © The Author(s) 2016.
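    Correspondence analysis of a photographs-by-onomatopoeias contingency table can be sketched with an SVD of the standardized residuals. The 4 × 3 toy table below is invented for illustration (the study used 50 photographs and many onomatopoeias); only the method is sketched, and the sparseness-maximizing axis rotation is omitted.

```python
import numpy as np

def correspondence_analysis(N, k=2):
    """Correspondence analysis of a contingency table N (rows x columns)."""
    P = N / N.sum()                        # correspondence matrix
    r = P.sum(axis=1)                      # row masses
    c = P.sum(axis=0)                      # column masses
    # Standardized residuals: (P_ij - r_i * c_j) / sqrt(r_i * c_j)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    # Principal coordinates of rows and columns for a biplot
    rows = (U * sv) / np.sqrt(r)[:, None]
    cols = (Vt.T * sv) / np.sqrt(c)[:, None]
    return rows[:, :k], cols[:, :k], sv

# Toy table: 4 photographs x 3 onomatopoeia categories (invented counts)
N = np.array([[10, 2, 1],
              [8, 3, 2],
              [1, 9, 7],
              [2, 8, 9]], dtype=float)
row_coords, col_coords, sv = correspondence_analysis(N)
```

    Rows with similar onomatopoeia profiles (here, the first two photographs) land close together on the leading biplot dimension; rotating the resulting axes to maximize coordinate sparseness, as the study did, is a separate post-processing step.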

  1. Self-estimation of physical ability in stepping over an obstacle is not mediated by visual height perception: a comparison between young and older adults.

    PubMed

    Sakurai, Ryota; Fujiwara, Yoshinori; Ishihara, Masami; Yasunaga, Masashi; Ogawa, Susumu; Suzuki, Hiroyuki; Imanaka, Kuniyasu

    2017-07-01

    Older adults tend to overestimate their step-over ability. However, it is unclear as to whether this is caused by inaccurate self-estimation of physical ability or inaccurate perception of height. We, therefore, measured both visual height perception ability and self-estimation of step-over ability among young and older adults. Forty-seven older and 16 young adults performed a height perception test (HPT) and a step-over test (SOT). Participants visually judged the height of vertical bars from distances of 7 and 1 m away in the HPT, then self-estimated and, subsequently, actually performed a step-over action in the SOT. The results showed no significant difference between young and older adults in visual height perception. In the SOT, young adults tended to underestimate their step-over ability, whereas older adults either overestimated their abilities or underestimated them to a lesser extent than did the young adults. Moreover, visual height perception was not correlated with the self-estimation of step-over ability in both young and older adults. These results suggest that the self-overestimation of step-over ability which appeared in some healthy older adults may not be caused by the nature of visual height perception, but by other factor(s), such as the likely age-related nature of self-estimation of physical ability, per se.

  2. Visual perception of axes of head rotation

    PubMed Central

    Arnoldussen, D. M.; Goossens, J.; van den Berg, A. V.

    2013-01-01

    Registration of ego-motion is important to accurately navigate through space. Movements of the head and eye relative to space are registered through the vestibular system and optical flow, respectively. Here, we address three questions concerning the visual registration of self-rotation. (1) Eye-in-head movements provide a link between the motion signals received by sensors in the moving eye and sensors in the moving head. How are these signals combined into an ego-rotation percept? We combined optic flow of simulated forward and rotational motion of the eye with different levels of eye-in-head rotation for a stationary head. We dissociated simulated gaze rotation and head rotation by different levels of eye-in-head pursuit. We found that perceived rotation matches simulated head- not gaze-rotation. This rejects a model for perceived self-rotation that relies on the rotation of the gaze line. Rather, eye-in-head signals serve to transform the optic flow's rotation information, that specifies rotation of the scene relative to the eye, into a rotation relative to the head. This suggests that transformed visual self-rotation signals may combine with vestibular signals. (2) Do transformed visual self-rotation signals reflect the arrangement of the semi-circular canals (SCC)? Previously, we found sub-regions within MST and V6+ that respond to the speed of the simulated head rotation. Here, we re-analyzed those blood oxygenation level-dependent (BOLD) signals for the presence of a spatial dissociation related to the axes of visually simulated head rotation, such as has been found in sub-cortical regions of various animals. In contrast, we found a rather uniform BOLD response to simulated rotation along the three SCC axes. (3) We investigated whether subjects' sensitivity to the direction of the head rotation axis shows SCC-axis specificity.
We found that sensitivity to head rotation is rather uniformly distributed, suggesting that in human cortex, visuo-vestibular integration is

  3. The role of prestimulus activity in visual extinction.

    PubMed

    Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl

    2013-07-01

    Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. Copyright © 2013 The Authors. Published by Elsevier Ltd.. All rights reserved.

  4. Visual variability affects early verb learning.

    PubMed

    Twomey, Katherine E; Lush, Lauren; Pearce, Ruth; Horst, Jessica S

    2014-09-01

    Research demonstrates that within-category visual variability facilitates noun learning; however, the effect of visual variability on verb learning is unknown. We habituated 24-month-old children to a novel verb paired with an animated star-shaped actor. Across multiple trials, children saw either a single action from an action category (identical actions condition, for example, travelling while repeatedly changing into a circle shape) or multiple actions from that action category (variable actions condition, for example, travelling while changing into a circle shape, then a square shape, then a triangle shape). Four test trials followed habituation. One paired the habituated verb with a new action from the habituated category (e.g., 'dacking' + pentagon shape) and one with a completely novel action (e.g., 'dacking' + leg movement). The others paired a new verb with a new same-category action (e.g., 'keefing' + pentagon shape), or a completely novel category action (e.g., 'keefing' + leg movement). Although all children discriminated novel verb/action pairs, children in the identical actions condition discriminated trials that included the completely novel verb, while children in the variable actions condition discriminated the out-of-category action. These data suggest that - as in noun learning - visual variability affects verb learning and children's ability to form action categories. © 2014 The British Psychological Society.

  5. Distortions of Subjective Time Perception Within and Across Senses

    PubMed Central

    van Wassenhove, Virginie; Buonomano, Dean V.; Shimojo, Shinsuke; Shams, Ladan

    2008-01-01

    Background The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information and was never perceived shorter than their actual durations. Conclusions/Significance These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration can neither be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions. PMID:18197248

  6. Intermodal Perception of Affect Expressions by Infants.

    ERIC Educational Resources Information Center

    Walker, Arlene

    Four experiments (E1, E2, E3 and E4) investigated whether or not 5- to 7-month-old infants could detect auditory-visual relationships in audiovisual presentations of affective expressions, thereby perceiving the bimodally-presented expressions as unitary events. In E1, 16 infants were simultaneously shown two 2-minute films of a "happy"…

  7. Visual adaptation of the perception of "life": animacy is a basic perceptual dimension of faces.

    PubMed

    Koldewyn, Kami; Hanus, Patricia; Balas, Benjamin

    2014-08-01

    One critical component of understanding another's mind is the perception of "life" in a face. However, little is known about the cognitive and neural mechanisms underlying this perception of animacy. Here, using a visual adaptation paradigm, we ask whether face animacy is (1) a basic dimension of face perception and (2) supported by a common neural mechanism across distinct face categories defined by age and species. Observers rated the perceived animacy of adult human faces before and after adaptation to (1) adult faces, (2) child faces, and (3) dog faces. When testing the perception of animacy in human faces, we found significant adaptation to both adult and child faces, but not dog faces. We did, however, find significant adaptation when morphed dog images and dog adaptors were used. Thus, animacy perception in faces appears to be a basic dimension of face perception that is species specific but not constrained by age categories.

  8. Contributions of Invariants, Heuristics, and Exemplars to the Visual Perception of Relative Mass

    ERIC Educational Resources Information Center

    Cohen, Andrew L.

    2006-01-01

    Some potential contributions of invariants, heuristics, and exemplars to the perception of dynamic properties in the colliding balls task were explored. On each trial, an observer is asked to determine the heavier of 2 colliding balls. The invariant approach assumes that people can learn to detect complex visual patterns that reliably specify…

  9. The influence of visual and tactile perception on hand control in children with Duchenne muscular dystrophy.

    PubMed

    Troise, Denise; Yoneyama, Simone; Resende, Maria Bernadette; Reed, Umbertina; Xavier, Gilberto Fernando; Hasue, Renata

    2014-09-01

    The aim was to investigate tactile perception and manual dexterity, with or without visual feedback, in males with Duchenne muscular dystrophy (DMD). Forty males with DMD (mean age 9 y 8 mo, SD 2 y 3 mo; range 5-14 y), recruited from the teaching hospital of the School of Medicine of the University of São Paulo, with disease severity graded as '1' to '6' on the Vignos Scale and '1' on Brooke's Scale, and 49 healthy males (mean age 8 y 2 mo, SD 1 y 11 mo; range 5-11 y), recruited from a local education center, participated in the study. We assessed tactile perception using two-point discrimination and stereognosis tests, and manual dexterity using the Pick-Up test with the eyes either open or closed. Analysis of variance was used to compare groups; a p value of less than 0.05 was considered statistically significant. Males with DMD exhibited no impairment in tactile perception, as measured by the two-point discrimination test and the number of objects correctly named in the stereognosis test. However, manipulation during stereognosis was significantly slower with both hands (p<0.001), and manual dexterity was much worse in males with DMD when there was no visual feedback (p<0.001). Males with DMD exhibited disturbances in manipulation during stereognosis and dexterity tests. Hand control was highly dependent on visual information rather than on tactile perception. Motor dysfunction in males with DMD, therefore, might be related to altered neural control. © 2014 Mac Keith Press.
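    The group comparison reported above (analysis of variance, α = 0.05) can be sketched as follows. The manipulation times are invented solely to illustrate the analysis pattern; they are not the study's measurements.

```python
from scipy.stats import f_oneway

# Hypothetical stereognosis manipulation times (seconds)
dmd_times = [14.2, 15.1, 13.8, 16.0, 14.9, 15.5]   # DMD group
control_times = [9.8, 10.4, 9.1, 10.9, 9.5, 10.2]  # control group

# One-way ANOVA; with two groups this is equivalent to a t test (F = t^2)
f_stat, p_value = f_oneway(dmd_times, control_times)
print(f"F = {f_stat:.1f}, p = {p_value:.2g}")
if p_value < 0.05:
    print("Group difference is statistically significant")
```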

  10. The contribution of visual and proprioceptive information to the perception of leaning in a dynamic motorcycle simulator.

    PubMed

    Lobjois, Régis; Dagonneau, Virginie; Isableu, Brice

    2016-11-01

    Compared with driving or flight simulation, little is known about self-motion perception in riding simulation. The goal of this study was to examine whether or not continuous roll motion supports the sensation of leaning into bends in dynamic motorcycle simulation. To this end, riders were able to freely tune the visual scene and/or motorcycle simulator roll angle to find a pattern that matched their prior knowledge. Our results revealed idiosyncrasy in the combination of visual and proprioceptive information. Some subjects relied more on the visual dimension, but reported increased sickness symptoms with the visual roll angle. Others relied more on proprioceptive information, tuning the direction of the visual scenery to match three possible patterns. Our findings also showed that these two subgroups tuned the motorcycle simulator roll angle in a similar way. This suggests that sustained inertially specified roll motion has contributed to the sensation of leaning in spite of the occurrence of unexpected gravito-inertial stimulation during the tilt. Several hypotheses are discussed. Practitioner Summary: Self-motion perception in motorcycle simulation is a relatively new research area. We examined how participants combined visual and proprioceptive information. Findings revealed individual differences in the visual dimension. However, participants tuned the simulator roll angle similarly, supporting the hypothesis that sustained inertially specified roll motion contributes to a leaning sensation.

  11. When writing impairs reading: letter perception's susceptibility to motor interference.

    PubMed

    James, Karin H; Gauthier, Isabel

    2009-08-01

    The effect of writing on the concurrent visual perception of letters was investigated in a series of studies using an interference paradigm. Participants drew shapes and letters while simultaneously visually identifying letters and shapes embedded in noise. Experiments 1-3 demonstrated that letter perception, but not the perception of shapes, was affected by motor interference. This suggests a strong link between the perception of letters and the neural substrates engaged during writing. The overlap both in category (letter vs. shape) and in the perceptual similarity of the features (straight vs. curvy) of the seen and drawn items determined the amount of interference. Experiment 4 demonstrated that intentional production of letters is not necessary for the interference to occur, because passive movement of the hand in the shape of letters also interfered with letter perception. When passive movements were used, however, only the category of the drawn items (letters vs. shapes), but not the perceptual similarity, had an influence, suggesting that motor representations for letters may selectively influence visual perception of letters through proprioceptive feedback, with an additional influence of perceptual similarity that depends on motor programs.

  12. Gait adaptability training is affected by visual dependency.

    PubMed

    Brady, Rachel A; Peters, Brian T; Batson, Crystal D; Ploutz-Snyder, Robert; Mulavara, Ajitkumar P; Bloomberg, Jacob J

    2012-07-01

    As part of a larger gait adaptability training study, we designed a program that presented combinations of visual flow and support-surface manipulations to investigate the response of healthy adults to walking on a treadmill in novel discordant sensorimotor conditions. A visual dependence score was determined for each subject, and this score was used to explore how visual dependency was linked to locomotor performance (1) during three training sessions and (2) in a new discordant environment presented at the conclusion of training. Performance measures included reaction time (RT), stride frequency (SF), and heart rate (HR), which respectively served as indicators of cognitive load, postural stability, and anxiety. We hypothesized that training would affect performance measures differently for highly visually dependent individuals than for their less visually dependent counterparts. A seemingly unrelated estimation analysis of RT, SF, and HR revealed a significant omnibus interaction of visual dependency by session (p < 0.001), suggesting that the magnitude of differences in these measures across training day 1 (TD1), training day 3 (TD3), and exposure to a novel test is dependent on subjects' levels of visual dependency. The RT result, in particular, suggested that highly visually dependent subjects successfully trained to one set of sensory discordant conditions but were unable to apply their adapted skills when introduced to a new sensory discordant environment. This finding augments the rationale for developing customized gait training programs that are tailored to an individual. It highlights one factor, personal level of visual dependency, to consider when designing training conditions for a subject or patient. Finally, the link between visual dependency and locomotor performance may offer predictive insight regarding which subjects in a normal population will require more training when preparing for specific novel locomotor conditions.

  13. Effect of drivers' age and push button locations on visual time off road, steering wheel deviation and safety perception.

    PubMed

    Dukic, T; Hanson, L; Falkmer, T

    2006-01-15

    The study examined the effects of manual control locations on two groups of randomly selected young and old drivers in relation to visual time off road, steering wheel deviation and safety perception. Measures of visual time off road, steering wheel deviations and safety perception were performed with young and old drivers during real traffic. The results showed an effect of both driver's age and button location on the dependent variables. Older drivers spent longer visual time off road when pushing the buttons and had larger steering wheel deviations. Moreover, the greater the eccentricity between the normal line of sight and the button locations, the longer the visual time off road and the larger the steering wheel deviations. No interaction effect between button location and age was found with regard to visual time off road. Button location had an effect on perceived safety: the further away from the normal line of sight the lower the rating.

  14. Chromatic and achromatic visual fields in relation to choroidal thickness in patients with high myopia: A pilot study.

    PubMed

    García-Domene, M C; Luque, M J; Díez-Ajenjo, M A; Desco-Esteban, M C; Artigas, J M

    2018-02-01

    To analyse the relationship between choroidal thickness and the visual perception of patients with high myopia but without retinal damage. All patients underwent ophthalmic evaluation including a slit lamp examination and dilated ophthalmoscopy, subjective refraction, best corrected visual acuity, axial length, optical coherence tomography, contrast sensitivity function and sensitivity of the visual pathways. We included eleven eyes of subjects with high myopia. There were statistically significant correlations between choroidal thickness and almost all the contrast sensitivity values. The sensitivity of the magnocellular and koniocellular pathways is the most affected, and the homogeneity of the sensitivity of the magnocellular pathway depends on the choroidal thickness; when the thickness decreases, the sensitivity impairment extends from the center to the periphery of the visual field. Patients with high myopia without any fundus changes have visual impairments. We have found that choroidal thickness correlates with perceptual parameters such as contrast sensitivity or mean defect and pattern standard deviation of the visual fields of some visual pathways. Our study shows that the magnocellular and koniocellular pathways are the most affected, so that these patients have impairment in motion perception and blue-yellow contrast perception. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  15. Physical and Visual Accessibilities in Intensive Care Units: A Comparative Study of Open-Plan and Racetrack Units.

    PubMed

    Rashid, Mahbub; Khan, Nayma; Jones, Belinda

    2016-01-01

    This study compared physical and visual accessibilities and their associations with staff perception and interaction behaviors in 2 intensive care units (ICUs) with open-plan and racetrack layouts. For the study, physical and visual accessibilities were measured using the spatial analysis techniques of Space Syntax. Data on staff perception were collected from 81 clinicians using a questionnaire survey. The locations of 2233 interactions, and the location and length of another 339 interactions in these units, were collected using systematic field observation techniques. According to the study, physical and visual accessibilities were different in the 2 ICUs, and clinicians' primary workspaces were physically and visually more accessible in the open-plan ICU. Physical and visual accessibilities affected how well clinicians knew their peers and where their peers were located in these units. Physical and visual accessibilities also affected clinicians' perception of interaction and communication and of teamwork and collaboration in these units. Additionally, physical and visual accessibilities showed significant positive associations with interaction behaviors in these units, with the open-plan ICU showing stronger associations. However, physical accessibilities were less important than visual accessibilities in relation to interaction behaviors in these ICUs. The implications of these findings for ICU design are discussed.
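Space Syntax accessibility measures of the kind used above are typically derived from shortest-path step depths on a spatial adjacency graph (lower mean depth means a space is more integrated and accessible). A minimal sketch follows; the room names and layout are hypothetical, not from the study.

```python
from collections import deque

def mean_depth(graph, start):
    """Average shortest-path (step) depth from one space to all others,
    the quantity underlying Space Syntax integration measures."""
    depths = {start: 0}
    queue = deque([start])
    while queue:  # breadth-first search over the adjacency graph
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in depths:
                depths[neighbour] = depths[node] + 1
                queue.append(neighbour)
    others = [d for n, d in depths.items() if n != start]
    return sum(others) / len(others)

# Hypothetical room-adjacency graph for a small unit
layout = {
    "station": ["corridor"],
    "corridor": ["station", "bed1", "bed2"],
    "bed1": ["corridor"],
    "bed2": ["corridor"],
}
```

Here the corridor, adjacent to every other space, has mean depth 1.0, while each bed space is deeper and therefore less accessible, mirroring the kind of comparison the study draws between unit layouts.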

  16. Internal state of monkey primary visual cortex (V1) predicts figure-ground perception.

    PubMed

    Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A F

    2003-04-15

    When stimulus information enters the visual cortex, it is rapidly processed for identification. However, sometimes the processing of the stimulus is inadequate and the subject fails to notice the stimulus. Human psychophysical studies show that this occurs during states of inattention or absent-mindedness. At a neurophysiological level, it remains unclear what these states are. To study the role of cortical state in perception, we analyzed neural activity in the monkey primary visual cortex before the appearance of a stimulus. We show that, before the appearance of a reported stimulus, neural activity was stronger and more correlated than for a not-reported stimulus. This indicates that the strength of neural activity and the functional connectivity between neurons in the primary visual cortex participate in the perceptual processing of stimulus information. Thus, to detect a stimulus, the visual cortex needs to be in an appropriate state.

  17. Visual-vestibular integration motion perception reporting

    NASA Technical Reports Server (NTRS)

    Harm, Deborah L.; Reschke, Millard R.; Parker, Donald E.

    1999-01-01

    Self-orientation and self/surround-motion perception derive from a multimodal sensory process that integrates information from the eyes, vestibular apparatus, proprioceptive and somatosensory receptors. Results from short and long duration spaceflight investigations indicate that: (1) perceptual and sensorimotor function was disrupted during the initial exposure to microgravity and gradually improved over hours to days (individuals adapt), (2) the presence and/or absence of information from different sensory modalities differentially affected the perception of orientation, self-motion and surround-motion, (3) perceptual and sensorimotor function was initially disrupted upon return to Earth-normal gravity and gradually recovered to preflight levels (individuals readapt), and (4) the longer the exposure to microgravity, the more complete the adaptation, the more profound the postflight disturbances, and the longer the recovery period to preflight levels. While much has been learned about perceptual and sensorimotor reactions and adaptation to microgravity, much remains to be learned about the mechanisms underlying the adaptive changes, and about how intersensory interactions affect perceptual and sensorimotor function during voluntary movements. During space flight, space motion sickness (SMS) and perceptual disturbances have led to reductions in performance efficiency and sense of well-being. During entry and immediately after landing, such disturbances could have a serious impact on the ability of the commander to land the Orbiter and on the ability of all crew members to egress from the Orbiter, particularly in a non-nominal condition or following extended stays in microgravity. An understanding of spatial orientation and motion perception is essential for developing countermeasures for SMS and perceptual disturbances during spaceflight and upon return to Earth. Countermeasures for optimal performance in flight and a successful return to Earth require such an understanding.

  18. Fluctuations of visual awareness: Combining motion-induced blindness with binocular rivalry

    PubMed Central

    Jaworska, Katarzyna; Lages, Martin

    2014-01-01

    Binocular rivalry (BR) and motion-induced blindness (MIB) are two phenomena of visual awareness where perception alternates between multiple states despite constant retinal input. Both phenomena have been extensively studied, but the underlying processing remains unclear. It has been suggested that BR and MIB involve the same neural mechanism, but how the two phenomena compete for visual awareness in the same stimulus has not been systematically investigated. Here we introduce BR in a dichoptic stimulus display that can also elicit MIB and examine fluctuations of visual awareness over the course of each trial. Exploiting this paradigm we manipulated stimulus characteristics that are known to influence MIB and BR. In two experiments we found that effects on multistable percepts were incompatible with the idea of a common oscillator. The results suggest instead that local and global stimulus attributes can affect the dynamics of each percept differently. We conclude that the two phenomena of visual awareness share basic temporal characteristics but are most likely influenced by processing at different stages within the visual system. PMID:25240063

  19. A review of visual perception mechanisms that regulate rapid adaptive camouflage in cuttlefish.

    PubMed

    Chiao, Chuan-Chin; Chubb, Charles; Hanlon, Roger T

    2015-09-01

    We review recent research on the visual mechanisms of rapid adaptive camouflage in cuttlefish. These neurophysiologically complex marine invertebrates can camouflage themselves against almost any background, yet their ability to quickly (0.5-2 s) alter their body patterns on different visual backgrounds poses a vexing challenge: how to pick the correct body pattern amongst their repertoire. The ability of cuttlefish to change appropriately requires a visual system that can rapidly assess complex visual scenes and produce the motor responses-the neurally controlled body patterns-that achieve camouflage. Using specifically designed visual backgrounds and assessing the corresponding body patterns quantitatively, we and others have uncovered several aspects of scene variation that are important in regulating cuttlefish patterning responses. These include spatial scale of background pattern, background intensity, background contrast, object edge properties, object contrast polarity, object depth, and the presence of 3D objects. Moreover, arm postures and skin papillae are also regulated visually for additional aspects of concealment. By integrating these visual cues, cuttlefish are able to rapidly select appropriate body patterns for concealment throughout diverse natural environments. This sensorimotor approach of studying cuttlefish camouflage thus provides unique insights into the mechanisms of visual perception in an invertebrate image-forming eye.

  20. Dynamic facial expressions evoke distinct activation in the face perception network: a connectivity analysis study.

    PubMed

    Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl

    2012-02-01

    Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.

  1. Making memories: the development of long-term visual knowledge in children with visual agnosia.

    PubMed

    Metitieri, Tiziana; Barba, Carmen; Pellacani, Simona; Viggiano, Maria Pia; Guerrini, Renzo

    2013-01-01

    There are few reports about the effects of perinatal acquired brain lesions on the development of visual perception. These studies demonstrate nonseverely impaired visual-spatial abilities and preserved visual memory. Longitudinal data analyzing the effects of compromised perceptions on long-term visual knowledge in agnosics are limited to lesions having occurred in adulthood. The study of children with focal lesions of the visual pathways provides a unique opportunity to assess the development of visual memory when perceptual input is degraded. We assessed visual recognition and visual memory in three children with lesions to the visual cortex having occurred in early infancy. We then explored the time course of visual memory impairment in two of them at 2 years and 3.7 years from the initial assessment. All children exhibited apperceptive visual agnosia and visual memory impairment. We observed a longitudinal improvement of visual memory modulated by the structural properties of objects. Our findings indicate that processing of degraded perceptions from birth results in impoverished memories. The dynamic interaction between perception and memory during development might modulate the long-term construction of visual representations, resulting in less severe impairment.

  2. Making Memories: The Development of Long-Term Visual Knowledge in Children with Visual Agnosia

    PubMed Central

    Barba, Carmen; Pellacani, Simona; Viggiano, Maria Pia; Guerrini, Renzo

    2013-01-01

    There are few reports about the effects of perinatal acquired brain lesions on the development of visual perception. These studies demonstrate nonseverely impaired visual-spatial abilities and preserved visual memory. Longitudinal data analyzing the effects of compromised perceptions on long-term visual knowledge in agnosics are limited to lesions having occurred in adulthood. The study of children with focal lesions of the visual pathways provides a unique opportunity to assess the development of visual memory when perceptual input is degraded. We assessed visual recognition and visual memory in three children with lesions to the visual cortex having occurred in early infancy. We then explored the time course of visual memory impairment in two of them at 2 years and 3.7 years from the initial assessment. All children exhibited apperceptive visual agnosia and visual memory impairment. We observed a longitudinal improvement of visual memory modulated by the structural properties of objects. Our findings indicate that processing of degraded perceptions from birth results in impoverished memories. The dynamic interaction between perception and memory during development might modulate the long-term construction of visual representations, resulting in less severe impairment. PMID:24319599

  3. Visual masking and the dynamics of human perception, cognition, and consciousness: a century of progress, a contemporary synthesis, and future directions.

    PubMed

    Ansorge, Ulrich; Francis, Gregory; Herzog, Michael H; Oğmen, Haluk

    2008-07-15

    The 1990s, the "decade of the brain," witnessed major advances in the study of visual perception, cognition, and consciousness. Impressive techniques in neurophysiology, neuroanatomy, neuropsychology, electrophysiology, psychophysics and brain-imaging were developed to address how the nervous system transforms and represents visual inputs. Many of these advances have dealt with the steady-state properties of processing. To complement this "steady-state approach," more recent research emphasized the importance of dynamic aspects of visual processing. Visual masking has been a paradigm of choice for more than a century when it comes to the study of dynamic vision. A recent workshop (http://lpsy.epfl.ch/VMworkshop/), held in Delmenhorst, Germany, brought together an international group of researchers to present state-of-the-art research on dynamic visual processing with a focus on visual masking. This special issue presents peer-reviewed contributions by the workshop participants and provides a contemporary synthesis of how visual masking can inform the dynamics of human perception, cognition, and consciousness.

  4. A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure.

    PubMed

    Zhou, Yang; Wu, Dewei

    2016-01-01

    Generating visual place cells (VPCs) is an important problem in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and existing methods for generating VPCs, this paper abstracts a model of generating visual place cells based on environment perception and a similarity measure. The VPC generation process is divided into three phases: environment perception, similarity measure, and recruitment of a new place cell. Following this process, a specific method for generating VPCs is presented. External reference landmarks are obtained from local invariant image features, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and VPC firing fields can be adjusted flexibly by changing the adjustment factor of the firing field (AFFF) and the firing rate threshold (FRT).
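A Gaussian-of-Euclidean-distance similarity measure with threshold-based recruitment, as described above, can be sketched as follows. The function names, the recruitment rule details, and the parameter values are illustrative assumptions, not taken from the paper.

```python
import math

def similarity(v1, v2, sigma=1.0):
    # Gaussian function of the Euclidean distance between two
    # observation vectors (1.0 for identical inputs, decaying with distance)
    d2 = sum((a - b) ** 2 for a, b in zip(v1, v2))
    return math.exp(-d2 / (2 * sigma ** 2))

def update_place_cells(cells, observation, threshold=0.5, sigma=1.0):
    # Recruit a new place cell only when no stored cell responds to the
    # current observation above the firing-rate threshold (assumed rule)
    if not cells or max(similarity(c, observation, sigma) for c in cells) < threshold:
        cells.append(observation)
    return cells

cells = []
for obs in [(0.0, 0.0), (0.1, 0.0), (5.0, 0.0)]:
    update_place_cells(cells, obs)
```

In this sketch, raising `sigma` widens each cell's firing field and lowering `threshold` makes recruitment rarer, loosely mirroring the roles the abstract assigns to the AFFF and FRT parameters.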

  5. A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure

    PubMed Central

    2016-01-01

    Generating visual place cells (VPCs) is an important problem in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and existing methods for generating VPCs, this paper abstracts a model of generating visual place cells based on environment perception and a similarity measure. The VPC generation process is divided into three phases: environment perception, similarity measure, and recruitment of a new place cell. Following this process, a specific method for generating VPCs is presented. External reference landmarks are obtained from local invariant image features, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and VPC firing fields can be adjusted flexibly by changing the adjustment factor of the firing field (AFFF) and the firing rate threshold (FRT). PMID:27597859

  6. Developmental study of visual perception of handwriting movement: influence of motor competencies?

    PubMed

    Bidet-Ildei, Christel; Orliaguet, Jean-Pierre

    2008-07-25

    This paper investigates the influence of motor competencies on the visual perception of human movements in 6- to 10-year-old children. To this end, we compared the kinematics of actually performed and perceptually preferred handwriting movements. The children's two tasks were (1) to write the letter e on a digitizer (handwriting task) and (2) to adjust the velocity of an e displayed on a screen so that it corresponded to their preferred velocity (perceptive task). In both tasks, the size of the letter (from 3.4 to 54.02 cm) differed on each trial. Results showed that, irrespective of age and task, total movement time conformed to the isochrony principle, i.e., the tendency to maintain a constant movement duration across changes in amplitude. Concerning movement speed, however, there was no developmental correspondence between the results obtained in the motor and perceptive tasks. In the handwriting task, movement time decreased with age, but no effect of age was observed in the perceptive task. Therefore, perceptual preference for handwriting movement in children cannot be strictly interpreted in terms of motor-perceptual coupling.

  7. Visual motion disambiguation by a subliminal sound.

    PubMed

    Dufour, Andre; Touzalin, Pascale; Moessinger, Michèle; Brochard, Renaud; Després, Olivier

    2008-09-01

    There is growing interest in the effect of sound on visual motion perception. One model involves the illusion created when two identical objects moving towards each other on a two-dimensional visual display can be seen to either bounce off or stream through each other. Previous studies show that the large bias normally seen toward the streaming percept can be modulated by the presentation of an auditory event at the moment of coincidence. However, no reports to date provide sufficient evidence to indicate whether the sound bounce-inducing effect is due to a perceptual binding process or merely to an explicit inference resulting from the transient auditory stimulus resembling a physical collision of two objects. In the present study, we used a novel experimental design in which a subliminal sound was presented either 150 ms before, at, or 150 ms after the moment of coincidence of two disks moving towards each other. The results showed that there was an increased perception of bouncing (rather than streaming) when the subliminal sound was presented at or 150 ms after the moment of coincidence compared to when no sound was presented. These findings provide the first empirical demonstration that activation of the human auditory system without reaching consciousness affects the perception of an ambiguous visual motion display.

  8. [Factors affecting the quality of cardiopulmonary resuscitation in inpatient units: perception of nurses].

    PubMed

    Citolino, Clairton Marcos Filho; Santos, Eduesley Santana; Silva, Rita de Cassia Gengo E; Nogueira, Lilia de Souza

    2015-12-01

    To identify, in the perception of nurses, the factors that affect the quality of cardiopulmonary resuscitation (CPR) in adult inpatient units, and to investigate the influence of work shift and length of professional experience on the perception of these factors. A descriptive, exploratory study was conducted at a hospital specialized in cardiology and pneumology, with a questionnaire administered to 49 nurses working in inpatient units. The majority of nurses reported that the high number of professionals at the scene (75.5%), a lack of harmony (77.6%) or stress in any member of staff (67.3%), lack of material and/or equipment failure (57.1%), lack of familiarity with the emergency trolleys (98.0%), and the presence of family members at the beginning of cardiopulmonary arrest care (57.1%) are factors that adversely affect the quality of care provided during CPR. Length of professional experience and nurses' work shift did not influence the perception of these factors. The identification of factors that affect the quality of CPR, in the perception of nurses, serves as a parameter for implementing improvements and training the staff working in inpatient units.

  9. Categorical Perception of Colour in the Left and Right Visual Field Is Verbally Mediated: Evidence from Korean

    ERIC Educational Resources Information Center

    Roberson, Debi; Pak, Hyensou; Hanley, J. Richard

    2008-01-01

    In this study we demonstrate that Korean (but not English) speakers show Categorical perception (CP) on a visual search task for a boundary between two Korean colour categories that is not marked in English. These effects were observed regardless of whether target items were presented to the left or right visual field. Because this boundary is…

  10. Real-time tracking using stereo and motion: Visual perception for space robotics

    NASA Technical Reports Server (NTRS)

    Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann

    1994-01-01

    The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real-time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC135 reduced gravity aircraft.

  11. Altered visual perception in long-term ecstasy (MDMA) users.

    PubMed

    White, Claire; Brown, John; Edwards, Mark

    2013-09-01

    The present study investigated the long-term consequences of ecstasy use on visual processes thought to reflect serotonergic functions in the occipital lobe. Evidence indicates that the main psychoactive ingredient in ecstasy (methylenedioxymethamphetamine) causes long-term changes to the serotonin system in human users. Previous research has found that amphetamine-abstinent ecstasy users have disrupted visual processing in the occipital lobe which relies on serotonin, with researchers concluding that ecstasy broadens orientation tuning bandwidths. However, other processes may have accounted for these results. The aim of the present research was to determine if amphetamine-abstinent ecstasy users have changes in occipital lobe functioning, as revealed by two studies: a masking study that directly measured the width of orientation tuning bandwidths and a contour integration task that measured the strength of long-range connections in the visual cortex of drug users compared to controls. Participants were compared on the width of orientation tuning bandwidths (26 controls, 12 ecstasy users, 10 ecstasy + amphetamine users) and the strength of long-range connections (38 controls, 15 ecstasy users, 12 ecstasy + amphetamine users) in the occipital lobe. Amphetamine-abstinent ecstasy users had significantly broader orientation tuning bandwidths than controls and significantly lower contour detection thresholds (CDTs), indicating worse performance on the task, than both controls and ecstasy + amphetamine users. These results extend previous research, which is consistent with the proposal that ecstasy may damage the serotonin system, resulting in behavioral changes on tests of visual perception processes which are thought to reflect serotonergic functions in the occipital lobe.

  12. Non-auditory factors affecting urban soundscape evaluation.

    PubMed

    Jeon, Jin Yong; Lee, Pyoung Jik; Hong, Joo Young; Cabrera, Densil

    2011-12-01

    The aim of this study is to characterize urban spaces, which combine landscape, acoustics, and lighting, and to investigate people's perceptions of urban soundscapes through quantitative and qualitative analyses. A general questionnaire survey and soundwalk were performed to investigate soundscape perception in urban spaces. Non-auditory factors (visual image, day lighting, and olfactory perceptions), as well as acoustic comfort, were selected as the main contexts that affect soundscape perception, and context preferences and overall impressions were evaluated using an 11-point numerical scale. For qualitative analysis, a semantic differential test was performed in the form of a social survey, and subjects were also asked to describe their impressions during a soundwalk. The results showed that urban soundscapes can be characterized by soundmarks, and soundscape perceptions are dominated by acoustic comfort, visual images, and day lighting, whereas reverberance in urban spaces does not yield consistent preference judgments. It is posited that the subjective evaluation of reverberance can be replaced by physical measurements. The categories extracted from the qualitative analysis revealed that spatial impressions such as openness and density emerged as some of the contexts of soundscape perception. © 2011 Acoustical Society of America

  13. Perception of Words and Non-Words in the Upper and Lower Visual Fields

    ERIC Educational Resources Information Center

    Darker, Iain T.; Jordan, Timothy R.

    2004-01-01

    The findings of previous investigations into word perception in the upper and the lower visual field (VF) are variable and may have incurred non-perceptual biases caused by the asymmetric distribution of information within a word, an advantage for saccadic eye-movements to targets in the upper VF and the possibility that stimuli were not projected…

  14. Lesions to right posterior parietal cortex impair visual depth perception from disparity but not motion cues

    PubMed Central

    Leopold, David A.; Humphreys, Glyn W.; Welchman, Andrew E.

    2016-01-01

    The posterior parietal cortex (PPC) is understood to be active when observers perceive three-dimensional (3D) structure. However, it is not clear how central this activity is in the construction of 3D spatial representations. Here, we examine whether PPC is essential for two aspects of visual depth perception by testing patients with lesions affecting this region. First, we measured subjects' ability to discriminate depth structure in various 3D surfaces and objects using binocular disparity. Patients with lesions to right PPC (N = 3) exhibited marked perceptual deficits on these tasks, whereas those with left hemisphere lesions (N = 2) were able to reliably discriminate depth as accurately as control subjects. Second, we presented an ambiguous 3D stimulus defined by structure from motion to determine whether PPC lesions influence the rate of bistable perceptual alternations. Patients' percept durations for the 3D stimulus were generally within a normal range, although the two patients with bilateral PPC lesions showed the fastest perceptual alternation rates in our sample. Intermittent stimulus presentation reduced the reversal rate similarly across subjects. Together, the results suggest that PPC plays a causal role in both inferring and maintaining the perception of 3D structure with stereopsis supported primarily by the right hemisphere, but do not lend support to the view that PPC is a critical contributor to bistable perceptual alternations. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269606

  15. Optical Associative Processors For Visual Perception

    NASA Astrophysics Data System (ADS)

    Casasent, David; Telfer, Brian

    1988-05-01

    We consider various associative processor modifications required to allow these systems to be used for visual perception, scene analysis, and object recognition. For these applications, decisions on the class of the objects present in the input image are required and thus heteroassociative memories are necessary (rather than the autoassociative memories that have been given most attention). We analyze the performance of both associative processors and note that there is considerable difference between heteroassociative and autoassociative memories. We describe associative processors suitable for realizing functions such as: distortion invariance (using linear discriminant function memory synthesis techniques), noise and image processing performance (using autoassociative memories in cascade with a heteroassociative processor and with a finite number of autoassociative memory iterations employed), shift invariance (achieved through the use of associative processors operating on feature space data), and the analysis of multiple objects in high noise (which is achieved using associative processing of the output from symbolic correlators). We detail and provide initial demonstrations of the use of associative processors operating on iconic, feature space and symbolic data, as well as adaptive associative processors.

  16. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    NASA Astrophysics Data System (ADS)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy in affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger and disgust as well as the emotion-less state which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.
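    The combination of complementary modalities described above can be illustrated with a simple late-fusion sketch. This is a minimal illustration under assumed numbers and an assumed weighted-averaging rule, not the authors' actual system; only the six emotional states follow the abstract.

    ```python
    import numpy as np

    # Illustrative late fusion (hypothetical probabilities, not the paper's
    # system): combine per-emotion probability estimates from a visual-facial
    # classifier and a keyboard-stroke classifier by weighted averaging.
    EMOTIONS = ["happiness", "sadness", "surprise", "anger", "disgust", "neutral"]

    def fuse(p_visual, p_keyboard, w_visual=0.6):
        # Weighted average of the two modality outputs, renormalized.
        p = w_visual * np.asarray(p_visual) + (1 - w_visual) * np.asarray(p_keyboard)
        return p / p.sum()

    p_vis = [0.30, 0.05, 0.35, 0.10, 0.10, 0.10]  # visual alone is ambiguous
    p_key = [0.10, 0.05, 0.60, 0.10, 0.05, 0.10]  # keyboard favours "surprise"
    fused = fuse(p_vis, p_key)
    predicted = EMOTIONS[int(np.argmax(fused))]   # the keyboard cue resolves the tie
    ```

    Here the keyboard modality disambiguates a visually uncertain case, which is the sense in which the two modalities are complementary.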

  17. The affect heuristic, mortality salience, and risk: domain-specific effects of a natural disaster on risk-benefit perception.

    PubMed

    Västfjäll, Daniel; Peters, Ellen; Slovic, Paul

    2014-12-01

    We examine how affect and accessible thoughts following a major natural disaster influence everyday risk perception. A survey was conducted in the months following the 2004 south Asian Tsunami in a representative sample of the Swedish population (N = 733). Respondents rated their experienced affect as well as the perceived risk and benefits of various everyday decision domains. Affect influenced risk and benefit perception in a way that could be predicted from both the affect-congruency and affect heuristic literatures (increased risk perception and stronger risk-benefit correlations). However, in some decision domains, self-regulation goals primed by the natural disaster predicted risk and benefit ratings. Together, these results show that affect, accessible thoughts and motivational states influence perceptions of risks and benefits. © 2014 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  18. Sociodemographic and lifestyle factors affecting the self-perception period of lower urinary tract symptoms of international prostate symptom score items.

    PubMed

    Kim, J H; Shim, S R; Lee, W J; Kim, H J; Kwon, S-S; Bae, J H

    2012-12-01

    This study investigated the influence of sociodemographic and lifestyle factors on the lower urinary tract symptom (LUTS) self-perception period and International Prostate Symptom Score. This cross-sectional study examined 209 men aged ≥ 40 years with non-treated LUTS who participated in a prostate examination survey. Questions included International Prostate Symptom Score (IPSS) items with self-perception periods for each item. Sociodemographic and lifestyle factors were also assessed. Participants were divided into mild LUTS (IPSS less than 8) and moderate-to-severe LUTS (IPSS 8 or higher) groups. Self-perception period of the moderate-to-severe LUTS group (n = 110) was affected by BMI; the self-perception period of the mild LUTS group (n = 90) was affected by age, income, occupation and concomitant disease. Moderate-to-severe LUTS were affected by self-perception period (p = 0.03). By multivariate analysis, self-perception period was affected by concern for health (p = 0.005), and the self-perception period of mild LUTS was affected by BMI (p = 0.012). Moderate-to-severe LUTS were affected by age, number of family members, concern for health and drinking (p < 0.05, respectively) by multivariate analysis. Lower urinary tract symptoms were affected by the self-perception period. In moderate-to-severe LUTS, age, concern for health and drinking were factors affecting the self-perception period. © 2012 Blackwell Publishing Ltd.

  19. A dual systems account of visual perception: Predicting candy consumption from distance estimates.

    PubMed

    Krpan, Dario; Schnall, Simone

    2017-04-01

    A substantial amount of evidence shows that visual perception is influenced by forces that control human actions, ranging from motivation to physiological potential. However, studies have not yet provided convincing evidence that perception itself is directly involved in everyday behaviors such as eating. We suggest that this issue can be resolved by employing the dual systems account of human behavior. We tested the link between perceived distance to candies and their consumption for participants who were tired or depleted (impulsive system), versus those who were not (reflective system). Perception predicted eating only when participants were tired (Experiment 1) or depleted (Experiments 2 and 3). In contrast, a rational determinant of behavior, eating restraint towards candies, predicted eating for non-depleted individuals (Experiment 2). Finally, Experiment 3 established that perceived distance was correlated with participants' self-reported motivation to consume candies. Overall, these findings suggest that the dynamics between perception and behavior depend on the interplay of the two behavioral systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Cross-Modal and Intra-Modal Characteristics of Visual Function and Speech Perception Performance in Postlingually Deafened, Cochlear Implant Users

    PubMed Central

    Kim, Min-Beom; Shim, Hyun-Yong; Jin, Sun Hwa; Kang, Soojin; Woo, Jihwan; Han, Jong Chul; Lee, Ji Young; Kim, Martha; Cho, Yang-Sun

    2016-01-01

    Evidence of visual-auditory cross-modal plasticity in deaf individuals has been widely reported. Superior visual abilities of deaf individuals have been shown to result in enhanced reactivity to visual events and/or enhanced peripheral spatial attention. The goal of this study was to investigate the association between visual-auditory cross-modal plasticity and speech perception in post-lingually deafened, adult cochlear implant (CI) users. Post-lingually deafened adults with CIs (N = 14) and a group of normal hearing, adult controls (N = 12) participated in this study. The CI participants were divided into a good performer group (good CI, N = 7) and a poor performer group (poor CI, N = 7) based on word recognition scores. Visual evoked potentials (VEP) were recorded from the temporal and occipital cortex to assess reactivity. Visual field (VF) testing was used to assess spatial attention and Goldmann perimetry measures were analyzed to identify differences across groups in the VF. The association of the amplitude of the P1 VEP response over the right temporal or occipital cortex among three groups (control, good CI, poor CI) was analyzed. In addition, the association between VF by different stimuli and word perception score was evaluated. The P1 VEP amplitude recorded from the right temporal cortex was larger in the group of poorly performing CI users than the group of good performers. The P1 amplitude recorded from electrodes near the occipital cortex was smaller for the poor performing group. P1 VEP amplitude in right temporal lobe was negatively correlated with speech perception outcomes for the CI participants (r = -0.736, P = 0.003). However, P1 VEP amplitude measures recorded from near the occipital cortex had a positive correlation with speech perception outcome in the CI participants (r = 0.775, P = 0.001). In VF analysis, CI users showed narrowed central VF (VF to low intensity stimuli). However, their far peripheral VF (VF to high intensity stimuli) was…

  1. Olfactory-visual integration facilitates perception of subthreshold negative emotion.

    PubMed

    Novak, Lucas R; Gitelman, Darren R; Schuyler, Brianna; Li, Wen

    2015-10-01

    A fast-growing literature of multisensory emotion integration notwithstanding, the chemical senses, intimately associated with emotion, have been largely overlooked. Moreover, an ecologically highly relevant principle of “inverse effectiveness”, rendering maximal integration efficacy with impoverished sensory input, remains to be assessed in emotion integration. Presenting minute, subthreshold negative (vs. neutral) cues in faces and odors, we demonstrated olfactory-visual emotion integration in improved emotion detection (especially among individuals with weaker perception of unimodal negative cues) and response enhancement in the amygdala. Moreover, while perceptual gain for visual negative emotion involved the posterior superior temporal sulcus/pSTS, perceptual gain for olfactory negative emotion engaged both the associative olfactory (orbitofrontal) cortex and amygdala. Dynamic causal modeling (DCM) analysis of fMRI time series further revealed connectivity strengthening among these areas during crossmodal emotion integration. That multisensory (but not low-level unisensory) areas exhibited both enhanced response and region-to-region coupling favors a top-down (vs. bottom-up) account for olfactory-visual emotion integration. Current findings thus confirm the involvement of multisensory convergence areas, while highlighting unique characteristics of olfaction-related integration. Furthermore, successful crossmodal binding of subthreshold aversive cues not only supports the principle of “inverse effectiveness” in emotion integration but also accentuates the automatic, unconscious quality of crossmodal emotion synthesis. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  3. McGurk illusion recalibrates subsequent auditory perception

    PubMed Central

    Lüttke, Claudia S.; Ekman, Matthias; van Gerven, Marcel A. J.; de Lange, Floris P.

    2016-01-01

    Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged into the percept of ‘ada’. It is less clear however whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as ‘ada’. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as ‘ada’, activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960

  4. Face perception in women with Turner syndrome and its underlying factors.

    PubMed

    Anaki, David; Zadikov Mor, Tal; Gepstein, Vardit; Hochberg, Ze'ev

    2016-09-01

    Turner syndrome (TS) is a chromosomal condition that affects development in females. It is characterized by short stature, ovarian failure and other congenital malformations, due to a partial or complete absence of the sex chromosome. Women with TS frequently suffer from various physical and hormonal dysfunctions, along with impairments in visual-spatial processing and social cognition difficulties. Previous research has also shown difficulties in face and emotion perception. In the current study we examined two questions. The first was whether women with TS, who are impaired in face perception, also suffer from deficits in face-specific processes. The second was whether these face impairments in TS are related to the visual-spatial perceptual dysfunctions exhibited by TS individuals, or to impaired social cognition skills. Twenty-six women with TS and 26 control participants were tested on various cognitive and psychological tests to assess visual-spatial perception, face and facial expression perception, and social cognition skills. Results show that women with TS were less accurate in face perception and facial expression processing, yet they exhibited normal face-specific processes (configural and holistic processing). They also showed difficulties in spatial perception and social cognition capacities. Additional analyses revealed that their face perception impairments were related to their deficits in visual-spatial processing. Thus, our results do not support the claim that the impairments in face processing observed in TS are related to difficulties in social cognition. Rather, our data point to the possibility that face perception difficulties in TS stem from visual-spatial impairments and may not be specific to faces. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. The Analysis of Reading Skills and Visual Perception Levels of First Grade Turkish Students

    ERIC Educational Resources Information Center

    Memis, Aysel; Sivri, Diler Ayvaz

    2016-01-01

    In this study, primary school first grade students' reading skills and visual perception levels were investigated. Sample of the study, which was designed with relational scanning model, consisted of 168 first grade students studying at three public primary schools in Kozlu, Zonguldak, in 2013-2014 education year. Students' reading level, reading…

  6. Perceptions and Concerns about Inclusive Education among Students with Visual Impairments in Lagos, Nigeria

    ERIC Educational Resources Information Center

    Brydges, Colton; Mkandawire, Paul

    2017-01-01

    This article examines the perceptions of inclusive education in Lagos, Nigeria, based upon in-depth interviews conducted with students with visual impairments during the month of July 2013. The results and discussions are situated within critical disability theory. Despite decades of inclusive education policies, the findings of the study show…

  7. Changing motor perception by sensorimotor conflicts and body ownership

    PubMed Central

    Salomon, R.; Fernandez, N. B.; van Elk, M.; Vachicouras, N.; Sabatier, F.; Tychinskaya, A.; Llobera, J.; Blanke, O.

    2016-01-01

    Experimentally induced sensorimotor conflicts can result in a loss of the feeling of control over a movement (sense of agency). These findings are typically interpreted in terms of a forward model in which the predicted sensory consequences of the movement are compared with the observed sensory consequences. In the present study we investigated whether a mismatch between movements and their observed sensory consequences does not only result in a reduced feeling of agency, but may affect motor perception as well. Visual feedback of participants’ finger movements was manipulated using virtual reality to be anatomically congruent or incongruent to the performed movement. Participants made a motor perception judgment (i.e. which finger did you move?) or a visual perceptual judgment (i.e. which finger did you see moving?). Subjective measures of agency and body ownership were also collected. Seeing movements that were visually incongruent to the performed movement resulted in a lower accuracy for motor perception judgments, but not visual perceptual judgments. This effect was modified by rotating the virtual hand (Exp.2), but not by passively induced movements (Exp.3). Hence, sensorimotor conflicts can modulate the perception of one’s motor actions, causing viewed “alien actions” to be felt as one’s own. PMID:27225834

  8. Perception and performance in flight simulators: The contribution of vestibular, visual, and auditory information

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The pilot's perception and performance in flight simulators is examined. The areas investigated include: vestibular stimulation, flight management and man cockpit information interfacing, and visual perception in flight simulation. The effects of higher levels of rotary acceleration on response time to constant acceleration, tracking performance, and thresholds for angular acceleration are examined. Areas of flight management examined are cockpit display of traffic information, work load, synthetic speech call outs during the landing phase of flight, perceptual factors in the use of a microwave landing system, automatic speech recognition, automation of aircraft operation, and total simulation of flight training.

  9. The Tripartite Model of Risk Perception (TRIRISK): Distinguishing Deliberative, Affective, and Experiential Components of Perceived Risk.

    PubMed

    Ferrer, Rebecca A; Klein, William M P; Persoskie, Alexander; Avishai-Yitshak, Aya; Sheeran, Paschal

    2016-10-01

    Although risk perception is a key predictor in health behavior theories, current conceptions of risk comprise only one (deliberative) or two (deliberative vs. affective/experiential) dimensions. This research tested a tripartite model that distinguishes among deliberative, affective, and experiential components of risk perception. In two studies, and in relation to three common diseases (cancer, heart disease, diabetes), we used confirmatory factor analyses to examine the factor structure of the tripartite risk perception (TRIRISK) model and compared the fit of the TRIRISK model to dual-factor and single-factor models. In a third study, we assessed concurrent validity by examining the impact of cancer diagnosis on (a) levels of deliberative, affective, and experiential risk perception, and (b) the strength of relations among risk components, and tested predictive validity by assessing relations with behavioral intentions to prevent cancer. The tripartite factor structure was supported, producing better model fit across diseases (studies 1 and 2). Inter-correlations among the components were significantly smaller among participants who had been diagnosed with cancer, suggesting that affected populations make finer-grained distinctions among risk perceptions (study 3). Moreover, all three risk perception components predicted unique variance in intentions to engage in preventive behavior (study 3). The TRIRISK model offers both a novel conceptualization of health-related risk perceptions, and new measures that enhance predictive validity beyond that engendered by unidimensional and bidimensional models. The present findings have implications for the ways in which risk perceptions are targeted in health behavior change interventions, health communications, and decision aids.

  10. Cardinal rules: Visual orientation perception reflects knowledge of environmental statistics

    PubMed Central

    Girshick, Ahna R.; Landy, Michael S.; Simoncelli, Eero P.

    2011-01-01

    Humans are remarkably good at performing visual tasks, but experimental measurements reveal substantial biases in the perception of basic visual attributes. An appealing hypothesis is that these biases arise through a process of statistical inference, in which information from noisy measurements is fused with a probabilistic model of the environment. But such inference is optimal only if the observer’s internal model matches the environment. Here, we provide evidence that this is the case. We measured performance in an orientation-estimation task, demonstrating the well-known fact that orientation judgements are more accurate at cardinal (horizontal and vertical) orientations, along with a new observation that judgements made under conditions of uncertainty are strongly biased toward cardinal orientations. We estimate observers’ internal models for orientation and find that they match the local orientation distribution measured in photographs. We also show how a neural population could embed probabilistic information responsible for such biases. PMID:21642976
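    The statistical-inference account above (a noisy measurement fused with a probabilistic model of the environment) can be illustrated with a minimal grid computation. This is a toy sketch, not the authors' encoding model: the prior shape, concentration values, and the 80-degree stimulus are all assumed for illustration.

    ```python
    import numpy as np

    # Toy Bayesian fusion on a grid of orientations (degrees, 0-180).
    theta = np.linspace(0.0, 180.0, 1801)

    def vm(theta_deg, mu_deg, kappa):
        # von Mises-shaped bump on the 180-degree orientation circle
        return np.exp(kappa * np.cos(np.deg2rad(2.0 * (theta_deg - mu_deg))))

    # Prior peaking at the cardinals (0/180 horizontal, 90 vertical),
    # mimicking the orientation statistics of natural images.
    prior = vm(theta, 0.0, 2.0) + vm(theta, 90.0, 2.0)
    prior /= prior.sum()

    # Likelihood: a noisy measurement of an oblique stimulus at 80 degrees.
    likelihood = vm(theta, 80.0, 1.0)

    posterior = prior * likelihood
    posterior /= posterior.sum()

    map_estimate = theta[int(np.argmax(posterior))]
    # The MAP estimate lies between 80 and 90 degrees: under uncertainty,
    # the percept is biased toward the nearby cardinal (vertical).
    ```

    The size of the bias grows as the likelihood broadens (lower kappa), which is the qualitative signature the abstract reports for judgements made under conditions of uncertainty.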

  11. Assessing the effect of physical differences in the articulation of consonants and vowels on audiovisual temporal perception

    PubMed Central

    Vatakis, Argiro; Maragos, Petros; Rodomagoulakis, Isidoros; Spence, Charles

    2012-01-01

    We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analyzed using an auditory-visual signal saliency model in order to compare signal saliency and behavioral data. Participants made temporal order judgments (TOJs) regarding which speech-stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of the place of articulation and roundedness, where the visual-speech signal is more salient, temporal perception of speech would be modulated by the visual-speech signal. No such effect was expected for the manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly-salient speech-signals with the visual-signals requiring smaller visual-leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual-speech signal may lead to higher probabilities regarding the identity of the auditory-signal that modulate the temporal window of multisensory integration of the speech-stimulus. PMID:23060756
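    The PSS analysis described above can be sketched with a toy psychometric fit. The SOAs and response proportions below are hypothetical (not the study's data), and a logistic function stands in for whatever model the authors fitted; the PSS is simply the SOA at which "visual first" and "auditory first" responses are equally likely.

    ```python
    import numpy as np

    # Hypothetical TOJ data: stimulus onset asynchrony in ms (negative =
    # visual stream led) and proportion of "visual first" responses.
    soas = np.array([-240.0, -160.0, -80.0, 0.0, 80.0, 160.0, 240.0])
    p_visual_first = np.array([0.95, 0.88, 0.75, 0.56, 0.34, 0.17, 0.08])

    def logistic(x, mu, s):
        # Decreasing psychometric function; p = 0.5 at x = mu (the PSS).
        return 1.0 / (1.0 + np.exp((x - mu) / s))

    # Brute-force least-squares fit over a coarse parameter grid.
    best_err, pss, slope = np.inf, 0.0, 0.0
    for mu in np.arange(-60.0, 61.0, 1.0):
        for s in np.arange(40.0, 201.0, 2.0):
            err = float(np.sum((logistic(soas, mu, s) - p_visual_first) ** 2))
            if err < best_err:
                best_err, pss, slope = err, mu, s

    # A positive PSS here means the visual stream had to lead slightly
    # for the two streams to be judged simultaneous (a "visual lead").
    ```

    With these made-up data the fitted PSS comes out near +20 ms, i.e. a small visual lead, which is the kind of quantity the abstract compares across places and manners of articulation.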

  12. Atypical audio-visual speech perception and McGurk effects in children with specific language impairment

    PubMed Central

    Leybaert, Jacqueline; Macchi, Lucie; Huyse, Aurélie; Champoux, François; Bayard, Clémence; Colin, Cécile; Berthommier, Frédéric

    2014-01-01

    Audiovisual speech perception of children with specific language impairment (SLI) and children with typical language development (TLD) was compared in two experiments using /aCa/ syllables presented in the context of a masking release paradigm. Children had to repeat syllables presented in auditory alone, visual alone (speechreading), and audiovisual congruent and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed an auditory masking release effect similar to that of children with TLD. Children with SLI also gave fewer correct responses in speechreading than children with TLD, indicating impairment in phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of percent of information transmitted revealed a deficit in the children with SLI, particularly for the place of articulation feature. Taken together, the data support the hypothesis of an intact peripheral processing of auditory speech information, coupled with a supramodal deficit of phonemic categorization in children with SLI. Clinical implications are discussed. PMID:24904454

  13. Atypical audio-visual speech perception and McGurk effects in children with specific language impairment.

    PubMed

    Leybaert, Jacqueline; Macchi, Lucie; Huyse, Aurélie; Champoux, François; Bayard, Clémence; Colin, Cécile; Berthommier, Frédéric

    2014-01-01

    Audiovisual speech perception of children with specific language impairment (SLI) and children with typical language development (TLD) was compared in two experiments using /aCa/ syllables presented in the context of a masking release paradigm. Children had to repeat syllables presented in auditory alone, visual alone (speechreading), and audiovisual congruent and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed an auditory masking release effect similar to that of children with TLD. Children with SLI also gave fewer correct responses in speechreading than children with TLD, indicating impairment in phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of percent of information transmitted revealed a deficit in the children with SLI, particularly for the place of articulation feature. Taken together, the data support the hypothesis of an intact peripheral processing of auditory speech information, coupled with a supramodal deficit of phonemic categorization in children with SLI. Clinical implications are discussed.

  14. Understanding the Cognitive and Affective Mechanisms that Underlie Proxy Risk Perceptions among Caregivers of Asthmatic Children.

    PubMed

    Shepperd, James A; Lipsey, Nikolette P; Pachur, Thorsten; Waters, Erika A

    2018-07-01

    Medical decisions made on behalf of another person-particularly those made by adult caregivers for their minor children-are often informed by the decision maker's beliefs about the treatment's risks and benefits. However, we know little about the cognitive and affective mechanisms influencing such "proxy" risk perceptions and about how proxy risk perceptions are related to prominent judgment phenomena. Adult caregivers of minor children with asthma ( N = 132) completed an online, cross-sectional survey assessing 1) cognitions and affects that form the basis of the availability, representativeness, and affect heuristics; 2) endorsement of the absent-exempt and the better-than-average effect; and 3) proxy perceived risk and unrealistic comparative optimism of an asthma exacerbation. We used the Pediatric Asthma Control and Communication Instrument (PACCI) to assess asthma severity. Respondents with higher scores on availability, representativeness, and negative affect indicated higher proxy risk perceptions and (for representativeness only) lower unrealistic optimism, irrespective of asthma severity. Conversely, respondents who showed a stronger display of the better-than-average effect indicated lower proxy risk perceptions but did not differ in unrealistic optimism. The absent-exempt effect was unrelated to proxy risk perceptions and unrealistic optimism. Heuristic judgment processes appear to contribute to caregivers' proxy risk perceptions of their child's asthma exacerbation risk. Moreover, the display of other, possibly erroneous, judgment phenomena is associated with lower caregiver risk perceptions. Designing interventions that target these mechanisms may help caregivers work with their children to reduce exacerbation risk.

  15. Talker variability in audio-visual speech perception

    PubMed Central

    Heald, Shannon L. M.; Nusbaum, Howard C.

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories, and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker’s face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker’s face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker’s face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred. PMID:25076919

  16. Talker variability in audio-visual speech perception.

    PubMed

    Heald, Shannon L M; Nusbaum, Howard C

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories, and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred.

  17. Blind jealousy? Romantic insecurity increases emotion-induced failures of visual perception.

    PubMed

    Most, Steven B; Laurenceau, Jean-Philippe; Graber, Elana; Belcher, Amber; Smith, C Veronica

    2010-04-01

    Does the influence of close relationships pervade so deeply as to impact visual awareness? Results from two experiments involving heterosexual romantic couples suggest that it does. Female partners from each couple performed a rapid detection task in which negative emotional distractors typically disrupt visual awareness of subsequent targets; at the same time, their male partners rated attractiveness first of landscapes, then of photos of other women. At the end of both experiments, the degree to which female partners indicated uneasiness about their male partner looking at and rating other women correlated significantly with the degree to which negative emotional distractors had disrupted their target perception during that time. This relationship was robust even when controlling for individual differences in baseline performance. Thus, emotions elicited by social contexts appear to wield power even at the level of perceptual processing. Copyright 2010 APA, all rights reserved.

  18. Visual masking and the dynamics of human perception, cognition, and consciousness A century of progress, a contemporary synthesis, and future directions

    PubMed Central

    Ansorge, Ulrich; Francis, Gregory; Herzog, Michael H.; Öğmen, Haluk

    2008-01-01

    The 1990s, the “decade of the brain,” witnessed major advances in the study of visual perception, cognition, and consciousness. Impressive techniques in neurophysiology, neuroanatomy, neuropsychology, electrophysiology, psychophysics and brain-imaging were developed to address how the nervous system transforms and represents visual inputs. Many of these advances have dealt with the steady-state properties of processing. To complement this “steady-state approach,” more recent research emphasized the importance of dynamic aspects of visual processing. Visual masking has been a paradigm of choice for more than a century when it comes to the study of dynamic vision. A recent workshop (http://lpsy.epfl.ch/VMworkshop/), held in Delmenhorst, Germany, brought together an international group of researchers to present state-of-the-art research on dynamic visual processing with a focus on visual masking. This special issue presents peer-reviewed contributions by the workshop participants and provides a contemporary synthesis of how visual masking can inform the dynamics of human perception, cognition, and consciousness. PMID:20517493

  19. Factors Affecting Parent's Perception on Air Quality-From the Individual to the Community Level.

    PubMed

    Guo, Yulin; Liu, Fengfeng; Lu, Yuanan; Mao, Zongfu; Lu, Hanson; Wu, Yanyan; Chu, Yuanyuan; Yu, Lichen; Liu, Yisi; Ren, Meng; Li, Na; Chen, Xi; Xiang, Hao

    2016-05-12

    Public perception of air quality significantly affects acceptance of the government's environmental policies. The aim of this research was to explore the relationship between parents' perception of air quality and scientific monitoring data, and to analyze the factors that affect parents' perceptions. Scientific data on air quality were obtained from Wuhan's environmental condition reports. One thousand parents were surveyed about their knowledge and perception of air quality. The scientific data show that the air quality of Wuhan follows an improving trend in general, while most participants believed that the air quality of Wuhan had deteriorated, indicating a significant difference between public perception and reality. On the individual level, respondents aged 40 or above (40 or above: OR = 3.252; 95% CI: 1.170-9.040), with a higher educational level (college and above: OR = 7.598; 95% CI: 2.244-25.732), or with children in poor health (poor: OR = 6.864; 95% CI: 2.212-21.302) had a much more negative perception of air quality. On the community level, industrial facilities, vehicles, and city construction had major effects on parents' perception of air quality. Our investigation provides baseline information on the public's perception and expectations of air quality for environmental policy researchers and makers, which can benefit the formulation and enforcement of environmental policy.
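    The odds ratios above presumably come from a multivariable regression model, which the abstract does not detail. As a hedged illustration of where such figures come from, the sketch below computes an unadjusted odds ratio and its Wald-type 95% confidence interval from a hypothetical 2x2 table (the counts are invented, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, purely illustrative
or_, lo, hi = odds_ratio_ci(30, 20, 25, 50)
print(f"OR = {or_:.3f}, 95% CI: {lo:.3f}-{hi:.3f}")
```

    A CI excluding 1.0, as in the abstract's figures, indicates a statistically reliable association.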

  20. Emotional voice and emotional body postures influence each other independently of visual awareness.

    PubMed

    Stienen, Bernard M C; Tanaka, Akihiro; de Gelder, Beatrice

    2011-01-01

    Multisensory integration may occur independently of visual attention as previously shown with compound face-voice stimuli. We investigated in two experiments whether the perception of whole body expressions and the perception of voices influence each other when observers are not aware of seeing the bodily expression. In the first experiment participants categorized masked happy and angry bodily expressions while ignoring congruent or incongruent emotional voices. The onset between target and mask varied from -50 to +133 ms. Results show that the congruency between the emotion in the voice and the bodily expressions influences audiovisual perception independently of the visibility of the stimuli. In the second experiment participants categorized the emotional voices combined with masked bodily expressions as fearful or happy. This experiment showed that bodily expressions presented outside visual awareness still influence prosody perception. Our experiments show that audiovisual integration between bodily expressions and affective prosody can take place outside and independent of visual awareness.

  1. Pairwise comparisons and visual perceptions of equal area polygons.

    PubMed

    Adamic, P; Babiy, V; Janicki, R; Kakiashvili, T; Koczkodaj, W W; Tadeusiewicz, R

    2009-02-01

    Studies related to visual perception have been plentiful in recent years. Participants rated the areas of five randomly generated shapes of equal area, using a reference unit area that was displayed together with the shapes. Respondents were 179 university students from Canada and Poland. The average error when respondents estimated against the unit square was 25.75%. The error decreased substantially, to 5.51%, when the shapes were compared to one another in pairs. This gain of 20.24% for this two-dimensional experiment was substantially better than the 11.78% gain reported in previous one-dimensional experiments. This is the first statistically sound two-dimensional experiment demonstrating that pairwise comparisons improve accuracy.
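    The abstract does not specify how the paired judgments were aggregated. One standard way to turn a matrix of pairwise ratio judgments into relative estimates is the geometric-mean method; the sketch below uses invented judgments for three shapes of truly equal area:

```python
import math

def weights_from_pairwise(M):
    """Derive relative area estimates from a pairwise comparison
    matrix M, where M[i][j] is the judged ratio area_i / area_j.
    Uses the geometric-mean (row geometric mean, then normalize) method."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical, slightly inconsistent judgments for three equal-area shapes
M = [[1.0,   1.2,   0.9],
     [1/1.2, 1.0,   0.8],
     [1/0.9, 1/0.8, 1.0]]
w = weights_from_pairwise(M)
```

    With perfectly consistent judgments the weights would all equal 1/3; deviations reveal the judgment error that the pairwise protocol keeps small.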

  2. A model of attention-guided visual perception and recognition.

    PubMed

    Rybak, I A; Gusakova, V I; Golovan, A V; Podladchikova, L N; Shevtsova, N A

    1998-08-01

    A model of visual perception and recognition is described. The model contains: (i) a low-level subsystem which performs both a fovea-like transformation and detection of primary features (edges), and (ii) a high-level subsystem which includes separated 'what' (sensory memory) and 'where' (motor memory) structures. Image recognition occurs during the execution of a 'behavioral recognition program' formed during the primary viewing of the image. The recognition program contains both programmed attention window movements (stored in the motor memory) and predicted image fragments (stored in the sensory memory) for each consecutive fixation. The model shows the ability to recognize complex images (e.g. faces) invariantly with respect to shift, rotation and scale.

  3. Macroscopic brain dynamics during verbal and pictorial processing of affective stimuli.

    PubMed

    Keil, Andreas

    2006-01-01

    Emotions can be viewed as action dispositions, preparing an individual to act efficiently and successfully in situations of behavioral relevance. To initiate optimized behavior, it is essential to accurately process the perceptual elements indicative of emotional relevance. The present chapter discusses effects of affective content on neural and behavioral parameters of perception, across different information channels. Electrocortical data are presented from studies examining affective perception with pictures and words in different task contexts. As a main result, these data suggest that sensory facilitation has an important role in affective processing. Affective pictures appear to facilitate perception as a function of emotional arousal at multiple levels of visual analysis. If the discrimination between affectively arousing vs. nonarousing content relies on fine-grained differences, amplification of the cortical representation may occur as early as 60-90 ms after stimulus onset. Affectively arousing information as conveyed via visual verbal channels was not subject to such very early enhancement. However, electrocortical indices of lexical access and/or activation of semantic networks showed that affectively arousing content may enhance the formation of semantic representations during word encoding. It can be concluded that affective arousal is associated with activation of widespread networks, which act to optimize sensory processing. On the basis of prioritized sensory analysis for affectively relevant stimuli, subsequent steps such as working memory, motor preparation, and action may be adjusted to meet the adaptive requirements of the situation perceived.

  4. Multisensory Rehabilitation Training Improves Spatial Perception in Totally but Not Partially Visually Deprived Children

    PubMed Central

    Cappagli, Giulia; Finocchietti, Sara; Baud-Bovy, Gabriel; Cocchi, Elena; Gori, Monica

    2017-01-01

    Since it has been shown that spatial development can be delayed in blind children, focused sensorimotor trainings that associate auditory and motor information might be used to prevent the risk of spatial-related developmental delays or impairments from an early age. With this aim, we proposed a new technological device based on the implicit link between action and perception: ABBI (Audio Bracelet for Blind Interaction) is an audio bracelet that produces a sound when a movement occurs, allowing the substitution of the visuo-motor association with a new audio-motor association. In this study, we assessed the effects of an extensive but entertaining sensorimotor training with ABBI on the development of spatial hearing in a group of seven 3- to 5-year-old children with congenital blindness (n = 2; light perception or no perception of light) or low vision (n = 5; visual acuity range 1.1–1.7 LogMAR). The training required the participants to play several spatial games individually and/or together with the psychomotor therapist 1 h per week for 3 months: the spatial games consisted of exercises meant to train their ability to associate auditory and motor-related signals from their body, in order to foster the development of multisensory processes. We measured spatial performance by asking participants to indicate the position of one single fixed (static condition) or moving (dynamic condition) sound source on a vertical sensorized surface. We found that spatial performance of congenitally blind but not low vision children improved after the training, indicating that early interventions with the use of science-driven devices based on multisensory capabilities can provide consistent advancements in therapeutic interventions, improving the quality of life of children with visual disability. PMID:29097987

  5. Which technology to investigate visual perception in sport: video vs. virtual reality.

    PubMed

    Vignais, Nicolas; Kulpa, Richard; Brault, Sébastien; Presse, Damien; Bideau, Benoit

    2015-02-01

    Visual information uptake is a fundamental element of sports involving interceptive tasks. Several methodologies, like video and methods based on virtual environments, are currently employed to analyze visual perception during sport situations. Both techniques have advantages and drawbacks. The goal of this study is to determine which of these technologies may be preferentially used to analyze visual information uptake during a sport situation. To this aim, we compared a handball goalkeeper's performance using two standardized methodologies: video clip and virtual environment. We examined this performance for two response tasks: an uncoupled task (goalkeepers show where the ball ends) and a coupled task (goalkeepers try to intercept the virtual ball). Variables investigated in this study were percentage of correct zones, percentage of correct responses, radial error and response time. The results showed that handball goalkeepers were more effective, more accurate and started to intercept earlier when facing a virtual handball thrower than when facing the video clip. These findings suggested that the analysis of visual information uptake for handball goalkeepers was better performed by using a 'virtual reality'-based methodology. Technical and methodological aspects of these findings are discussed further. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. How does glaucoma look?: patient perception of visual field loss.

    PubMed

    Crabb, David P; Smith, Nicholas D; Glen, Fiona C; Burton, Robyn; Garway-Heath, David F

    2013-06-01

    To explore patient perception of vision loss in glaucoma and, specifically, to test the hypothesis that patients do not recognize their impairment as a black tunnel effect or as black patches in their field of view. Clinic-based cross-sectional study. Fifty patients (age range, 52-82 years) with visual acuity better than 20/30 and with a range of glaucomatous visual field (VF) defects in both eyes, excluding those with very advanced disease (perimetrically blind). Participants underwent monocular VF testing in both eyes using a Humphrey Field Analyzer (HFA; Carl Zeiss Meditec, Dublin, CA; 24-2 Swedish interactive threshold algorithm standard tests) and other tests of visual function. Participants took part in a recorded interview during which they were asked if they were aware of their VF loss; if so, they were encouraged to describe it in their own words. Participants were shown 6 images modified in a variety of ways on a computer monitor and were asked to select the image that most closely represented their perception of their VF loss. Forced choice of an image best representing glaucomatous vision impairment. Participants had a range of VF defect severity: average HFA mean deviation was -8.7 dB (standard deviation [SD], 5.8 dB) and -10.5 dB (SD, 7.1 dB) in the right and left eyes, respectively. Thirteen patients (26%; 95% confidence interval [CI], 15%-40%) reported being completely unaware of their vision loss. None of the patients chose the images with a distinct black tunnel effect or black patches. Only 2 patients (4%; 95% CI, 0%-14%) chose the image with a tunnel effect with blurred edges. An image depicting blurred patches and another with missing patches were chosen by 54% (95% CI, 39%-68%) and 16% (95% CI, 7%-29%) of the patients, respectively. Content analysis of the transcripts from the recorded interviews indicated a frequent use of descriptors of visual symptoms associated with reported blur and missing features. Patients with glaucoma do not perceive their vision loss as a black tunnel effect or as black patches; instead, they tend to describe blurred or missing areas in their field of view.
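    The abstract does not state which interval method produced the reported 95% CIs. As a sketch, the Wilson score interval for the 13 of 50 patients who were unaware of their vision loss lands close to the reported 15%-40% bounds:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = k / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / denom, (centre + margin) / denom

# 13 of 50 patients reported being unaware of their vision loss
lo, hi = wilson_ci(13, 50)
print(f"26% unaware, 95% CI: {lo:.1%}-{hi:.1%}")
```

    Other interval constructions (exact binomial, Wald) give slightly different bounds, so this is an illustration of the calculation, not a claim about the authors' method.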

  7. A method for real-time visual stimulus selection in the study of cortical object perception.

    PubMed

    Leeds, Daniel D; Tarr, Michael J

    2016-06-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm³ brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) Searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) real-time estimation of cortical responses to stimuli is reasonably consistent; 3) search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond.

  8. A method for real-time visual stimulus selection in the study of cortical object perception

    PubMed Central

    Leeds, Daniel D.; Tarr, Michael J.

    2016-01-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit’s image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across predetermined 1 cm3 brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) Searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) Real-time estimation of cortical responses to stimuli is reasonably consistent; 3) Search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond. PMID:26973168
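    The two records above describe a closed-loop search through a stimulus feature space, but not the search algorithm itself. A minimal greedy sketch of the general idea follows, with a toy response function standing in for measured BOLD signals (all names and parameters are illustrative, not the authors' implementation):

```python
import random

def realtime_search(respond, dim=3, steps=40, sigma=0.2, seed=0):
    """Greedy stimulus search: propose a nearby point in the stimulus
    feature space, keep it only if the measured response improves.
    `respond` stands in for a (noisy) neural response estimate."""
    rng = random.Random(seed)
    x = [0.0] * dim           # starting stimulus coordinates
    best = respond(x)
    for _ in range(steps):
        cand = [xi + rng.gauss(0, sigma) for xi in x]
        r = respond(cand)
        if r > best:          # accept only improvements
            x, best = cand, r
    return x, best

# Toy response peaked at (1, 1, 1); a real experiment would query the scanner
peak = [1.0, 1.0, 1.0]
resp = lambda x: -sum((xi - pi) ** 2 for xi, pi in zip(x, peak))
x, best = realtime_search(resp)
```

    A real protocol must also handle display delays, head motion, and hemodynamic lag, which is precisely the robustness the abstract evaluates.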

  9. An effective visualization technique for depth perception in augmented reality-based surgical navigation.

    PubMed

    Choi, Hyunseok; Cho, Byunghyun; Masamune, Ken; Hashizume, Makoto; Hong, Jaesung

    2016-03-01

    Depth perception is a major issue in augmented reality (AR)-based surgical navigation. We propose an AR and virtual reality (VR) switchable visualization system with distance information, and evaluate its performance in a surgical navigation set-up. To improve depth perception, seamless switching from AR to VR was implemented. In addition, the minimum distance between the tip of the surgical tool and the nearest organ was provided in real time. To evaluate the proposed techniques, five physicians and 20 non-medical volunteers participated in experiments. Targeting error, time taken, and numbers of collisions were measured in simulation experiments. There was a statistically significant difference between a simple AR technique and the proposed technique. We confirmed that depth perception in AR could be improved by the proposed seamless switching between AR and VR, and providing an indication of the minimum distance also facilitated the surgical tasks. Copyright © 2015 John Wiley & Sons, Ltd.
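    The minimum tool-to-organ distance described above is straightforward once the organ surface is sampled; the sketch below uses a brute-force nearest-point search with invented coordinates (a real-time system would likely use a spatial index such as a k-d tree):

```python
import math

def min_distance(tip, surface_points):
    """Minimum Euclidean distance from the tool tip to a sampled
    organ surface (brute force over the sampled vertices)."""
    return min(math.dist(tip, p) for p in surface_points)

# Hypothetical tip position and a few sampled surface vertices (mm)
tip = (10.0, 5.0, 3.0)
surface = [(12.0, 5.0, 3.0), (10.0, 9.0, 3.0), (10.0, 5.0, 8.0)]
d = min_distance(tip, surface)
print(f"nearest organ point: {d:.1f} mm")  # 2.0 mm
```

    Displaying this scalar alongside the AR/VR view is what the abstract reports as facilitating the surgical tasks.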

  10. Aesthetic valence of visual illusions

    PubMed Central

    Stevanov, Jasmina; Marković, Slobodan; Kitaoka, Akiyoshi

    2012-01-01

    Visual illusions constitute an interesting perceptual phenomenon, but they also have an aesthetic and affective dimension. We hypothesized that the illusive nature itself causes the increased aesthetic and affective valence of illusions compared with their non-illusory counterparts. We created pairs of stimuli. One qualified as a standard visual illusion whereas the other one did not, although they were matched in as many perceptual dimensions as possible. The phenomenal quality of being an illusion had significant effects on “Aesthetic Experience” (fascinating, irresistible, exceptional, etc), “Evaluation” (pleasant, cheerful, clear, bright, etc), “Arousal” (interesting, imaginative, complex, diverse, etc), and “Regularity” (balanced, coherent, clear, realistic, etc). A subsequent multiple regression analysis suggested that Arousal was a better predictor of Aesthetic Experience than Evaluation. The findings of this study demonstrate that illusion is a phenomenal quality of the percept which has measurable aesthetic and affective valence. PMID:23145272

  11. Visual presentation of a medical physiology seminar modifies dental students' perception of its clinical significance.

    PubMed

    Vuletic, L; Spalj, S; Peros, K

    2016-02-01

    The primary objective of this study was to assess whether exposing dental students to visual stimuli related to the dental profession during a medical physiology seminar could affect their perception of the clinical relevance of the topic. A self-administered questionnaire on attitudes towards medical physiology was conducted amongst 105 students of the School of Dental Medicine in Zagreb, Croatia, aged 19-24 years (80% females), following a seminar on respiratory system physiology. The PowerPoint presentation accompanying the seminar for a total of 52 students (study group) was enriched with pictures related to dental practice in order to assess whether these pictures could make the topic appear more clinically relevant for a future dentist. The results of the survey indicated that dental students in the study group perceived the topic of the seminar as more important for them as future dentists when compared to the perception of the control group (P = 0.025). The results of this survey encourage physiology lecturers to present medical physiology as clinically relevant for dental students whenever possible, as this could increase students' interest in the subject and their motivation for learning. Such an approach could be particularly beneficial if there is a significant time gap between basic courses and the involvement of students in clinical training, for it could promote meaningful learning. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. Experience affects the use of ego-motion signals during 3D shape perception

    PubMed Central

    Jain, Anshul; Backus, Benjamin T.

    2011-01-01

    Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the “stationarity prior,” is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers’ stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity. PMID:21191132

  13. Experience affects the use of ego-motion signals during 3D shape perception.

    PubMed

    Jain, Anshul; Backus, Benjamin T

    2010-12-29

    Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the "stationarity prior," is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers' stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity.
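    The two records above frame the training effect as updating a Bayesian prior for stationarity, without specifying the model. A schematic beta-Bernoulli sketch (an assumption for illustration, not the authors' model) shows how repeated "moving" evidence would shift such a prior:

```python
def update_stationarity_prior(alpha, beta, observations):
    """Beta-Bernoulli update: each trial in which other cues mark the
    object as stationary (1) or moving (0) shifts the Beta prior."""
    for obs in observations:
        if obs:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

# Start with a prior favouring stationarity, then train with 'moving' trials
alpha, beta = update_stationarity_prior(8, 2, [0] * 10)
p_stationary = alpha / (alpha + beta)  # posterior mean drops from 0.8 to 0.4
```

    Location-specific priors, as in the abstract's second experiment, would amount to maintaining separate (alpha, beta) pairs per visual-field location.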

  14. Identification of Visual Cues and Quantification of Drivers' Perception of Proximity Risk to the Lead Vehicle in Car-Following Situations

    NASA Astrophysics Data System (ADS)

    Kondoh, Takayuki; Yamamura, Tomohiro; Kitazaki, Satoshi; Kuge, Nobuyuki; Boer, Erwin Roeland

    Longitudinal vehicle control and/or warning technologies that operate in accordance with drivers' subjective perception of risk need to be developed for driver-support systems, if such systems are to be used fully to achieve safer, more comfortable driving. In order to accomplish this goal, it is necessary to identify the visual cues utilized by drivers in their perception of risk when closing on the vehicle ahead in a car-following situation. It is also necessary to quantify the relation between the physical parameters defining the spatial relationship to the vehicle ahead and psychological metrics with regard to the risk perceived by the driver. This paper presents the results of an empirical study on the quantification and formulation of drivers' subjective perception of risk based on experiments performed with a fixed-base driving simulator at the Nissan Research Center. Experiments were carried out to investigate the subjective perception of risk relative to the headway distance and closing velocity to the vehicle ahead using the magnitude estimation method. The experimental results showed that drivers' perception of risk was strongly affected by two variables: time headway, i.e., the distance to the lead vehicle divided by the following vehicle's velocity, and time to collision, i.e., the distance to the lead vehicle divided by relative velocity. It was also found that an equation for estimating drivers' perception of risk can be formulated as the summation of the time headway inverse and the time to collision inverse and that this expression can be applied to various approaching situations. Furthermore, the validity of this equation was examined based on real-world driver behavior data measured with an instrumented vehicle.
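    The risk equation described in the abstract, the sum of the inverse time headway and the inverse time to collision, can be sketched directly (the unit weights below are illustrative; the paper's fitted coefficients are not given here):

```python
def perceived_risk(distance, own_speed, relative_speed, a=1.0, b=1.0):
    """Risk-feeling estimate as a weighted sum of inverse time headway
    (THW = distance / own_speed) and inverse time to collision
    (TTC = distance / relative_speed). relative_speed > 0 means the
    gap to the lead vehicle is closing."""
    thw = distance / own_speed
    risk = a / thw
    if relative_speed > 0:          # TTC undefined when not closing
        ttc = distance / relative_speed
        risk += b / ttc
    return risk

# 30 m gap at 20 m/s, closing at 5 m/s
r = perceived_risk(30.0, 20.0, 5.0)
```

    Note how either a short headway or a fast closing rate alone can drive the estimate up, matching the abstract's claim that the expression applies across approaching situations.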

  15. The Use of Virtual Reality in Psychology: A Case Study in Visual Perception

    PubMed Central

    Wilson, Christopher J.; Soranzo, Alessandro

    2015-01-01

    Recent proliferation of available virtual reality (VR) tools has seen increased use in psychological research. This is due to a number of advantages afforded over traditional experimental apparatus such as tighter control of the environment and the possibility of creating more ecologically valid stimulus presentation and response protocols. At the same time, higher levels of immersion and visual fidelity afforded by VR do not necessarily evoke presence or elicit a “realistic” psychological response. The current paper reviews some current uses for VR environments in psychological research and discusses some ongoing questions for researchers. Finally, we focus on the area of visual perception, where both the advantages and challenges of VR are particularly salient. PMID:26339281

  16. Predicting perceived visual complexity of abstract patterns using computational measures: The influence of mirror symmetry on complexity perception

    PubMed Central

    Leder, Helmut

    2017-01-01

    Visual complexity is relevant for many areas ranging from improving usability of technical displays or websites up to understanding aesthetic experiences. Therefore, many attempts have been made to relate objective properties of images to perceived complexity in artworks and other images. It has been argued that visual complexity is a multidimensional construct mainly consisting of two dimensions: A quantitative dimension that increases complexity through number of elements, and a structural dimension representing order negatively related to complexity. The objective of this work is to study human perception of visual complexity utilizing two large independent sets of abstract patterns. A wide range of computational measures of complexity was calculated, further combined using linear models as well as machine learning (random forests), and compared with data from human evaluations. Our results confirm the adequacy of existing two-factor models of perceived visual complexity consisting of a quantitative and a structural factor (in our case mirror symmetry) for both of our stimulus sets. In addition, a non-linear transformation of mirror symmetry giving more influence to small deviations from symmetry greatly increased explained variance. Thus, we again demonstrate the multidimensional nature of human complexity perception and present comprehensive quantitative models of the visual complexity of abstract patterns, which might be useful for future experiments and applications. PMID:29099832
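
    As an illustration of the two-factor idea (not the study's actual pipeline), the sketch below scores a binary pattern with a quantitative factor (element count) and a structural factor (horizontal mirror symmetry), applying a concave transform so that small deviations from perfect symmetry carry disproportionate weight, in the spirit of the non-linear transformation the abstract describes. The weights and the exponent are invented for the example.

```python
def mirror_symmetry(pattern):
    """Fraction of cells equal to their horizontal mirror counterpart.

    `pattern` is a list of rows of 0/1 values; 1.0 means perfectly
    mirror-symmetric about the vertical axis.
    """
    matches = sum(
        row[i] == row[-1 - i] for row in pattern for i in range(len(row))
    )
    total = sum(len(row) for row in pattern)
    return matches / total

def predicted_complexity(pattern, w_quantity=1.0, w_symmetry=2.0, gamma=0.5):
    """Two-factor complexity score: quantity plus transformed asymmetry."""
    quantity = sum(sum(row) for row in pattern)  # quantitative factor
    # Concave transform (gamma < 1): small departures from perfect
    # symmetry produce a disproportionately large asymmetry score.
    asymmetry = (1.0 - mirror_symmetry(pattern)) ** gamma
    return w_quantity * quantity + w_symmetry * asymmetry

symmetric = [[1, 0, 1], [0, 1, 0]]    # mirror_symmetry == 1.0
asymmetric = [[1, 0, 0], [0, 1, 0]]   # mirror_symmetry < 1.0
```

    In the study itself, many such computational measures were combined with linear models and random forests; this sketch only shows how a quantitative and a structural factor can enter one score.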

  17. Visual perceptions of male obesity: a cross-cultural study examining male and female lay perceptions of obesity in Caucasian males.

    PubMed

    Robinson, Eric; Hogenkamp, Pleunie S

    2015-05-16

    Obesity is now common and this may have altered visual perceptions of what constitutes a 'normal' and therefore healthy weight. The present study examined cross-cultural differences in male and female participants' ability to visually identify the weight status of photographed Caucasian males. Five hundred and fifty-three male and female young adults from the US (high obesity prevalence), UK and Sweden (lower obesity prevalence) participated in an online study. Participants judged the weight status of a series of photographed healthy weight, overweight and obese (class I) Caucasian males and rated the extent to which they believed each male should consider losing weight. There was a strong tendency for both male and female participants to underestimate the weight status of the photographed overweight and obese males. Photographed males were frequently perceived as being of healthier weight than they actually were. Some modest cross-cultural differences were also observed; US participants were worse at recognising obesity than UK participants (p < 0.05) and were also significantly more likely to believe that the photographed obese males did not need to consider losing weight, in comparison to both the UK and Swedish participants (ps < 0.05). No cross-cultural differences were observed for perceptions of, or attitudes towards, the photographed healthy weight or overweight males. The weight status of overweight and obese (class I) Caucasian males is underestimated when judged by males and females using visual information alone. This study provides initial evidence of modest cross-cultural differences in attitudes toward, and the ability to recognise, obesity in Caucasian males.

  18. A neural basis for the spatial suppression of visual motion perception

    PubMed Central

    Liu, Liu D; Haefner, Ralf M; Pack, Christopher C

    2016-01-01

    In theory, sensory perception should be more accurate when more neurons contribute to the representation of a stimulus. However, psychophysical experiments that use larger stimuli to activate larger pools of neurons sometimes report impoverished perceptual performance. To determine the neural mechanisms underlying these paradoxical findings, we trained monkeys to discriminate the direction of motion of visual stimuli that varied in size across trials, while simultaneously recording from populations of motion-sensitive neurons in cortical area MT. We used the resulting data to constrain a computational model that explained the behavioral data as an interaction of three main mechanisms: noise correlations, which prevented stimulus information from growing with stimulus size; neural surround suppression, which decreased sensitivity for large stimuli; and a read-out strategy that emphasized neurons with receptive fields near the stimulus center. These results suggest that paradoxical percepts reflect tradeoffs between sensitivity and noise in neuronal populations. DOI: http://dx.doi.org/10.7554/eLife.16167.001 PMID:27228283

  19. Fornix and medial temporal lobe lesions lead to comparable deficits in complex visual perception.

    PubMed

    Lech, Robert K; Koch, Benno; Schwarz, Michael; Suchan, Boris

    2016-05-04

    Recent research dealing with the structures of the medial temporal lobe (MTL) has shifted away from exclusively investigating memory-related processes and has repeatedly incorporated the investigation of complex visual perception. Several studies have demonstrated that higher level visual tasks can recruit structures like the hippocampus and perirhinal cortex in order to successfully perform complex visual discriminations, leading to a perceptual-mnemonic or representational view of the medial temporal lobe. The current study employed a complex visual discrimination paradigm in two patients suffering from brain lesions with differing locations and origin. Both patients, one with extensive medial temporal lobe lesions (VG) and one with a small lesion of the anterior fornix (HJK), were impaired in complex discriminations while showing otherwise mostly intact cognitive functions. The current data confirmed previous results while also extending the perceptual-mnemonic theory of the MTL to the main output structure of the hippocampus, the fornix. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Camouflage and visual perception

    PubMed Central

    Troscianko, Tom; Benton, Christopher P.; Lovell, P. George; Tolhurst, David J.; Pizlo, Zygmunt

    2008-01-01

    How does an animal conceal itself from visual detection by other animals? This review paper seeks to identify general principles that may apply in this broad area. It considers mechanisms of visual encoding, of grouping and object encoding, and of search. In most cases, the evidence base comes from studies of humans or species whose vision approximates to that of humans. The effort is hampered by a relatively sparse literature on visual function in natural environments and with complex foraging tasks. However, some general constraints emerge as being potentially powerful principles in understanding concealment—a ‘constraint’ here means a set of simplifying assumptions. Strategies that disrupt the unambiguous encoding of discontinuities of intensity (edges), and of other key visual attributes, such as motion, are key here. Similar strategies may also defeat grouping and object-encoding mechanisms. Finally, the paper considers how we may understand the processes of search for complex targets in complex scenes. The aim is to provide a number of pointers towards issues, which may be of assistance in understanding camouflage and concealment, particularly with reference to how visual systems can detect the shape of complex, concealed objects. PMID:18990671

  1. The Relationship of Motor Coordination, Visual Perception, and Executive Function to the Development of 4–6-Year-Old Chinese Preschoolers' Visual Motor Integration Skills

    PubMed Central

    Fang, Ying; Zhang, Ying

    2017-01-01

    Visual motor integration (VMI) is a vital ability in childhood development, which is associated with the performance of many functional skills. Using the Beery Developmental Test Package and Executive Function Tasks, the present study explored VMI development and its factors (visual perception, motor coordination, and executive function) among 151 Chinese preschoolers aged 4 to 6 years. Results indicated that children's VMI skills increased quickly at 4 years, peaked at 5 years, and decreased between 5 and 6 years. Motor coordination and cognitive flexibility were related to VMI development across the 4-6-year range. Visual perception was associated with VMI development early in the fourth year, and inhibitory control was also associated with it among 4-year-olds and children at the beginning of age 5. Working memory had no impact on VMI. In conclusion, the development of VMI skills among preschool children in this study was not stable but changed dynamically, and the factors influencing VMI operated in different age ranges. These findings may offer guidance to researchers and health professionals on improving children's VMI skills in early childhood. PMID:29457030

  2. How to Make a Good Animation: A Grounded Cognition Model of How Visual Representation Design Affects the Construction of Abstract Physics Knowledge

    ERIC Educational Resources Information Center

    Chen, Zhongzhou; Gladding, Gary

    2014-01-01

    Visual representations play a critical role in teaching physics. However, since we do not have a satisfactory understanding of how visual perception impacts the construction of abstract knowledge, most visual representations used in instructions are either created based on existing conventions or designed according to the instructor's intuition,…

  3. Attention Increases Spike Count Correlations between Visual Cortical Areas.

    PubMed

    Ruff, Douglas A; Cohen, Marlene R

    2016-07-13

    Visual attention, which improves perception of attended locations or objects, has long been known to affect many aspects of the responses of neuronal populations in visual cortex. There are two nonmutually exclusive hypotheses concerning the neuronal mechanisms that underlie these perceptual improvements. The first hypothesis, that attention improves the information encoded by a population of neurons in a particular cortical area, has considerable physiological support. The second hypothesis is that attention improves perception by selectively communicating relevant visual information. This idea has been tested primarily by measuring interactions between neurons on very short timescales, which are mathematically nearly independent of neuronal interactions on longer timescales. We tested the hypothesis that attention changes the way visual information is communicated between cortical areas on longer timescales by recording simultaneously from neurons in primary visual cortex (V1) and the middle temporal area (MT) in rhesus monkeys. We used two independent and complementary approaches. Our correlative experiment showed that attention increases the trial-to-trial response variability that is shared between the two areas. In our causal experiment, we electrically microstimulated V1 and found that attention increased the effect of stimulation on MT responses. Together, our results suggest that attention affects both the way visual stimuli are encoded within a cortical area and the extent to which visual information is communicated between areas on behaviorally relevant timescales. Visual attention dramatically improves the perception of attended stimuli. Attention has long been thought to act by selecting relevant visual information for further processing. It has been hypothesized that this selection is accomplished by increasing communication between neurons that encode attended information in different cortical areas. 
We recorded simultaneously from neurons in primary

  4. Attention Increases Spike Count Correlations between Visual Cortical Areas

    PubMed Central

    Cohen, Marlene R.

    2016-01-01

    Visual attention, which improves perception of attended locations or objects, has long been known to affect many aspects of the responses of neuronal populations in visual cortex. There are two nonmutually exclusive hypotheses concerning the neuronal mechanisms that underlie these perceptual improvements. The first hypothesis, that attention improves the information encoded by a population of neurons in a particular cortical area, has considerable physiological support. The second hypothesis is that attention improves perception by selectively communicating relevant visual information. This idea has been tested primarily by measuring interactions between neurons on very short timescales, which are mathematically nearly independent of neuronal interactions on longer timescales. We tested the hypothesis that attention changes the way visual information is communicated between cortical areas on longer timescales by recording simultaneously from neurons in primary visual cortex (V1) and the middle temporal area (MT) in rhesus monkeys. We used two independent and complementary approaches. Our correlative experiment showed that attention increases the trial-to-trial response variability that is shared between the two areas. In our causal experiment, we electrically microstimulated V1 and found that attention increased the effect of stimulation on MT responses. Together, our results suggest that attention affects both the way visual stimuli are encoded within a cortical area and the extent to which visual information is communicated between areas on behaviorally relevant timescales. SIGNIFICANCE STATEMENT Visual attention dramatically improves the perception of attended stimuli. Attention has long been thought to act by selecting relevant visual information for further processing. It has been hypothesized that this selection is accomplished by increasing communication between neurons that encode attended information in different cortical areas. 
We recorded simultaneously

  5. Enhanced Sensitivity to Subphonemic Segments in Dyslexia: A New Instance of Allophonic Perception

    PubMed Central

    Serniclaes, Willy; Seck, M’ballo

    2018-01-01

    Although dyslexia can be individuated in many different ways, it has only three discernable sources: a visual deficit that affects the perception of letters, a phonological deficit that affects the perception of speech sounds, and an audio-visual deficit that disturbs the association of letters with speech sounds. However, the very nature of each of these core deficits remains debatable. The phonological deficit in dyslexia, which is generally attributed to a deficit of phonological awareness, might result from a specific mode of speech perception characterized by the use of allophonic (i.e., subphonemic) units. Here we will summarize the available evidence and present new data in support of the “allophonic theory” of dyslexia. Previous studies have shown that the dyslexia deficit in the categorical perception of phonemic features (e.g., the voicing contrast between /t/ and /d/) is due to the enhanced sensitivity to allophonic features (e.g., the difference between two variants of /d/). Another consequence of allophonic perception is that it should also give rise to an enhanced sensitivity to allophonic segments, such as those that take place within a consonant cluster. This latter prediction is validated by the data presented in this paper. PMID:29587419

  6. Evidence for distinct mechanisms underlying attentional priming and sensory memory for bistable perception.

    PubMed

    Brinkhuis, M A B; Kristjánsson, Á; Brascamp, J W

    2015-08-01

    Attentional selection in visual search paradigms and perceptual selection in bistable perception paradigms show functional similarities. For example, both are sensitive to trial history: They are biased toward previously selected targets or interpretations. We investigated whether priming by target selection in visual search and sensory memory for bistable perception are related. We did this by presenting two trial types to observers. We presented either ambiguous spheres that rotated over a central axis and could be perceived as rotating in one of two directions, or search displays in which the unambiguously rotating target and distractor spheres closely resembled the two possible interpretations of the ambiguous stimulus. We interleaved both trial types within experiments, to see whether priming by target selection during search trials would affect the perceptual outcome of bistable perception and, conversely, whether sensory memory during bistable perception would affect target selection times during search. Whereas we found intertrial repetition effects among consecutive search trials and among consecutive bistable trials, we did not find cross-paradigm effects. Thus, even though we could ascertain that our experiments robustly elicited processes of both search priming and sensory memory for bistable perception, these same experiments revealed no interaction between the two.

  7. Perceptions of submissiveness: implications for victimization.

    PubMed

    Richards, L; Rollerson, B; Phillips, J

    1991-07-01

    Some researchers have suggested that a precondition of affective submissiveness may increase the likelihood of female victimization in sexual assault, whereas others have suggested that criminal offenders use perceptions of vulnerability when selecting a victim. In this study, based on American college students, men (decoders) rated videotaped women (encoders) dominant versus submissive using a semantic differential instrument. Cue evaluators analyzed the body language and appearance of the videotaped women using a Likert instrument. The results suggest that (a) men form differentiated perceptions of dominant versus submissive women, (b) such perceptions substantially rely on nonverbal cues, (c) dominant and submissive women display visually different behaviors and appearances, and (d) men tend to select submissive females for exploitation.

  8. Voluntarily controlled but not merely observed visual feedback affects postural sway

    PubMed Central

    Asai, Tomohisa; Hiromitsu, Kentaro; Imamizu, Hiroshi

    2018-01-01

    Online stabilization of human standing posture utilizes multisensory afferences (e.g., vision). Whereas visual feedback of spontaneous postural sway can stabilize postural control especially when observers concentrate on their body and intend to minimize postural sway, the effect of intentional control of visual feedback on postural sway itself remains unclear. This study assessed quiet standing posture in healthy adults voluntarily controlling or merely observing visual feedback. The visual feedback (moving square) had either low or high gain and was either horizontally flipped or not. Participants in the voluntary-control group were instructed to minimize their postural sway while voluntarily controlling visual feedback, whereas those in the observation group were instructed to minimize their postural sway while merely observing visual feedback. As a result, magnified and flipped visual feedback increased postural sway only in the voluntary-control group. Furthermore, regardless of the instructions and feedback manipulations, the experienced sense of control over visual feedback positively correlated with the magnitude of postural sway. We suggest that voluntarily controlled, but not merely observed, visual feedback is incorporated into the feedback control system for posture and begins to affect postural sway. PMID:29682421

  9. Visual Perception and Frontal Lobe in Intellectual Disabilities: A Study with Evoked Potentials and Neuropsychology

    ERIC Educational Resources Information Center

    Munoz-Ruata, J.; Caro-Martinez, E.; Perez, L. Martinez; Borja, M.

    2010-01-01

    Background: Perception disorders are frequently observed in persons with intellectual disability (ID) and their influence on cognition has been discussed. The objective of this study is to clarify the mechanisms behind these alterations by analysing the visual event related potentials early component, the N1 wave, which is related to perception…

  10. Perception of visual apparent motion is modulated by a gap within concurrent auditory glides, even when it is illusory

    PubMed Central

    Wang, Qingcui; Guo, Lu; Bao, Ming; Chen, Lihan

    2015-01-01

    Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether/how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed auditory gap transfer illusion with visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: ‘element motion’ (EM) or ‘group motion’ (GM). In “EM,” the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; while in “GM,” both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides intercede at the temporal middle point. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide imposed a remarkable effect on separating visual events, and led to a dominant perception of GM as well. The auditory configuration with gap transfer illusion triggered the same auditory capture effect. Further investigations showed that visual interval which coincided with the gap interval (50–230 ms) in the long glide was perceived to be shorter than that within both the short glide and the ‘gap-transfer’ auditory configurations in the same physical intervals (gaps). The results indicated that auditory temporal perceptual grouping takes priority over the cross-modal interaction in determining the final readout of the visual perception, and the mechanism of selective attention on auditory events also plays a role. PMID:26042055

  11. Perception of visual apparent motion is modulated by a gap within concurrent auditory glides, even when it is illusory.

    PubMed

    Wang, Qingcui; Guo, Lu; Bao, Ming; Chen, Lihan

    2015-01-01

    Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether/how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed auditory gap transfer illusion with visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: 'element motion' (EM) or 'group motion' (GM). In "EM," the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; while in "GM," both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides intercede at the temporal middle point. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide imposed a remarkable effect on separating visual events, and led to a dominant perception of GM as well. The auditory configuration with gap transfer illusion triggered the same auditory capture effect. Further investigations showed that visual interval which coincided with the gap interval (50-230 ms) in the long glide was perceived to be shorter than that within both the short glide and the 'gap-transfer' auditory configurations in the same physical intervals (gaps). The results indicated that auditory temporal perceptual grouping takes priority over the cross-modal interaction in determining the final readout of the visual perception, and the mechanism of selective attention on auditory events also plays a role.

  12. How and what do autistic children see? Emotional, perceptive and social peculiarities reflected in more recent examinations of the visual perception and the process of observation in autistic children.

    PubMed

    Dalferth, M

    1989-01-01

    Autistic symptoms become apparent at the earliest during the 2nd-3rd month of life, when spontaneous registration of the meaning of specific visual stimuli (the eyes, the configuration of the mother's face) does not occur, and when facial expressions and gestures repeatedly shown by an interaction partner can neither evoke a social smile nor stimulate anticipatory behaviour. Even with increasing age, empathetic perception of feelings from the corresponding facial and gestural expressions remains very difficult, and autistic children themselves are only insufficiently able to express their own feelings intelligibly to others. Since facial expressions and gestures are perceived visually, the autistic child's perceptual competence is of great importance. On the basis of examinations of visual perception (retinal pathology, tunnel vision), perceptual processing (recognition of feelings, sex, and age), and the disintegration of multimodal stimuli, it can be presumed that social and emotional deficits are connected with a deviant perceptual interpretation of the world and irregular processing arising from a neurobiological handicap (the absence of a genetically determined reference system for emotionally significant stimuli), which can have various causes (cf. Gillberg 1988) and also impedes the adequate expression of feelings in facial expression, gesture, and voice. Autistic people see, experience, and understand the world in a specific way, by which they differ from non-handicapped people. (ABSTRACT TRUNCATED AT 250 WORDS)

  13. Understanding Visual Perception

    NASA Technical Reports Server (NTRS)

    2003-01-01

    One concern about human adaptation to space is how returning from the microgravity of orbit to Earth can affect an astronaut's ability to fly safely. Monitors and infrared video cameras measure eye movements without interfering with the crew member. A computer screen provides moving images that the eye tracks while the brain determines what it is seeing, and a video camera records the movement of the subject's eyes. Researchers can then correlate perception and response. Test subjects perceive different images when a moving object is covered by a mask that is visible or invisible. Early results challenge the accepted theory that smooth pursuit -- the fluid eye movement that humans and primates have -- does not involve the higher brain. NASA results show that eye movement can predict human perceptual performance, that smooth pursuit and saccadic (quick or ballistic) movement share some signal pathways, and that common factors can make both smooth pursuit and visual perception produce errors in motor responses.

  14. Real-world spatial regularities affect visual working memory for objects.

    PubMed

    Kaiser, Daniel; Stein, Timo; Peelen, Marius V

    2015-12-01

    Traditional memory research has focused on measuring and modeling the capacity of visual working memory for simple stimuli such as geometric shapes or colored disks. Although these studies have provided important insights, it is unclear how their findings apply to memory for more naturalistic stimuli. An important aspect of real-world scenes is that they contain a high degree of regularity: For instance, lamps appear above tables, not below them. In the present study, we tested whether such real-world spatial regularities affect working memory capacity for individual objects. Using a delayed change-detection task with concurrent verbal suppression, we found enhanced visual working memory performance for objects positioned according to real-world regularities, as compared to irregularly positioned objects. This effect was specific to upright stimuli, indicating that it did not reflect low-level grouping, because low-level grouping would be expected to equally affect memory for upright and inverted displays. These results suggest that objects can be held in visual working memory more efficiently when they are positioned according to frequently experienced real-world regularities. We interpret this effect as the grouping of single objects into larger representational units.

  15. Affective Overload: The Effect of Emotive Visual Stimuli on Target Vocabulary Retrieval.

    PubMed

    Çetin, Yakup; Griffiths, Carol; Özel, Zeynep Ebrar Yetkiner; Kinay, Hüseyin

    2016-04-01

    There has been considerable interest in cognitive load in recent years, but the effect of affective load and its relationship to mental functioning has not received as much attention. In order to investigate the effects of affective stimuli on cognitive function, as manifest in the ability to remember foreign language vocabulary, two groups of student volunteers (N = 64) aged 17 to 25 years were shown a PowerPoint presentation of 21 target language words with a picture, audio, and written form for every word. The vocabulary was presented in comfortable rooms with padded chairs, and the participants were provided with snacks so that they would be comfortable and relaxed. After the PowerPoint presentation they were exposed to one of two forms of visual stimuli for 27 min. The different formats contained either visually affective content (sexually suggestive, violent, or frightening material) or neutral content (a nature documentary). The group exposed to the emotive visual stimuli remembered significantly fewer words than the group that watched the emotively neutral nature documentary. Implications of this finding are discussed and suggestions made for ongoing research.

  16. Early visual responses predict conscious face perception within and between subjects during binocular rivalry

    PubMed Central

    Sandberg, Kristian; Bahrami, Bahador; Kanai, Ryota; Barnes, Gareth Robert; Overgaard, Morten; Rees, Geraint

    2014-01-01

    Previous studies indicate that conscious face perception may be related to neural activity in a large time window around 170-800ms after stimulus presentation, yet in the majority of these studies changes in conscious experience are confounded with changes in physical stimulation. Using multivariate classification on MEG data recorded when participants reported changes in conscious perception evoked by binocular rivalry between a face and a grating, we showed that only MEG signals in the 120-320ms time range, peaking at the M170 around 180ms and the P2m at around 260ms, reliably predicted conscious experience. Conscious perception could not only be decoded significantly better than chance from the sensors that showed the largest average difference, as previous studies suggest, but also from patterns of activity across groups of occipital sensors that individually were unable to predict perception better than chance. Additionally, source space analyses showed that sources in the early and late visual system predicted conscious perception more accurately than frontal and parietal sites, although conscious perception could also be decoded there. Finally, the patterns of neural activity associated with conscious face perception generalized from one participant to another around the times of maximum prediction accuracy. Our work thus demonstrates that the neural correlates of particular conscious contents (here, faces) are highly consistent in time and space within individuals and that these correlates are shared to some extent between individuals. PMID:23281780

  17. Validity of the growth model of the 'computerized visual perception assessment tool for Chinese characters structures'.

    PubMed

    Wu, Huey-Min; Li, Cheng-Hsaun; Kuo, Bor-Chen; Yang, Yu-Mao; Lin, Chin-Kai; Wan, Wei-Hsiang

    2017-08-01

    Morphological awareness is the foundation for the important developmental skills involved with vocabulary, as well as understanding the meaning of words, orthographic knowledge, reading, and writing. Visual perception of the spatial arrangement of radicals within the two-dimensional structure of Chinese characters is very important in identifying Chinese characters. The important predictive variables of spatial and visual perception in Chinese character identification were investigated with a growth model in this research. The assessment tool is the "Computerized Visual Perception Assessment Tool for Chinese Characters Structures" developed by this study. There are two constructs: basic stroke and character structure. In the basic stroke construct, there are three subtests of one, two, and more than three strokes. In the character structure construct, there are three subtests of single-component characters, horizontal-compound characters, and vertical-compound characters. This study used purposive sampling. In the first year, 551 children aged 4-6 years participated in the study and were monitored for one year. In the second year, 388 children remained in the study; the successful follow-up rate was 70.4%. This study used a two-wave cross-lagged panel design to validate the growth model of the basic stroke and the character structure. There was significant correlation between the basic stroke and the character structure at different time points. The abilities in the basic stroke and in the character structure developed steadily over time for preschool children. Children's knowledge of the basic stroke effectively predicted their later knowledge of the basic stroke and the character structure. Copyright © 2017 Elsevier Ltd. All rights reserved.
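    The two-wave cross-lagged panel logic above can be sketched numerically. The simulation below is purely illustrative and is not the study's data: the sample size matches the reported second wave, but all path coefficients and the latent-ability structure are invented assumptions. The cross-lagged question is whether the time-1 basic-stroke score predicts the time-2 character-structure score over and above the time-1 character-structure score.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 388  # children retained at the second wave (per the abstract)

    # Hypothetical data-generating process: a latent ability drives both
    # constructs, and basic stroke at time 1 carries forward into both
    # time-2 scores (a cross-lagged structure).
    ability = rng.normal(size=n)
    basic_t1 = ability + 0.5 * rng.normal(size=n)
    struct_t1 = 0.8 * ability + 0.6 * rng.normal(size=n)
    struct_t2 = 0.5 * basic_t1 + 0.4 * struct_t1 + 0.5 * rng.normal(size=n)

    def standardized_betas(y, *xs):
        """OLS coefficients after z-scoring outcome and predictors."""
        z = lambda v: (v - v.mean()) / v.std()
        X = np.column_stack([z(x) for x in xs])
        return np.linalg.lstsq(X, z(y), rcond=None)[0]

    # Cross-lagged path: time-1 basic stroke predicting time-2 structure,
    # controlling for time-1 structure (the autoregressive path).
    betas = standardized_betas(struct_t2, basic_t1, struct_t1)
    print(betas)  # [cross-lagged beta, autoregressive beta], both positive here
    ```

    A full analysis would estimate all four paths (both autoregressive and both cross-lagged) simultaneously in a structural equation model; the regression here shows only the single path that carries the abstract's central claim.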

  18. Visual perception and consciousness in dermatopathology: mechanisms of figure-ground segregation account for errors in diagnosis.

    PubMed

    Böer, Almut

    2009-02-01

    Visual perception has been the object of research in psychology for almost a century. Little has been written, however, about the effects of perceptual phenomena on methods in medicine that rely on interpretation of two-dimensional images for diagnosis. Starting from the work of Edgar Rubin at the beginning of the last century, this article summarizes the observations of psychologists who investigated the mechanisms of so-called "figure-ground segregation." These unconscious mechanisms follow rules that explain why certain structures are perceived consciously as a figure, whereas other structures surrounding such a figure are neglected and not perceived consciously in detail. Perception of a structure as a figure can be due to, for example, a convex shape of its contour, proximity of lines around it, closed contours, a simple shape, and attribution of meaning to a structure. In examples from the practice of dermatopathology, these unconscious mechanisms of figure-ground segregation are shown to be relevant to the diagnosis of sections of tissue. The mechanisms help to explain why, for example, ill-defined and concave-shaped structures, stromal differences of neoplasms, interstitial infiltrates and deposits, and simulators of common diseases are often difficult to recognize at first sight. Teachers of dermatopathology need to be aware of these unconscious mechanisms of visual perception because they explain why novices struggle with certain diagnoses and differential diagnoses. Proper instruction about these phenomena, early in the process of training, will prevent students from being frustrated by misperceptions.

  19. The Conductor As Visual Guide: Gesture and Perception of Musical Content

    PubMed Central

    Kumar, Anita B.; Morrison, Steven J.

    2016-01-01

    Ensemble conductors are often described as embodying the music. Researchers have determined that expressive gestures affect viewers’ perceptions of conducted ensemble performances. This effect may be due, in part, to conductor gesture delineating and amplifying specific expressive aspects of music performances. The purpose of the present study was to determine whether conductor gesture affected observers’ focus of attention on contrasting aspects of ensemble performances. Audio recordings of two different music excerpts featuring two-part counterpoint (an ostinato paired with a lyric melody, and long chord tones paired with rhythmic interjections) were paired with video of two conductors. Each conductor used gesture appropriate to one or the other musical element (e.g., connected and flowing, or detached and crisp) for a total of 16 videos. Musician participants evaluated 8 of the excerpts for Articulation, Rhythm, Style, and Phrasing using four 10-point differential scales anchored by descriptive terms (e.g., disconnected to connected, angular to flowing). Results indicated a relationship between gesture and listeners’ evaluations of musical content. Listeners appear to be sensitive to the manner in which a conductor’s gesture delineates musical lines, particularly as an indication of overall articulation and style. This effect was observed for the lyric melody and ostinato excerpt, but not for the chords and interjections excerpt. The effect therefore appears to be mitigated by the congruence of gesture with preconceptions about the importance of melodic over rhythmic material, of certain instrument timbres over others, and of the length between onsets of active material. These results add to a body of literature that supports the importance of the visual component in the multimodal experience of music performance. PMID:27458425

  20. Training haptic stiffness discrimination: time course of learning with or without visual information and knowledge of results.

    PubMed

    Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria

    2013-08-01

    In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors: the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks the visual field is occluded, forcing stiffness perception to depend exclusively on haptic information. No studies to date have addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy improved slightly, but decision time deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination appears to evolve through at least two distinct phases: a single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations examined. Training skills in VE in spaced sessions can be beneficial for tasks in which haptic perception is critical and the visual field is occluded, such as surgical procedures. However, training protocols for such tasks should account for the low impact of multisensory information and KR.