Winer, G A; Cottrell, J E; Karefilaki, K D; Chronister, M
Children and adults were tested on their beliefs about whether visual processes involved intromissions (visual input) or extramissions (visual output) across a variety of situations. The idea that extramissions are part of the process of vision was first expressed by ancient philosophers, including Plato, Euclid, and Ptolemy, and has been shown to be evident in children and in some adults. The present research showed that when questions about vision referred to luminous as opposed to non-luminous objects, under certain conditions there was some increase in intromission beliefs, but almost no corresponding decline in extramission beliefs, and no evidence of transfer of intromission responses to questions referring to non-luminous objects. A separate study showed that college students, but not children, increased their extramission responses to questions providing a positive emotional context. The results are inconsistent with the idea that simple experiences increase or reinforce a coherent theory of vision. The results also have implications for understanding the nature of beliefs about scientific processes and for education.
Lieberman, Laurence M.
Distinctions are drawn between visual perception and visual function, and four optometrists respond with further analysis of the visual perception-visual function controversy and its implications for children with learning problems. (CL)
Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji
The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. Colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the type of colour modulation; both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that an observer's perception of illumination is critical for self-motion perception, and that rapid variation of perceived illumination may impair the reliability of visual information in determining self-motion.
Stone, Anna; Valentine, Tim
Participants who were unable to detect familiarity from masked 17 ms faces (Stone and Valentine, 2004 and Stone and Valentine, in press-b) did report a vague, partial visual percept. Two experiments investigated the relative strength of the visual percept generated by famous and unfamiliar faces, using masked 17 ms exposure. Each trial presented simultaneously a famous and an unfamiliar face, one face in LVF and the other in RVF. In one task, participants responded according to which of the faces generated the stronger visual percept, and in the other task, they attempted an explicit familiarity decision. The relative strength of the visual percept of the famous face compared to the unfamiliar face was moderated by response latency and participants' attitude towards the famous person. There was also an interaction of visual field with response latency, suggesting that the right hemisphere can generate a visual percept differentiating famous from unfamiliar faces more rapidly than the left hemisphere. Participants were at chance in the explicit familiarity decision, confirming the absence of awareness of facial familiarity.
This work examines how a better understanding of visual perception and attention can impact visualization design. In a collection of studies, I explore how different levels of the visual system can measurably affect a variety of visualization metrics. The results show that expert preference, user performance, and even computational performance are…
Blanchfield, Anthony; Hardy, James; Marcora, Samuele
The psychobiological model of endurance performance proposes that endurance performance is determined by a decision-making process based on perception of effort and potential motivation. Recent research has reported that effort-based decision-making during cognitive tasks can be altered by non-conscious visual cues relating to affect and action. The effects of these non-conscious visual cues on effort and performance during physical tasks are, however, unknown. We report two experiments investigating the effects of subliminal priming with visual cues related to affect and action on perception of effort and endurance performance. In Experiment 1, thirteen individuals were subliminally primed with happy or sad faces as they cycled to exhaustion in a counterbalanced and randomized crossover design. A paired t-test (happy vs. sad faces) revealed that individuals cycled significantly longer (178 s, p = 0.04) when subliminally primed with happy faces. A 2 × 5 (condition × iso-time) ANOVA also revealed a significant main effect of condition on rating of perceived exertion (RPE) during the time to exhaustion (TTE) test, with lower RPE when subjects were subliminally primed with happy faces (p = 0.04). In Experiment 2, a single-subject randomization tests design found that subliminal priming with action words facilitated a significantly longer TTE (399 s, p = 0.04) in comparison to inaction words. As in Experiment 1, this greater TTE was accompanied by a significantly lower RPE (p = 0.03). These experiments are the first to show that subliminal visual cues relating to affect and action can alter perception of effort and endurance performance. Non-conscious visual cues may therefore influence the effort-based decision-making process that is proposed to determine endurance performance. Accordingly, the findings raise notable implications for individuals who may encounter such visual cues during endurance competitions, training, or health-related exercise. PMID:25566014
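The paired analysis reported for Experiment 1 is a standard paired-samples t-test on difference scores. A minimal sketch in Python (the formula is the textbook one; the times-to-exhaustion below are invented for illustration, not the study's data):

```python
import math

def paired_t(before, after):
    """Paired-samples t statistic computed on difference scores."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample variance of the differences (n - 1 denominator).
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical times to exhaustion (s) under two priming conditions:
happy = [1180, 1420, 990, 1310, 1250]
sad = [1005, 1300, 920, 1100, 1115]
t_stat = paired_t(happy, sad)
```

A positive `t_stat` here corresponds to longer TTE in the first condition; the p-value would then be read from a t distribution with n - 1 degrees of freedom.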
Harley, Erin M; Dillon, Allyss M; Loftus, Geoffrey R
Processing visually degraded stimuli is a common experience. We struggle to find house keys on dim front porches, to decipher slides projected in overly bright seminar rooms, and to read 10th-generation photocopies. In this research, we focus specifically on stimuli that are degraded via reduction of stimulus contrast and address two questions. First, why is it difficult to process low-contrast, as compared with high-contrast, stimuli? Second, is the effect of contrast fundamental in that its effect is independent of the stimulus being processed and the reason for processing the stimulus? We formally address and answer these questions within the context of a series of nested theories, each providing a successively stronger definition of what it means for contrast to affect perception and memory. To evaluate the theories, we carried out six experiments. Experiments 1 and 2 involved simple stimuli (randomly generated forms and digit strings), whereas Experiments 3-6 involved naturalistic pictures (faces, houses, and cityscapes). The stimuli were presented at two contrast levels and at varying exposure durations. The data from all the experiments allow the conclusion that some function of stimulus contrast combines multiplicatively with stimulus duration at a stage prior to that at which the nature of the stimulus and the reason for processing it are determined, and it is the result of this multiplicative combination that determines eventual memory performance. We describe a stronger version of this theory--the sensory response, information acquisition theory--which has at its core, the strong Bloch's-law-like assumption of a fundamental visual system response that is proportional to the product of stimulus contrast and stimulus duration. This theory was, as it has been in the past, highly successful in accounting for memory for simple stimuli shown at short (i.e., shorter than an eye fixation) durations. However, it was less successful in accounting for data from
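The multiplicative contrast-duration rule at the core of the sensory response, information acquisition theory can be illustrated numerically. A minimal sketch, assuming a hypothetical compressive power-law transfer function of contrast (the 0.7 exponent is illustrative, not a value fitted in the study):

```python
def sensory_response(contrast, duration_ms, exponent=0.7):
    """Bloch's-law-like response: a function of contrast multiplied by
    duration. The power-law form and exponent are illustrative
    assumptions, not the study's fitted transfer function."""
    return (contrast ** exponent) * duration_ms

# Trade-off implied by the multiplicative rule: a low-contrast stimulus
# matches a high-contrast one if it is shown proportionally longer.
c_high, c_low, d_high = 0.8, 0.2, 50.0
d_low = d_high * (c_high ** 0.7) / (c_low ** 0.7)
```

Under this rule, any (contrast, duration) pairs with equal products of `f(contrast) * duration` predict equal eventual memory performance, which is what the nested-theory tests in the experiments probe.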
Pongrácz, Péter; Ujvári, Vera; Faragó, Tamás; Miklósi, Ádám; Péter, András
The visual sense of dogs differs in many respects from that of humans. Unfortunately, authors do not explicitly take dog-human differences in visual perception into consideration when designing their experiments. With an image manipulation program we altered stationary images according to present knowledge about dog vision. Besides the effect of dogs' dichromatic vision, the software shows the effect of their lower visual acuity and brightness discrimination, too. Fifty adult humans were tested with pictures showing a female experimenter pointing, gazing, or glancing to the left or right side. Half of the pictures were shown after they were altered to a setting that approximated dog vision. Participants had difficulty determining the direction of glancing when the pictures were in dog-vision mode. Glances in the dog-vision setting were followed less correctly and with a slower response time than other cues. Our results are the first to show the visual performance of humans under circumstances that model how dogs' weaker vision would affect their responses in an ethological experiment. We urge researchers to take into consideration the differences between the perceptual abilities of dogs and humans by developing visual stimuli that better fit dogs' visual capabilities.
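The kind of image manipulation described above can be sketched as follows. This is an illustrative approximation, not the program the authors used: red and green channels are merged to mimic dichromacy, and a simple box blur stands in for dogs' lower visual acuity (the blur radius is arbitrary):

```python
import numpy as np

def approximate_dog_vision(rgb, blur_radius=2):
    """Rough dog-vision filter (illustrative only).

    rgb: H x W x 3 float array in [0, 1].
    Step 1: collapse the red-green axis into one yellowish channel,
    approximating dichromatic vision.
    Step 2: box-blur every channel to approximate lower acuity.
    """
    out = rgb.astype(float).copy()
    yellow = (out[..., 0] + out[..., 1]) / 2.0
    out[..., 0] = yellow
    out[..., 1] = yellow
    # Box blur via summing shifted copies of an edge-padded image.
    k = 2 * blur_radius + 1
    padded = np.pad(out, ((blur_radius,) * 2, (blur_radius,) * 2, (0, 0)),
                    mode="edge")
    blurred = np.zeros_like(out)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + out.shape[0], dx:dx + out.shape[1]]
    return blurred / (k * k)

img = np.zeros((8, 8, 3))
img[..., 0] = 1.0  # a pure red input image
dog_view = approximate_dog_vision(img)
```

A pure red input comes out as a uniform mid-intensity yellow (equal red and green), which is the qualitative effect of the dichromacy step.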
Gerasimenko, N Iu; Slavutskaia, A V; Kalinin, S A; Mikhaĭlova, E S
In 34 healthy subjects we analyzed accuracy and reaction time (RT) during the recognition of complex visual images: pictures of animals and non-living objects. The target stimuli were preceded by brief presentation of masking non-target ones, which represented drawings of emotional (angry, fearful, happy) or neutral faces. We revealed that, in contrast to accuracy, RT depended on the emotional expression of the preceding faces. RT was significantly shorter if the target objects were paired with angry and fearful faces as compared with happy and neutral ones. These effects depended on the category of the target stimulus and were more prominent for objects than for animals. Further, the effects of the emotional faces were determined by emotional and communication personality traits (defined by Cattell's Questionnaire) and were more clearly defined in more sensitive, anxious, and pessimistic introverts. The data are important for understanding the mechanisms by which human visual behavior is determined by non-conscious processing of emotional information.
Yilmaz, Resul; Erkorkmaz, Ünal; Ozcetin, Mustafa; Karaaslan, Erhan
Introduction: Feeding style is one of the prominent factors determining energy intake. One of the factors influencing parental feeding style is parents' perception of their child's weight status. Objective: The purpose of this study was to evaluate the relationship between a mother's visual perception of her child's weight status and her feeding style. Method: A cross-sectional study was conducted with the mothers of 380 preschool children aged 5 to 7 years (mean 6.14 years). Visual perception scores were measured using drawings, and maternal feeding style was measured with the validated Parental Feeding Style Questionnaire. Results: Scores on the "emotional feeding" and "encouragement to eat" subscales of the parental feeding dimensions were low for children classified as overweight according to visual perception. Scores on the "emotional feeding" and "permissive control" subscales differed significantly between children classified as correctly perceived and those incorrectly perceived as underweight through maternal misperception. Conclusion: Various feeding styles were related to maternal visual perception. The best approach to preventing both obesity and underweight may be to focus on achieving accurate parental perception of children's weight status, thereby improving parenting skills and leading to the adoption of appropriate feeding styles.
Rigutti, Sara; Gerbino, Walter
Fantoni & Gerbino (2014) showed that subtle postural shifts associated with reaching can have a strong hedonic impact and affect how actors experience facial expressions of emotion. Using a novel Motor Action Mood Induction Procedure (MAMIP), they found consistent congruency effects in participants who performed a facial emotion identification task after a sequence of visually-guided reaches: a face perceived as neutral in a baseline condition appeared slightly happy after comfortable actions and slightly angry after uncomfortable actions. However, skeptics about the penetrability of perception (Zeimbekis & Raftopoulos, 2015) would consider such evidence insufficient to demonstrate that observers' internal states induced by action comfort/discomfort affect perception in a top-down fashion. The action-modulated mood might have produced a back-end memory effect capable of affecting post-perceptual and decision processing, but not front-end perception. Here, we present evidence that performing a facial emotion detection (not identification) task after MAMIP exhibits systematic mood-congruent sensitivity changes, rather than response bias changes attributable to cognitive set shifts; i.e., we show that observers' internal states induced by bodily action can modulate affective perception. The detection threshold for happiness was lower after fifty comfortable than uncomfortable reaches, while the detection threshold for anger was lower after fifty uncomfortable than comfortable reaches. Action valence induced an overall sensitivity improvement in detecting subtle variations of congruent facial expressions (happiness after positive comfortable actions, anger after negative uncomfortable actions), in the absence of significant response bias shifts. Notably, both comfortable and uncomfortable reaches impact sensitivity in an approximately symmetric way relative to a baseline inaction condition. All of these constitute compelling evidence of a genuine top-down effect on
Zhang, J; Wu, S Y
The response properties of a class of motion detectors (Reichardt detectors) are investigated extensively here. Since the outputs of the detectors, responding to an image undergoing two-dimensional rigid translation, are dependent on both the image velocity and the image intensity distribution, they are nonuniform across the entire image, even though the object is moving rigidly as a whole. To achieve perceptual "oneness" in the rigid motion, we are led to contend that visual perception must take place in a space that is non-Euclidean in nature. We then derive the affine connection and the metric of this perceptual space. The Riemann curvature tensor is identically zero, which means that the perceptual space is intrinsically flat. A geodesic in this space is composed of points of constant image intensity gradient along a certain direction. The deviation of geodesics (which are perceptually "straight") from physically straight lines may offer an explanation to the perceptual distortion of angular relationships such as the Hering illusion. PMID:2235999
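The Reichardt detector analyzed above is, in its textbook form, a delay-and-correlate opponent circuit. A minimal sketch follows (a generic correlator, not the exact model parameters of the paper); note how the output depends on the local intensity pattern as well as on velocity, which is the nonuniformity the abstract discusses:

```python
def reichardt_output(signal, dt=1, dx=1):
    """Minimal delay-and-correlate Reichardt detector (textbook sketch).

    signal: 2-D sequence indexed as signal[t][x], image intensity over
    discrete time steps and spatial positions. Returns the opponent
    output at each (t, x) where the delayed samples exist: positive
    for rightward motion, negative for leftward motion.
    """
    T, X = len(signal), len(signal[0])
    out = [[0.0] * X for _ in range(T)]
    for t in range(dt, T):
        for x in range(X - dx):
            left, right = signal[t][x], signal[t][x + dx]
            left_d, right_d = signal[t - dt][x], signal[t - dt][x + dx]
            # Each subunit correlates a delayed input with its
            # neighbour's current input; opponency subtracts the mirror.
            out[t][x] = left_d * right - right_d * left
    return out

# A bright spot stepping rightward one position per time step.
right_moving = [[1.0 if x == t else 0.0 for x in range(5)] for t in range(4)]
response = reichardt_output(right_moving)
```

Because each local output is a product of intensities, detectors viewing different parts of a rigidly translating image give different values, which is the starting point for the paper's non-Euclidean perceptual-space argument.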
Webster, Michael A.; MacLeod, Donald I. A.
The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555
Troscianko, Tom; Benton, Christopher P.; Lovell, P. George; Tolhurst, David J.; Pizlo, Zygmunt
How does an animal conceal itself from visual detection by other animals? This review paper seeks to identify general principles that may apply in this broad area. It considers mechanisms of visual encoding, of grouping and object encoding, and of search. In most cases, the evidence base comes from studies of humans or species whose vision approximates to that of humans. The effort is hampered by a relatively sparse literature on visual function in natural environments and with complex foraging tasks. However, some general constraints emerge as being potentially powerful principles in understanding concealment—a ‘constraint’ here means a set of simplifying assumptions. Strategies that disrupt the unambiguous encoding of discontinuities of intensity (edges), and of other key visual attributes, such as motion, are key here. Similar strategies may also defeat grouping and object-encoding mechanisms. Finally, the paper considers how we may understand the processes of search for complex targets in complex scenes. The aim is to provide a number of pointers towards issues, which may be of assistance in understanding camouflage and concealment, particularly with reference to how visual systems can detect the shape of complex, concealed objects. PMID:18990671
Wyble, Brad; Potter, Mary C.; Bowman, Howard; Nieuwenstein, Mark
Is one's temporal perception of the world truly as seamless as it appears? This article presents a computationally motivated theory suggesting that visual attention samples information from temporal episodes (episodic simultaneous type/serial token model; Wyble, Bowman, & Nieuwenstein, 2009). Breaks between these episodes are punctuated by periods…
Described is how pictures can combine aspects of naturalistic representation with more formal shapes to enhance cognitive understanding. These "diagrammatic" shapes derive from elementary geometry and thereby bestow visual concreteness on the concepts conveyed by the pictures. Leonardo da Vinci's anatomical drawings are used as examples…
Chow, K. L.
The receptive fields of single cells in the visual system of the cat and squirrel monkey were studied, investigating the vestibular input affecting the cells and the cells' responses during the visual discrimination learning process. Also studied were the receptive field characteristics of the rabbit visual system, its normal development, its abnormal development following visual deprivation, and the structural and functional re-organization of the visual system following neonatal and prenatal surgery. The results of each individual part of each investigation are detailed.
Kaufman, Lloyd; Kaufman, James H.
Research into visual perception ultimately affects display design. Advances in display technology affect, in turn, our study of perception. Although this statement is too general to be controversial, this paper presents a real-life example that may prompt display engineers to make greater use of basic knowledge of visual perception, and encourage those who study perception to track more closely leading-edge display technology. Our real-life example deals with an ancient problem, the moon illusion: why does the horizon moon appear so large while the elevated moon looks so small? This was a puzzle for many centuries. Physical explanations, such as refraction by the atmosphere, are incorrect. The difference in apparent size may be classified as a misperception, so the answer must lie in the general principles of visual perception. The factors underlying the moon illusion must be the same factors as those that enable us to perceive the sizes of ordinary objects in visual space. Progress toward solving the problem has been irregular, since methods for actually measuring the illusion under a wide range of conditions were lacking. An advance in display technology made possible a serious and methodologically controlled study of the illusion. This technology was the first heads-up display. In this paper we describe how the heads-up display concept made it possible to test several competing theories of the moon illusion, and how it led to an explanation that stood for nearly 40 years. We also consider the criticisms of that explanation and how the optics of the heads-up display also played a role in providing data for the critics. Finally, we describe our own advance on the original methodology. This advance was motivated by previously unrelated principles of space perception. We used a stereoscopic heads-up display to test alternative hypotheses about the illusion and to discriminate between two classes of mutually contradictory theories. At its core, the
Ando, Soichi; Kokubu, Masahiro; Nakae, Satoshi; Kimura, Misaka; Hojo, Tatsuya; Ebine, Naoyuki
Strenuous exercise may have detrimental effects on visual perception. However, it is unclear whether such effects depend on visual resolution. The purpose of this study was to examine whether the effects of strenuous exercise on visual perception depend on visual resolution. Given that visual resolution decreases in the periphery of the visual field, we hypothesized that if visual resolution plays a role in the detrimental effects, those effects should be exaggerated toward the periphery of the visual field. Simple visual reaction time (RT) was measured at rest and during cycling at 40% and 75% of peak oxygen uptake (VO2). Visual stimuli were randomly presented at 2°, 10°, 30°, and 50° to either the right or left of the midpoint between the eyes with equal probability. RT was fractionated into premotor and motor components (i.e. premotor time and motor time) based on electromyographic recording. The premotor time during exercise at 40% peak VO2 was not different from that at rest. In contrast, the premotor time during exercise at 75% peak VO2 was significantly longer than that at rest (p=0.018). The increase in premotor time was observed irrespective of eccentricity, and the detrimental effects were not exaggerated toward the periphery of the visual field. The motor time was not affected by exercise. The current findings suggest that the detrimental effects of strenuous exercise on visual perception are independent of visual resolution.
Butz, Martin V; Kutter, Esther F; Lorenz, Corinna
The Rubber Hand Illusion (RHI) is a well-established experimental paradigm. It has been shown that the RHI can affect hand location estimates, arm and hand motion towards goals, the subjective visual appearance of the own hand, and the feeling of body ownership. Several studies also indicate that the peri-hand space is partially remapped around the rubber hand. Nonetheless, the question remains if and to what extent the RHI can affect the perception of other body parts. In this study we ask if the RHI can alter the perception of the elbow joint. Participants had to adjust an angular representation on a screen according to their proprioceptive perception of their own elbow joint angle. The results show that the RHI does indeed alter the elbow joint estimation, increasing the agreement with the position and orientation of the artificial hand. Thus, the results show that the brain does not only adjust the perception of the hand in body-relative space, but it also modifies the perception of other body parts. In conclusion, we propose that the brain continuously strives to maintain a consistent internal body image and that this image can be influenced by the available sensory information sources, which are mediated and mapped onto each other by means of a postural, kinematic body model.
Cseh, Genevieve M; Phillips, Louise H; Pearson, David G
Flow (being in the zone) is purported to have positive consequences in terms of affect and performance; however, there is no empirical evidence about these links in visual creativity. Positive affect often--but inconsistently--facilitates creativity, and both may be linked to experiencing flow. This study aimed to determine relationships between these variables within visual creativity. Participants performed the creative mental synthesis task to simulate the creative process. Affect change (pre- vs. post-task) and flow were measured via questionnaires. The creativity of synthesis drawings was rated objectively and subjectively by judges. Findings empirically demonstrate that flow is related to affect improvement during visual creativity. Affect change was linked to productivity and self-rated creativity, but no other objective or subjective performance measures. Flow was unrelated to all external performance measures but was highly correlated with self-rated creativity; flow may therefore motivate perseverance towards eventual excellence rather than provide direct cognitive enhancement.
The question regarding the relation between visual imagery and visual perception remains open. Many studies have tried to understand whether the two processes share the same mechanisms or are independent, using different neural substrates. Most research has been directed toward whether activation of primary visual areas is necessary during imagery. Here we review some of the works providing evidence for both claims. It seems that studying visual imagery in blind subjects can be used as a way of answering some of those questions, namely whether it is possible to have visual imagery without visual perception. We present results from the work of our group using visual activation in dreams and its relation with EEG spectral components, showing that congenitally blind subjects have visual contents in their dreams and are able to draw them; furthermore, their Visual Activation Index is negatively correlated with EEG alpha power. This study supports the hypothesis that it is possible to have visual imagery without visual experience.
Converging evidence from several sources indicates that two distinct representations of visual space mediate perception and visually guided behavior, respectively. The two maps of visual space follow different rules; spatial values in either one can be biased without affecting the other. Ordinarily the two maps give equivalent responses because both are veridically in register with the world; special techniques are required to pull them apart. One such technique is saccadic suppression: small target displacements during saccadic eye movements are not perceived, though the displacements can change eye movements or pointing to the target. A second way to separate cognitive and motor-oriented maps is with induced motion: a slowly moving frame will make a fixed target appear to drift in the opposite direction, while motor behavior toward the target is unchanged. The same result occurs with stroboscopic induced motion, where the frame jumps abruptly and the target seems to jump in the opposite direction. A third method of separating cognitive and motor maps, requiring no motion of target, background, or eye, is the Roelofs effect: a target surrounded by an off-center rectangular frame will appear to be off-center in the direction opposite the frame. Again the effect influences perception, but in half of the subjects it does not influence pointing to the target. This effect also reveals more characteristics of the maps and their interactions with one another: the motor map apparently has little or no memory and must be fed from the biased cognitive map if an enforced delay occurs between stimulus presentation and motor response. In designing spatial displays, the results mean that what you see isn't necessarily what you get. Displays must be designed with either perception or visually guided behavior in mind.
Corbett, Jennifer E.; Song, Joo-Hyun
The visual system summarizes average properties of ensembles of similar objects. We demonstrated an adaptation aftereffect of one such property, mean size, suggesting it is encoded along a single visual dimension (Corbett, et al., 2012), in a similar manner as basic stimulus properties like orientation and direction of motion. To further explore the fundamental nature of ensemble encoding, here we mapped the evolution of mean size adaptation over the course of visually guided grasping. Participants adapted to two sets of dots with different mean sizes. After adaptation, two test dots replaced the adapting sets. Participants first reached to one of these dots, and then judged whether it was larger or smaller than the opposite dot. Grip apertures were inversely dependent on the average dot size of the preceding adapting patch during the early phase of movements, and this aftereffect dissipated as reaches neared the target. Interestingly, perceptual judgments still showed a marked aftereffect, even though they were made after grasping was completed more-or-less veridically. This effect of mean size adaptation on early visually guided kinematics provides novel evidence that mean size is encoded fundamentally in both perception and action domains, and suggests that ensemble statistics not only influence our perceptions of individual objects but can also affect our physical interactions with the external environment. PMID:25383014
Henriques, Denise Y P; Flanders, Martha; Soechting, John F
It is known that visual illusions lead to a distorted perception of the length and orientation of lines, but it is not clear how these illusions affect the appreciation of the shape of closed forms. In this study two experiments were performed to characterize distortions in the visual perception of the shape of quadrilaterals and the extent to which these distortions were similar to the distortions of haptically sensed shapes. In the first experiment human subjects were presented with two quadrilaterals side by side on a computer monitor. One was a reference shape; the other was rotated and distorted relative to the first. The subjects used the computer mouse to adjust the corners of the distorted quadrilateral to match the shape of the target quadrilateral. They made consistent errors on this task: the adjusted quadrilateral was about 2% wider and about 2% shorter than the veridical shape. Furthermore, subjects adjusted the inner angles of the quadrilateral to make them closer to 90 degrees. The first type of error was also present in a second experiment in which, in a two-alternative forced-choice paradigm, subjects viewed a reference shape and were asked to indicate which of two transiently presented quadrilaterals was closest to the target shape. The width/height errors and the inner angle errors were comparable to those described previously when subjects felt the outline of a quadrilateral and then drew its reproduction in the absence of vision, suggesting that the distortion occurs in the process of remembering the shape.
Durgin, Frank H.; Gigone, Krista; Scott, Rebecca
During self-motion, the world normally appears stationary. In part, this may be due to reductions in visual motion signals during self-motion. In 8 experiments, the authors used magnitude estimation to characterize changes in visual speed perception as a result of biomechanical self-motion alone (treadmill walking), physical translation alone…
Stone, L. S.; Beutter, B. R.; Lorenceau, J.
To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.
Ide, Masakazu; Hidaka, Souta
An input (e.g., airplane takeoff sound) to a sensory modality can suppress the percept of another input (e.g., talking voices of neighbors) of the same modality. This perceptual suppression effect is evidence that neural responses to different inputs closely interact with each other in the brain. While recent studies suggest that close interactions also occur across sensory modalities, a crossmodal perceptual suppression effect has not yet been reported. Here, we demonstrate that tactile stimulation can suppress the percept of visual stimuli: Visual orientation discrimination performance was degraded when a tactile vibration was applied to the observer's index finger. We also demonstrated that this tactile suppression effect on visual perception occurred primarily when the tactile and visual information were spatially and temporally consistent. The current findings indicate that neural signals can interact closely and directly, sufficiently to induce the perceptual suppression effect, even across sensory modalities. PMID:24336391
Katkov, Mikhail; Harris, Hila; Sagi, Dov
Our experience with the natural world, as composed of ordered entities, implies that perception captures relationships between image parts. For instance, regularities in the visual scene are rapidly identified by our visual system. Defining the regularities that govern perception is a basic, unresolved issue in neuroscience. Mathematically, perfect regularities are represented by symmetry (perfect order). The transition from ordered configurations to completely random ones has been extensively studied in statistical physics, where the amount of order is characterized by a symmetry-specific order parameter. Here we applied tools from statistical physics to study order detection in humans. Different sets of visual textures, parameterized by the thermodynamic temperature in the Boltzmann distribution, were designed. We investigated how much order is required in a visual texture for it to be discriminated from random noise. The performance of human observers was compared to Ideal and Order observers (based on the order parameter). The results indicated a high consistency in performance across human observers, much below that of the Ideal observer, but well-approximated by the Order observer. Overall, we provide a novel quantitative paradigm to address order perception. Our findings, based on this paradigm, suggest that the statistical physics formalism of order captures regularities to which the human visual system is sensitive. An additional analysis revealed that some order perception properties are captured by traditional texture discrimination models according to which discrimination is based on integrated energy within maps of oriented linear filters. PMID:26113826
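The thermodynamic framing above can be made concrete. The study's actual texture ensembles and order parameter are not specified in this abstract; as a minimal sketch, assuming an Ising-type symmetry, Metropolis sampling generates binary textures parameterized by temperature, and the magnetization serves as a symmetry-specific order parameter:

```python
import math
import random

def sample_ising(n=32, T=2.0, sweeps=200, seed=0):
    """Metropolis sampling of an n x n binary texture from the Boltzmann
    distribution of the Ising model. High T approaches random noise;
    low T yields ordered (regular) configurations."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    for _ in range(sweeps * n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        # Energy change for flipping site (i, j), periodic boundaries
        nb = (s[(i + 1) % n][j] + s[(i - 1) % n][j]
              + s[i][(j + 1) % n] + s[i][(j - 1) % n])
        dE = 2.0 * s[i][j] * nb
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]
    return s

def order_parameter(s):
    """Mean magnetization magnitude: 1 for perfect order, near 0 for noise."""
    n = len(s)
    return abs(sum(sum(row) for row in s)) / (n * n)
```

An Order observer in this toy setting would amount to thresholding `order_parameter` to decide whether a texture is distinguishable from random noise.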
Bernstein, Lynne E.; Liebenthal, Einat
This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611
Keetels, Mirjam; Stekelenburg, Jeroen J
Visual perception can be changed by co-occurring input from other sensory modalities. Here, we explored how self-generated finger movements (left-right or up-down key presses) affect visual motion perception. In Experiment 1, motion perception of a blinking bar was shifted in the direction of co-occurring hand motor movements, indicative of motor-induced visual motion (MIVM). In Experiment 2, moving and static blinking bars were combined with either directional moving or stationary hand motor movements. Results showed that the directional component in the hand movement was crucial for MIVM as stationary motor movements even declined visual motion perception. In Experiment 3, the role of response bias was excluded in a two-alternative forced-choice task that ruled out the effect of response strategies. All three experiments demonstrated that alternating key presses (either horizontally or vertically aligned) induce illusory visual motion and that stationary motor movements (without a vertical or horizontal direction) induce the opposite effect, namely a decline in visual motion (more static) perception.
The Visual Simulation Systems Analysis was conducted by means of on-site inspections of the ASPT simulator and a Primary Instrument Trainer at... acceptance testing of a CIG system at Luke AFB, which included aerial refueling. During the analysis of the ASPT system we witnessed a low level weapon... of being presented. Although the ASPT system at Williams AFB provides only a monochrome display, full color CIG systems are in operation at various
Erle, Thorsten M; Reber, Rolf; Topolinski, Sascha
Can affect be evoked by mere perception? Earlier work on processing fluency, which manipulated the dynamics of a running perceptual process, has shown that efficient processing can indeed trigger positive affect. The present work introduces a novel route by not manipulating the dynamics of an ongoing perceptual process, but by blocking or allowing the whole process in the first place. We used illusory contour perception as one very basic such process. In 5 experiments (total N = 422), participants briefly (≤100 ms) viewed stimuli that either allowed illusory contour perception, so-called Kanizsa shapes, or proximally identical control shapes that did not allow for this process to occur. Self-reported preference ratings (Experiments 1, 2, and 4) and facial muscle activity (Experiment 3) showed that participants consistently preferred Kanizsa over these control shapes. Moreover, even within Kanizsa shapes, those that most likely instigated illusory contour perception (i.e., those with the highest support ratio) were liked the most (Experiment 5). At the same time, Kanizsa stimuli with high support ratios were objectively and subjectively the most complex, rendering a processing fluency explanation of this preference unlikely. These findings inform theorizing in perception about affective properties of early perceptual processes that are independent from perceptual fluency and research on affect about the importance of basic perception as a source of affectivity.
Vernon, Magdalen D., Comp.
This annotated bibliography on visual perception and its relation to reading is composed of 55 citations ranging in date from 1952 to 1965. Its divisions include Perception of Shape by Young Children, Perception of Words by Children, Perception in Backward Readers, and Perception of Shapes, Letters, and Words by Adults. Listings which include…
Niemeyer, Greg O
Human vision is a product of both physiological and cultural dispositions. This cultural study investigates the role of cultural dispositions in visual perception. In particular, the study focuses on the role of stereotypes, which are involved in recognition. I propose that stereotypes are essential for basic functions of human perception. However, stereotypes also introduce significant limitations on human experience. The fact that stereotypes are abstract simplifications of realities is not the limiting factor, since scientific and cultural progress continually refines stereotypes. The very principle of the stereotype appears to introduce the limitation, because the process of forming stereotypes requires both temporal and functional fragmentations of the continuum of our perception. This fragmentation can be a cause of sensory overload, a postmodern condition that generates cultural, perceptual and behavioral problems. To address this problem, I propose a cultural modification to our modality of perception. The modification shifts the emphasis of our perception from the recognition of stereotypes to the recognition of flows, processes and durations. References to the work of Henri Bergson and Martin Heidegger provide the philosophical basis for this modification and several empirical and experimental examples illustrate such modifications in practice.
Massengale, Samantha; Folden, Donna; McConnell, Pima; Stratton, Laurie; Whitehead, Victoria
The purpose of this study was to determine to what extent visual perception, visual function, cognition, and personality traits affect power wheelchair use in adults. It also aimed to establish baseline information to help clinicians determine or predict power wheelchair driving performance and to develop service plans to address those driving skills that need improvement or compensation. Sixty-two adult power wheelchair users were recruited. Standardized instruments were used to evaluate visual perceptual skills, visual function, cognitive skills, and personality traits. The results of these evaluations were then correlated with participants' scores on a power wheelchair performance test. Strong correlations were found between power wheelchair driving performance and visual perception (p < .001), ocular motor function (p < .001 and p ≤ .001), stereodepth perception (p ≤ .001), and alertness to the environment (p ≤ .001). No significant correlations were found between personality traits and power wheelchair driving performance. These results indicate that good visual perceptual skills, visual function, and various aspects of cognition are necessary for proficient power wheelchair use. These data will assist clinicians in identifying significant factors to consider when evaluating and training clients for power wheelchair use.
Bruno, Aurelio; Cicchini, Guido Marco
The proposal that the processing of visual time might rely on a network of distributed mechanisms that are vision-specific and timescale-specific stands in contrast to the classical view of time perception as the product of a single supramodal clock. Evidence showing that some of these mechanisms have a sensory component that can be locally adapted is at odds with another traditional assumption, namely that time is completely divorced from space. Recent evidence suggests that multiple timing mechanisms exist across and within sensory modalities and that they operate in various neural regions. The current review summarizes this evidence and frames it into the broader scope of models for time perception in the visual domain. PMID:28018946
Alm, Magnus; Behne, Dawn
Gender and age have been found to affect adults' audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood for cognitive and sensory decline, which may confound positive effects of age-related AV-experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged adults (50-60 years) with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. In contrast, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females' general AV perceptual strategy. Although young females' speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females' AV perceptual strategy toward more visually dominated responses.
Scott-Samuel, Nicholas E.; Baddeley, Roland; Palmer, Chloe E.; Cuthill, Innes C.
Movement is the enemy of camouflage: most attempts at concealment are disrupted by motion of the target. Faced with this problem, navies in both World Wars in the twentieth century painted their warships with high-contrast geometric patterns: so-called "dazzle camouflage". Rather than attempting to hide individual units, it was claimed that this patterning would disrupt the perception of their range, heading, size, shape and speed, and hence reduce losses from, in particular, torpedo attacks by submarines. Similar arguments had been advanced earlier for biological camouflage. Whilst there are good reasons to believe that most of these perceptual distortions may have occurred, there is no evidence for the last claim: changing perceived speed. Here we show that dazzle patterns can distort speed perception, and that this effect is greatest at high speeds. The effect should obtain in predators launching ballistic attacks against rapidly moving prey, or on modern, low-tech battlefields where handheld weapons are fired from short ranges against moving vehicles. In the latter case, we demonstrate that in a typical situation involving an RPG7 attack on a Land Rover the reduction in perceived speed is sufficient to make the grenade miss the point at which it was aimed by about a metre, which could make the difference between survival and death for the occupants of the vehicle. PMID:21673797
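The metre-scale miss claimed in the abstract follows from simple lead arithmetic: a shooter leads a moving target by perceived speed times projectile flight time, so a dazzle-induced underestimate of speed shortens the lead. The sketch below uses illustrative assumed figures (a 25 m/s vehicle, a 0.5 s flight time, a 7% speed underestimate), not the paper's exact numbers:

```python
def aim_lead_error(target_speed, flight_time, perceived_speed_ratio):
    """Lateral miss distance when a shooter leads a moving target using a
    biased speed percept. The correct lead is target_speed * flight_time;
    the shooter instead leads with the perceived (dazzle-reduced) speed."""
    true_lead = target_speed * flight_time
    perceived_lead = target_speed * perceived_speed_ratio * flight_time
    return true_lead - perceived_lead

# Illustrative values (assumptions, not the paper's measured figures):
# vehicle at 25 m/s (90 km/h), ~0.5 s grenade flight time, and a dazzle
# pattern making the target appear 7% slower than it really is.
miss = aim_lead_error(25.0, 0.5, 0.93)  # -> 0.875 m, i.e. about a metre
```

Under these assumptions the shortfall in lead is 25 × 0.5 × 0.07 ≈ 0.9 m, consistent with the "about a metre" figure.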
Ahmetoglu, Emine; Aral, Neriman; Butun Ayhan, Aynur
This study was conducted in order to (a) compare the visual perceptions of seven-year-old children diagnosed with attention deficit hyperactivity disorder with those of normally developing children of the same age and development level and (b) determine whether the visual perceptions of children with attention deficit hyperactivity disorder vary with respect to gender, having received preschool education and parents' educational level. A total of 60 children, 30 with attention deficit hyperactivity disorder and 30 with normal development, were assigned to the study. Data about children with attention deficit hyperactivity disorder and their families were collected by using a General Information Form and the visual perception of children was examined through the Frostig Developmental Test of Visual Perception. The Mann-Whitney U-test and Kruskal-Wallis variance analysis were used to determine whether there was a difference between the visual perceptions of children with normal development and those diagnosed with attention deficit hyperactivity disorder and to discover whether the variables of gender, preschool education and parents' educational status affected the visual perceptions of children with attention deficit hyperactivity disorder. The results showed that there was a statistically meaningful difference between the visual perceptions of the two groups and that the visual perceptions of children with attention deficit hyperactivity disorder were affected meaningfully by gender, preschool education and parents' educational status.
Reed, Catherine L; Nyberg, Andrew A; Grubb, Jefferson D
Recent research has demonstrated that our perception of the human body differs from that of inanimate objects. This study investigated whether the visual perception of the human body differs from that of other animate bodies and, if so, whether that difference could be attributed to visual experience and/or embodied experience. To dissociate differential effects of these two types of expertise, inversion effects (recognition of inverted stimuli is slower and less accurate than recognition of upright stimuli) were compared for two types of bodies in postures that varied in typicality: humans in human postures (human-typical), humans in dog postures (human-atypical), dogs in dog postures (dog-typical), and dogs in human postures (dog-atypical). Inversion disrupts global configural processing. Relative changes in the size and presence of inversion effects reflect changes in visual processing. Both visual and embodiment expertise predict larger inversion effects for human over dog postures because we see humans more and we have experience producing human postures. However, our design that crosses body type and typicality leads to distinct predictions for visual and embodied experience. Visual expertise predicts an interaction between typicality and orientation: greater inversion effects should be found for typical over atypical postures regardless of body type. Alternatively, embodiment expertise predicts a body, typicality, and orientation interaction: larger inversion effects should be found for all human postures but only for atypical dog postures because humans can map their bodily experience onto these postures. Accuracy data supported embodiment expertise with the three-way interaction. However, response-time data supported contributions of visual expertise with larger inversion effects for typical over atypical postures. Thus, both types of expertise affect the visual perception of bodies.
Koenig, D E; Hart, N W; Hofer, H J
Adaptive optics combined with visual psychophysics creates the potential to study the relationship between visual function and the retina at the cellular scale. This potential is hampered, however, by visual interference from the wavefront-sensing beacon used during correction. For example, we have previously shown that even a dim, visible beacon can alter stimulus perception (Hofer et al., 2012). Here we describe a simple strategy employing a longer wavelength (980 nm) beacon that, in conjunction with appropriate restriction on timing and placement, allowed us to perform psychophysics when dark adapted without altering visual perception. The method was verified by comparing detection and color appearance of foveally presented small spot stimuli with and without the wavefront beacon present in 5 subjects. As an important caution, we found that significant perceptual interference can occur even with a subliminal beacon when additional measures are not taken to limit exposure. Consequently, the lack of perceptual interference should be verified for a given system, and not assumed based on invisibility of the beacon.
Butler, Pamela D.; Silverstein, Steven M.; Dakin, Steven C.
Much work in the cognitive neuroscience of schizophrenia has focused on attention, memory, and executive functioning. To date, less work has focused on perceptual processing. However, perceptual functions are frequently disrupted in schizophrenia, and thus this domain has been included in the CNTRICS (Cognitive Neuroscience Treatment Research to Improve Cognition in Schizophrenia) project. In this article, we describe the basic science presentation and the breakout group discussion on the topic of perception from the first CNTRICS meeting, held in Bethesda, Maryland on February 26 and 27, 2007. The importance of perceptual dysfunction in schizophrenia, the nature of perceptual abnormalities in this disorder, and the critical need to develop perceptual tests appropriate for future clinical trials were discussed. Although deficits are also seen in auditory, olfactory, and somatosensory processing in schizophrenia, the first CNTRICS meeting focused on visual processing deficits. Key concepts of gain control and integration in visual perception were introduced. Definitions and examples of these concepts are provided in this article. Use of visual gain control and integration fit a number of the criteria suggested by the CNTRICS committee, provide fundamental constructs for understanding the visual system in schizophrenia, and are inclusive of both lower-level and higher-level perceptual deficits. PMID:18549875
Clement, Bart Richard
Although speech perception has been considered a predominantly auditory phenomenon, large benefits from vision in degraded acoustic conditions suggest integration of audition and vision. More direct evidence of this comes from studies of audiovisual disparity that demonstrate vision can bias and even dominate perception (McGurk & MacDonald, 1976). It has been observed that hearing-impaired listeners demonstrate more visual biasing than normally hearing listeners (Walden et al., 1990). It is argued here that stimulus audibility must be equated across groups before true differences can be established. In the present investigation, effects of visual biasing on perception were examined as audibility was degraded for 12 young normally hearing listeners. Biasing was determined by quantifying the degree to which listener identification functions for a single synthetic auditory /ba-da-ga/ continuum changed across two conditions: (1) an auditory-only listening condition; and (2) an auditory-visual condition in which every item of the continuum was synchronized with visual articulations of the consonant-vowel (CV) tokens /ba/ and /ga/, as spoken by each of two talkers. Audibility was altered by presenting the conditions in quiet and in noise at each of three signal-to-noise (S/N) ratios. For the visual-/ba/ context, large effects of audibility were found. As audibility decreased, visual biasing increased. A large talker effect also was found, with one talker eliciting more biasing than the other. An independent lipreading measure demonstrated that this talker was more visually intelligible than the other. For the visual-/ga/ context, audibility and talker effects were less robust, possibly obscured by strong listener effects, which were characterized by marked differences in perceptual processing patterns among participants. Some demonstrated substantial biasing whereas others demonstrated little, indicating a strong reliance on audition even in severely degraded acoustic conditions.
Graham, Daniel J; Redies, Christoph
Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study.
Zadra, Jonathan R.; Clore, Gerald L.
Visual perception and emotion are traditionally considered separate domains of study. In this article, however, we review research showing them to be less separable than usually assumed. In fact, emotions routinely affect how and what we see. Fear, for example, can affect low-level visual processes, sad moods can alter susceptibility to visual illusions, and goal-directed desires can change the apparent size of goal-relevant objects. In addition, features of the physical environment, such as the apparent steepness of a hill and the distance to the ground from a balcony, can be affected by emotional states. We propose that emotions provide embodied information about the costs and benefits of anticipated action, information that can be used automatically and immediately, circumventing the need for cogitating on the possible consequences of potential actions. Emotions thus provide a strong motivating influence on how the environment is perceived. PMID:22039565
Huang, Thomas S.; Zeng, Zhihong
Automatic affective expression recognition has attracted increasing attention from researchers across disciplines, and it stands to contribute significantly to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and to advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. In order to understand the richness and subtleness of human emotion behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.
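The abstract does not detail the fusion methods used; as a minimal illustration of the general idea, decision-level ("late") fusion combines per-class posteriors from independent audio and visual classifiers with reliability weights. The weights and class labels below are assumptions for the sketch, not the paper's configuration:

```python
def late_fusion(audio_probs, visual_probs, w_audio=0.5):
    """Decision-level audio-visual fusion: a weighted average of per-class
    posteriors from separate audio and visual classifiers, renormalized.
    In practice the weights would reflect each modality's reliability
    (e.g. down-weighting audio in a noisy environment)."""
    w_visual = 1.0 - w_audio
    fused = [w_audio * a + w_visual * v
             for a, v in zip(audio_probs, visual_probs)]
    total = sum(fused)
    return [p / total for p in fused]

# Hypothetical example with three affect classes (happy, neutral, sad):
# the audio classifier is ambiguous, the visual classifier favours
# "happy", so the fused decision follows the more confident modality.
fused = late_fusion([0.4, 0.35, 0.25], [0.7, 0.2, 0.1], w_audio=0.4)
```

The same pattern extends to feature-level ("early") fusion, where modality features are concatenated before a single classifier; the trade-off between the two is a recurring design choice in this literature.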
Buldu, Mehmet; Shaban, Mohamed S.
This study portrayed a picture of kindergarten through 3rd-grade teachers who teach visual arts, their perceptions of the value of visual arts, their visual arts teaching practices, visual arts experiences provided to young learners in school, and major factors and/or influences that affect their teaching of visual arts. The sample for this study…
Geldof, C J A; van Wassenaer, A G; de Kieviet, J F; Kok, J H; Oosterlaan, J
A range of neurobehavioral impairments, including impaired visual perception and visual-motor integration, are found in very preterm born children, but reported findings show great variability. We aimed to aggregate the existing literature using meta-analysis, in order to provide robust estimates of the effect of very preterm birth on visual perceptive and visual-motor integration abilities. Very preterm born children showed deficits in visual-spatial abilities (medium to large effect sizes) but not in visual closure perception. Tests reporting broad visual perceptive indices showed inconclusive results. In addition, impaired visual-motor integration was found (medium effect size), particularly in boys compared to girls. The observed visual-spatial and visual-motor integration deficits may arise from affected occipital-parietal-frontal neural circuitries.
Moro, Valentina; Berlucchi, Giovanni; Lerch, Jason; Tomaiuolo, Francesco; Aglioti, Salvatore M
There is a vigorous debate as to whether visual perception and imagery share the same neuronal networks, whether the primary visual cortex is necessarily involved in visual imagery, and whether visual imagery functions are lateralized in the brain. Two patients with brain damage from closed head injury were submitted to tests of mental imagery in the visual, tactile, auditory, gustatory, olfactory and motor domains, as well as to an extensive testing of cognitive functions. A computerized mapping procedure was used to localize the site and to assess the extent of the lesions. One patient showed pure visual mental imagery deficits in the absence of imagery deficits in other sensory domains as well as in the motor domain, while the other patient showed both visual and tactile imagery deficits. Perceptual, language, and memory deficits were conspicuously absent. Computerized analysis of the lesions showed a massive involvement of the left temporal lobe in both patients and a bilateral parietal lesion in one patient. In both patients the calcarine cortex with the primary visual area was bilaterally intact. Our study indicates that: (i) visual imagery deficits can occur independently from deficits of visual perception; (ii) visual imagery deficits can occur when the primary visual cortex is intact and (iii) the left temporal lobe plays an important role in visual mental imagery.
Marchant, Jennifer L; Driver, Jon
Understanding how the brain extracts and combines temporal structure (rhythm) information from events presented to different senses remains unresolved. Many neuroimaging beat perception studies have focused on the auditory domain and show that the presence of a highly regular beat (isochrony) in "auditory" stimulus streams enhances neural responses in a distributed brain network and affects perceptual performance. Here, we acquired functional magnetic resonance imaging (fMRI) measurements of brain activity while healthy human participants performed a visual task on isochronous versus randomly timed "visual" streams, with or without concurrent task-irrelevant sounds. We found that visual detection of higher intensity oddball targets was better for isochronous than randomly timed streams, extending previous auditory findings to vision. The impact of isochrony on visual target sensitivity correlated positively with fMRI signal changes not only in visual cortex but also in auditory sensory cortex during audiovisual presentations. Visual isochrony activated a similar timing-related brain network to that previously found primarily in auditory beat perception work. Finally, activity in multisensory left posterior superior temporal sulcus increased specifically during concurrent isochronous audiovisual presentations. These results indicate that regular isochronous timing can modulate visual processing and this can also involve multisensory audiovisual brain mechanisms.
Nagata, Takashi; Arikawa, Kentaro; Terakita, Akihisa
Absorption spectra of visual pigments are adaptively tuned to optimize informational capacity in most visual systems. Our recent investigation of the eyes of the jumping spider reveals an apparent exception: the absorption characteristics of a visual pigment cause defocusing of the image, reducing visual acuity generally in a part of the retina. However, the amount of defocus can theoretically provide a quantitative indication of the distance of an object. Therefore, we proposed a novel mechanism for depth perception in jumping spiders based on image defocus. Behavioral experiments revealed that the depth perception of the spider depended on the wavelength of the ambient light, which affects the amount of defocus because of chromatic aberration of the lens. This wavelength effect on depth perception was in close agreement with theoretical predictions based on our hypothesis. These data strongly support the hypothesis that the depth perception mechanism of jumping spiders is based on image defocus.
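The defocus cue described above can be illustrated with a generic thin-lens model: the image distance depends on object distance, and chromatic aberration makes the effective focal length depend on wavelength. The sketch below is a minimal illustration of that relationship, not the authors' actual model; the focal lengths and distances are hypothetical values in arbitrary units.

```python
# Sketch: how image defocus encodes object distance under the thin-lens model.
# Generic illustration of a defocus-based depth cue; focal lengths are hypothetical.

def image_distance(f: float, d_object: float) -> float:
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image distance d_i."""
    return 1.0 / (1.0 / f - 1.0 / d_object)

def defocus(f: float, d_object: float, d_sensor: float) -> float:
    """Axial defocus: distance between the focused image plane and the retina/sensor."""
    return abs(image_distance(f, d_object) - d_sensor)

# Chromatic aberration: longer wavelengths focus farther from the lens, so the
# effective focal length depends on wavelength. With the retina at a fixed
# position, blur then varies monotonically with object distance, which is what
# makes defocus usable as a depth cue.
f_green, f_red = 1.00, 1.02                # hypothetical focal lengths
d_sensor = image_distance(f_green, 50.0)   # retina focuses green at 50 units

for d in (10.0, 25.0, 50.0, 100.0):
    print(d, round(defocus(f_red, d, d_sensor), 5))
```

Under this toy model, red-light defocus shrinks as the object recedes, so the amount of blur at a given wavelength is informative about distance, which is the logic the behavioral wavelength manipulation tests.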
Gilaie-Dotan, Sharon; Saygin, Ayse P; Lorenzi, Lauren J; Egan, Ryan; Rees, Geraint; Behrmann, Marlene
Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral 'form' (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast to the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion.
Teraoka, Ryo; Teramoto, Wataru
It has recently been demonstrated that the brain rapidly forms an association between concurrently presented sound sequences and visual motion. Once this association has been formed, the associated sound sequence can drive visual motion perception. This phenomenon is known as "sound-contingent visual motion perception" (SCVM). In the present study, we addressed the possibility of a similar association involving touch instead of audition. In a 9-min exposure session, two circles placed side by side were alternately presented to produce apparent motion in a horizontal direction. The onsets of the circle presentations were synchronized with vibrotactile stimulation on two different positions of the forearm. We then quantified pre- and post-exposure perceptual changes using a motion-nulling procedure. Results showed that after prolonged exposure to visuotactile stimuli, the tactile sequence influenced visual motion perception. Notably, this effect was specific to the previously exposed visual field, thus ruling out the possibility of simple response bias. These findings suggest that SCVM-like associations occur, at least to some extent, for the other modality combinations. Furthermore, the effect did not occur when the forearm posture was changed between the exposure and test phases, suggesting that the association is formed after integrating proprioceptive information.
Night vision goggles are head-mounted, unity-power systems designed to allow the human operator to see and operate at night. Field experience and experimental studies have revealed many drawbacks in conventional designs that impair performance. One major drawback is the poor space perception provided by the goggles. The Hadani et al. [J. Opt. Soc. Am. 70, 60-65 (1980)] model for space perception attributes this drawback to the fact that conventional designs shift the observer's effective center of perspective approximately 15 cm forward, and it also predicts the resulting impairments. An innovative redesign is presented in this paper: the corneal lens goggles (CLG), which bring the effective center of perspective of the goggles into coincidence with the center of perspective of the eyes, thus annulling the optical length of the device. Qualitative and quantitative laboratory studies have compared the performance of the CLG and conventional goggles (type AN/PVS-5). These studies revealed better visual and visual-motor performance with the CLG. The implications of the Hadani et al. theory and the CLG concept for optical design are discussed.
Hijazi, Mona Mohamed Kamal
Attention and visual perception are important in fencing, as they affect the levels of performance and achievement in fencers. This study identifies the levels of attention and visual perception among male and female fencers and the relationship between attention and visual perception dimensions and the sport performance in fencing. The researcher employed a descriptive method in a sample of 16 fencers during the 2010/2011 season. The sample was comprised of eight males and eight females who participated in the 11-year stage of the Cairo Championships. The Test of Attentional and Interpersonal Style, which was designed by Nideffer and translated by Allawi (1998), was applied. The test consisted of 59 statements that measured seven dimensions. The Test of Visual Perception Skills designed by Alsmadune (2005), which includes seven dimensions, was also used. Among females, a positive and statistically significant correlation between the achievement level and Visual Discrimination, Visual-Spatial Relationships, Visual Sequential Memory, Narrow Attentional Focus and Information Processing was observed, while among males, there was a positive and statistically significant correlation between the achievement level and Visual Discrimination, Visual Sequential Memory, Broad External Attentional Focus and Information Processing. For both males and females, a positive and statistically significant correlation between achievement level and Visual Discrimination, Visual Sequential Memory, Broad External Attentional Focus, Narrow Attentional Focus and Information Processing was found. There were statistically significant differences between males and females in Visual Discrimination and Visual-Form Constancy.
Thompson, P.; Stone, L. S.
We have previously shown that contrast affects speed perception, with lower-contrast drifting gratings perceived as moving more slowly. In a recent study, we examined the implications of this result for models of speed perception that use the amplitude of the response of linear spatio-temporal filters to determine speed. In this study, we investigate whether the contrast dependence of speed can be understood within the context of models in which speed estimation is made using the temporal frequency of the response of linear spatio-temporal filters. We measured the effect of contrast on flicker perception and found that contrast manipulations produce opposite effects on perceived drift rate and perceived flicker rate; i.e., reducing contrast increases the apparent temporal frequency of counterphase-modulated gratings. This finding argues that, if a temporal frequency-based algorithm underlies speed perception, either flicker and speed perception must not be based on the output of the same mechanism, or contrast effects on perceived spatial frequency reconcile the disparate effects observed for perceived temporal frequency and speed.
Devlin, Joseph T; Price, Cathy J
Medial temporal lobe (MTL) structures including the hippocampus, entorhinal cortex, and perirhinal cortex are thought to be part of a unitary system dedicated to memory [1, 2], although recent studies suggest that at least one component-perirhinal cortex-might also contribute to perceptual processing [3, 4, 5, 6]. To date, the strongest evidence for this comes from animal lesion studies [7, 8, 9, 10, 11, 12, 13, 14]. In contrast, the findings from human patients with naturally occurring MTL lesions are less clear and suggest a possible functional difference between species [15, 16, 17, 18, 19, 20]. Here, both these issues were addressed with functional neuroimaging in healthy volunteers performing a perceptual discrimination task originally developed for monkeys. This revealed perirhinal activation when the task required the integration of visual features into a view-invariant representation but not when it could be accomplished on the basis of simple features (e.g., color and shape). This activation pattern matched lateral inferotemporal regions classically associated with visual processing but differed from entorhinal cortex associated with memory encoding. The results demonstrate a specific role for the perirhinal cortex in visual perception and establish a functional homology for perirhinal cortex between species, although we propose that in humans, the region contributes to a wider behavioral repertoire including mnemonic, perceptual, and linguistic processes.
Hubbard, Timothy L.
White (2012) proposed that kinematic features in a visual percept are matched to stored representations containing information regarding forces (based on prior haptic experience) and that information in the matched, stored representations regarding forces is then incorporated into visual perception. Although some elements of White's (2012) account…
Schwarzkopf, D. Samuel; Lutti, Antoine; Li, Baojuan; Kanai, Ryota; Rees, Geraint
Visual perception depends strongly on spatial context. A classic example is the tilt illusion where the perceived orientation of a central stimulus differs from its physical orientation when surrounded by tilted spatial contexts. Here we show that such contextual modulation of orientation perception exhibits trait-like interindividual diversity that correlates with interindividual differences in effective connectivity within human primary visual cortex. We found that the degree to which spatial contexts induced illusory orientation perception, namely, the magnitude of the tilt illusion, varied across healthy human adults in a trait-like fashion independent of stimulus size or contrast. Parallel to contextual modulation of orientation perception, the presence of spatial contexts affected effective connectivity within human primary visual cortex between peripheral and foveal representations that responded to spatial context and central stimulus, respectively. Importantly, this effective connectivity from peripheral to foveal primary visual cortex correlated with interindividual differences in the magnitude of the tilt illusion. Moreover, this correlation with illusion perception was observed for effective connectivity under tilted contextual stimulation but not for that under iso-oriented contextual stimulation, suggesting that it reflected the impact of orientation-dependent intra-areal connections. Our findings revealed an interindividual correlation between intra-areal connectivity within primary visual cortex and contextual influence on orientation perception. This neurophysiological-perceptual link provides empirical evidence for theoretical proposals that intra-areal connections in early visual cortices are involved in contextual modulation of visual perception. PMID:24285885
Ricciardi, Emiliano; Basso, Demis; Sani, Lorenzo; Bonino, Daniela; Vecchi, Tomaso; Pietrini, Pietro; Miniussi, Carlo
The visual motion-responsive middle temporal complex (hMT+) is activated during tactile and aural motion discrimination in both sighted and congenitally blind individuals, suggesting a supramodal organization of this area. Specifically, non-visual motion processing has been found to activate the more anterior portion of the hMT+. In the present study, repetitive transcranial magnetic stimulation (rTMS) was used to determine whether this more anterior portion of hMT+ truly plays a functional role in tactile motion processing. Sixteen blindfolded, young, healthy volunteers were asked to detect changes in the rotation velocity of a random Braille-like dot pattern by using the index or middle finger of their right hand. rTMS was applied for 600 ms (10 Hz, 110% motor threshold), 200 ms after the stimulus onset with a figure-of-eight coil over either the anterior portion of hMT+ or a midline parieto-occipital site (as a control). Accuracy and reaction times were significantly impaired only when TMS was applied on hMT+, but not on the control area. These results indicate that the recruitment of hMT+ is necessary for tactile motion processing, and thus corroborate the hypothesis of a 'supramodal' functional organization for this sensory motion processing area.
Myowa-Yamakoshi, Masako; Kawakita, Yuka; Okanda, Mako; Takeshita, Hideko
In the present study, we investigated whether infants' own visual experiences affected their perception of the visual status of others engaging in goal-directed actions. In Experiment 1, infants viewed video clips of successful and failed goal-directed actions performed by a blindfolded adult, with half the infants having previously experienced…
Howe, Catherine Q; Beau Lotto, R; Purves, Dale
Much current vision research is predicated on the idea (and a rapidly growing body of evidence) that visual percepts are generated according to the empirical significance of light stimuli rather than their physical characteristics. As a result, an increasing number of investigators have asked how visual perception can be rationalized in these terms. Here, we compare two different theoretical frameworks for predicting what observers actually see in response to visual stimuli: Bayesian decision theory and empirical ranking theory. Deciding which of these approaches has greater merit is likely to determine how the statistical operations that apparently underlie visual perception are eventually understood.
Palermo, Liana; Nori, Raffaella; Piccardi, Laura; Zeri, Fabrizio; Babino, Antonio; Giusberti, Fiorella; Guariglia, Cecilia
The hypothesis that visual perception and mental imagery are equivalent has never been explored in individuals with vision defects not preventing the visual perception of the world, such as refractive errors. Refractive error (i.e., myopia, hyperopia or astigmatism) is a condition where the refracting system of the eye fails to focus objects sharply on the retina. As a consequence refractive errors cause blurred vision. We subdivided 84 individuals according to their spherical equivalent refraction into Emmetropes (control individuals without refractive errors) and Ametropes (individuals with refractive errors). Participants performed a vividness task and completed a questionnaire that explored their cognitive style of thinking before their vision was checked by an ophthalmologist. Although results showed that Ametropes had less vivid mental images than Emmetropes this did not affect the development of their cognitive style of thinking; in fact, Ametropes were able to use both verbal and visual strategies to acquire and retrieve information. Present data are consistent with the hypothesis of equivalence between imagery and perception. PMID:23755186
Lugo, J E; Doti, R; Faubert, J
The fulcrum principle establishes that a subthreshold excitatory signal (entering through one sense) that is synchronous with a facilitation signal (entering through a different sense) can be increased (up to a resonant-like level) and then decreased by the energy and frequency content of the facilitating signal. As a result, the sensation of the signal changes according to the excitatory signal strength. In this context, the sensitivity transitions represent the change from subthreshold activity to firing activity in multisensory neurons. Initially the energy of their activity (supplied by the weak signals) is not enough to be detected, but when the facilitating signal enters the brain, it generates a general activation among multisensory neurons, modifying their original activity. In our opinion, the result is an integrated activation that promotes sensitivity transitions, and the signals are then perceived. In other words, the activity created by the interaction of the excitatory signal (e.g., visual) and the facilitating signal (tactile noise) at some specific energy produces the capability for a central detection of an otherwise weak signal. In this work we investigate the effect of tactile noise on visual perception. Specifically, we show that tactile noise is capable of decreasing luminance-modulated visual thresholds.
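The facilitation described here resembles stochastic resonance, where a subthreshold signal plus a moderate amount of noise crosses a detection threshold more often than the signal alone. The toy simulation below illustrates only that generic effect, not the authors' fulcrum model; signal level, threshold, and noise amplitude are arbitrary assumptions.

```python
# Toy stochastic-resonance demo: a fixed subthreshold signal is detected only
# when noise occasionally pushes it over the threshold.
import random

def detection_rate(signal=0.8, threshold=1.0, noise_sd=0.3,
                   trials=10_000, seed=0):
    """Fraction of trials on which signal + Gaussian noise exceeds threshold."""
    rng = random.Random(seed)
    return sum(signal + rng.gauss(0.0, noise_sd) > threshold
               for _ in range(trials)) / trials

# Without noise, the 0.8 signal never reaches the 1.0 threshold.
print(detection_rate(noise_sd=0.0))
# With moderate noise, it crosses threshold on a sizable fraction of trials.
print(detection_rate(noise_sd=0.3))
```

In the cross-modal case studied here, the "noise" arrives through touch rather than through the stimulated (visual) channel, which is what makes the multisensory facilitation notable.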
The general concept of volumic view (VV) as a universal property of space is introduced. VV exists at every point of the universe that electromagnetic (EM) waves can reach and where a point or quasi-point receiver (detector) of EM waves can be placed. A classification of receivers is given for the first time: they fall into three main categories, namely biological, man-made non-biological, and mathematically specified hypothetical receivers. The principally novel concept of volumic perception is introduced. It differs chiefly from the traditional concept, which traces back to Euclid and pre-Euclidean times and, much later, to the discoveries of Leonardo da Vinci and Giovanni Battista della Porta and to practical stereoscopy as introduced by C. Wheatstone. The basic idea of the novel concept is that humans and animals acquire volumic visual data flows in series rather than in parallel. In this case the brain is freed from the extremely sophisticated real-time parallel processing needed to combine two volumic visual data flows. Such a procedure seems hardly probable even for humans, who cannot combine two primitive static stereoscopic images into one in less than a few seconds; some people are unable to perform this procedure at all.
We design and implement an instrumental methodology for analyzing the pupillary response to chromatic stimuli, in order to observe the changes in pupillary area during contraction and dilation in diabetic patients. Visual stimuli in the visible spectrum (400 nm-650 nm) were used. Three different programs were used to determine the best stimulation for obtaining the clearest and most contrasted pupillary response for diagnosing the visual perception of colors. The stimulators PG0, PG12 and PG20 were designed in our laboratory. The test was carried out with 44 people: 33 men, 10 women and a boy (ages 22-52, and 6 years); 12 were tested with the stimulator PG0, 21 with PG12 and 17 with PG20, and 7 subjects participated in more than one test. According to the Ishihara plates, 40 of those subjects have normal color vision, one subject has dichromacy (inability to differentiate or perceive red and green), and three present deficiencies in observing the blue and red spectrum (they have type II diabetes mellitus). With this instrumental methodology, we aim to obtain an indicator in pupillary variability for the early diagnosis of diabetes mellitus, as well as a monitoring instrument for it.
Pearson, Brianna; Snell, Sam; Bye-Nagel, Kyri; Tonidandel, Scott; Heyer, Laurie J; Campbell, A Malcolm
Members of the synthetic biology community have discussed the significance of word selection when describing synthetic biology to the general public. In particular, many leaders proposed that the word "create" was laden with negative connotations. We found that word choice and framing do affect public perception of synthetic biology. In a controlled experiment, participants perceived synthetic biology more negatively when "create" was used to describe the field compared to "construct" (p = 0.008). Contrary to popular opinion among synthetic biologists, however, low religiosity individuals were more negatively influenced by the framing manipulation than high religiosity people. Our results suggest that synthetic biologists directly influence public perception of their field through avoidance of the word "create".
Kuzmanovic, Bojana; Jefferson, Anneli; Bente, Gary; Vogeley, Kai
Interpersonal impression formation is highly consequential for social interactions in private and public domains. These perceptions of others rely on different sources of information and processing mechanisms, all of which have been investigated in independent research fields. In social psychology, inferences about states and traits of others as well as activations of semantic categories and corresponding stereotypes have attracted great interest. On the other hand, research on emotion and reward demonstrated affective and motivational influences of social cues on the observer, which in turn modulate attention, categorization, evaluation, and decision processes. While inferential and categorical social processes have been shown to recruit a network of cortical brain regions associated with mentalizing and evaluation, the affective influence of social cues has been linked to subcortical areas that play a central role in detection of salient sensory input and reward processing. In order to extend existing integrative approaches to person perception, both the inferential-categorical processing of information about others, and affective and motivational influences of this information on the beholder should be taken into account. PMID:23781188
Liu, Pengyu; Jia, Kebin
Differences in visual perception characteristic saliency are the key to constructing a low-complexity video coding framework. A hierarchical video coding scheme based on the human visual system (HVS) is proposed in this paper. The proposed scheme uses a joint video coding framework consisting of a visual perception analysis layer (VPAL) and a video coding layer (VCL). In VPAL, an effective visual perception characteristics detection algorithm is proposed to obtain the visual region of interest (VROI) based on the correlation between coding information (such as motion vector, prediction mode, etc.) and visual attention. Then, the interest priority setting for the VROI according to visual perception characteristics is completed. In VCL, the optional encoding method is developed utilizing the visual interest priority settings from VPAL. As a result, the proposed scheme achieves information reuse and complementarity between visual perception analysis and video coding. Experimental results show that the proposed hierarchical video coding scheme effectively alleviates the contradiction between complexity and accuracy. Compared with H.264/AVC (JM17.0), the proposed scheme reduces video coding time by approximately 80% while maintaining good video image quality. It improves video coding performance significantly. PMID:24959623
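The two-layer idea can be sketched at a very high level: an analysis pass flags blocks with salient motion as the VROI, and the coding pass then spends more bits there (a lower quantization parameter). This is only a schematic illustration of the VPAL/VCL division of labor; the threshold and QP values are invented for the example and are not from the paper.

```python
# Schematic VPAL/VCL sketch: motion-based region-of-interest detection feeding
# a per-block quantization decision. All numeric parameters are illustrative.

def vroi_mask(motion_vectors, threshold=4.0):
    """VPAL step: flag blocks whose motion-vector magnitude exceeds a threshold."""
    return [(mx ** 2 + my ** 2) ** 0.5 > threshold for mx, my in motion_vectors]

def assign_qp(mask, qp_roi=24, qp_background=32):
    """VCL step: finer quantization (lower QP) inside the VROI, coarser outside."""
    return [qp_roi if in_roi else qp_background for in_roi in mask]

mvs = [(0, 1), (6, 3), (0, 0), (5, 5)]   # per-block motion vectors (toy data)
print(assign_qp(vroi_mask(mvs)))          # [32, 24, 32, 24]
```

Reusing motion vectors already computed by the encoder (rather than running a separate saliency model) is what keeps the analysis layer cheap, which is the complexity saving the abstract reports.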
Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.
Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved with the removal of unessential visual input functionally contributes toward number acuity. The visual estimation judgments of typically developed adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround masking results may help understanding of the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845
Guo, Xiaoying; Asano, Chie Muraki; Asano, Akira; Kurita, Takio; Li, Liang
In our previous work we determined that five important characteristics affect the perception of visual complexity of a texture: regularity, roughness, directionality, density, and understandability. In this paper, a set of objective methods for measuring these characteristics is proposed: regularity is estimated by an autocorrelation function; roughness is computed based on local changes; directionality is measured by the maximum line-likeness of edges in different directions; and density is calculated from the edge density. Our analysis shows a significant correlation between the objective measures and subjective evaluations. In addition, for the estimation of understandability, a new approach is proposed. We asked the respondents to name each texture, and then we sorted all these names into different types, including names that were similar. We discovered that understandability is affected by two factors of a texture: the maximum number of similar names assigned to a specific type and the total number of types.
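Two of the objective measures named above (edge density for the "density" characteristic, autocorrelation for "regularity") can be sketched in a few lines. The toy implementation below works on a 1-D stripe pattern with an invented gradient threshold; it illustrates the kind of computation involved, not the authors' exact estimators.

```python
# Toy versions of two texture measures: edge density and lag autocorrelation.
# Thresholds and lags are illustrative assumptions.

def edge_density(img, threshold=0.5):
    """Fraction of adjacent pixel pairs whose intensity difference exceeds a threshold."""
    edges = total = 0
    for row in img:
        for x in range(len(row) - 1):
            total += 1
            if abs(row[x + 1] - row[x]) > threshold:
                edges += 1
    return edges / total

def autocorrelation(signal, lag):
    """Normalized autocorrelation at a given lag; near 1 for a periodic texture."""
    mean = sum(signal) / len(signal)
    var = sum((s - mean) ** 2 for s in signal)
    cov = sum((signal[i] - mean) * (signal[i + lag] - mean)
              for i in range(len(signal) - lag))
    return cov / var

# A perfectly regular stripe pattern peaks at its period.
stripes = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print(autocorrelation(stripes, 2))   # high: lag matches the period
print(autocorrelation(stripes, 1))   # negative: anti-phase
print(edge_density([stripes]))       # every adjacent pair differs
```

A regular texture yields strong autocorrelation peaks at multiples of its period, while an irregular one decays quickly, which is why autocorrelation serves as a regularity estimate.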
Méary, David; Chary, Catherine; Palluel-Germain, Richard; Orliaguet, Jean-Pierre
Studies of movement production have shown that the relationship between the amplitude of a movement and its duration varies according to the type of gesture. In the case of pointing movements the duration increases as a function of distance and width of the target (Fitts' law), whereas for writing movements the duration tends to remain constant across changes in trajectory length (isochrony principle). We compared the visual perception of these two categories of movement. The participants judged the speed of a light spot that portrayed the motion of the end-point of a hand-held pen (pointing or writing). For the two types of gesture we used 8 stimulus sizes (from 2.5 cm to 20 cm) and 32 durations (from 0.2 s to 1.75 s). Viewing each combination of size and duration, participants had to indicate whether the movement speed seemed "fast", "slow", or "correct". Results showed that the participants' perceptual preferences were in agreement with the rules of movement production. The stimulus size was more influential in the pointing condition than in the writing condition. We consider that this finding reflects the influence of common representational resources for perceptual judgment and movement production.
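The two production rules contrasted above can be written down directly. Fitts' law gives pointing duration as MT = a + b * log2(2D / W), growing with distance D and shrinking with target width W, while the isochrony principle keeps writing duration roughly constant across sizes. The coefficients below are hypothetical placeholders, not values from the study.

```python
# The two duration laws contrasted in the study, with illustrative coefficients.
import math

def fitts_duration(d, w, a=0.1, b=0.15):
    """Pointing movement time under Fitts' law: MT = a + b * log2(2D / W)."""
    return a + b * math.log2(2 * d / w)

def isochronous_duration(trajectory_length, base=0.6):
    """Writing-like gesture: duration stays ~constant as trajectory length changes."""
    return base

# Doubling the distance adds one 'bit' of difficulty (b seconds) to pointing...
print(fitts_duration(10, 1))
print(fitts_duration(20, 1))
# ...but leaves a writing-like gesture's duration unchanged.
print(isochronous_duration(10), isochronous_duration(20))
```

This is the asymmetry the perceptual judgments tracked: stimulus size should matter for judging pointing speed but much less for judging writing speed.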
Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina
Recently, there has been renewed interest in the perceptual problems of dyslexics. A contentious research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in literate adults and children; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit.
Yuasa, Kenichi; Yotsumoto, Yuko
When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval timing of visually and aurally presented objects shares a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations for auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects were cancelled out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems. PMID:26292285
Harm, Deborah L.; Reschke, Millard R.; Parker, Donald E.
Self-orientation and self/surround-motion perception derive from a multimodal sensory process that integrates information from the eyes, vestibular apparatus, proprioceptive and somatosensory receptors. Results from short and long duration spaceflight investigations indicate that: (1) perceptual and sensorimotor function was disrupted during the initial exposure to microgravity and gradually improved over hours to days (individuals adapt), (2) the presence and/or absence of information from different sensory modalities differentially affected the perception of orientation, self-motion and surround-motion, (3) perceptual and sensorimotor function was initially disrupted upon return to Earth-normal gravity and gradually recovered to preflight levels (individuals readapt), and (4) the longer the exposure to microgravity, the more complete the adaptation, the more profound the postflight disturbances, and the longer the recovery period to preflight levels. While much has been learned about perceptual and sensorimotor reactions and adaptation to microgravity, there is much remaining to be learned about the mechanisms underlying the adaptive changes, and about how intersensory interactions affect perceptual and sensorimotor function during voluntary movements. During space flight, SMS and perceptual disturbances have led to reductions in performance efficiency and sense of well-being. During entry and immediately after landing, such disturbances could have a serious impact on the ability of the commander to land the Orbiter and on the ability of all crew members to egress from the Orbiter, particularly in a non-nominal condition or following extended stays in microgravity. An understanding of spatial orientation and motion perception is essential for developing countermeasures for Space Motion Sickness (SMS) and perceptual disturbances during spaceflight and upon return to Earth. Countermeasures for optimal performance in flight and a successful return to Earth require
Coté, Carol A.
This article presents a model for understanding the development of visual perception from a dynamic systems theory perspective. It contrasts with the hierarchical or reductionist model often found in the occupational therapy literature. In this proposed model, vision and ocular motor abilities are not foundational to perception; they are seen…
Wilson, Amanda H.; Paré, Martin; Munhall, Kevin G.
Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel–consonant–vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent conditions (Experiment 1; N = 66). In Experiment 2 (N = 20), participants performed a visual-only speech perception task and in Experiment 3 (N = 20) an audiovisual task while having their gaze behavior monitored using eye-tracking equipment. Results: In the visual-only condition, increasing image resolution led to monotonic increases in performance, and proficient speechreaders were more affected by the removal of high spatial information than were poor speechreaders. The McGurk effect also increased with increasing visual resolution, although it was less affected by the removal of high-frequency information. Observers tended to fixate on the mouth more in visual-only perception, but gaze toward the mouth did not correlate with accuracy of silent speechreading or the magnitude of the McGurk effect. Conclusions: The results suggest that individual differences in silent speechreading and the McGurk effect are not related. This conclusion is supported by differential influences of high-resolution visual information on the 2 tasks and differences in the pattern of gaze. PMID:27537379
Thilo, Kai V; Gresty, Michael A
Large-field torsional optokinetic stimulation is known to affect the perceived direction of gravity with verticality judgements deviating towards the direction of visual stimulus rotation. The present study aimed to replicate this effect and to examine it further by subjecting participants to optokinetic stimulation in roll, resulting in spontaneous alternations between the perception of object-motion and that of contradirectional self-motion (vection), as reported by the subjects. Simultaneously, subjects were oscillated laterally in a flight simulator and indicated their perception of postural verticality. Results confirmed that rotation of the visual environment in the frontal plane biases the perceived orientation of gravity towards the direction of visual stimulus motion. However, no differential effect of perceptual state on postural verticality was obtained when contrasting verticality judgements made during the perception of object-motion with those obtained during reported self-motion perception. This finding is likely to reflect a functional segregation of central nervous visual-vestibular subsystems that process the perception of self-tilt and that of self-rotation to some degree independently.
Jacobs, Alissa; Pinto, Jeannine; Shiffrar, Maggie
Why are human observers particularly sensitive to human movement? Seven experiments examined the roles of visual experience and motor processes in human movement perception by comparing visual sensitivities to point-light displays of familiar, unusual, and impossible gaits across gait-speed and identity discrimination tasks. In both tasks, visual…
In order to elucidate the role of texture in fish vision, the agonistic behavior of male Siamese fighting fish (Betta splendens) was tested in response to models constructed by means of image-processing techniques. For models with the contour shape of a side view of a Betta splendens in an aggressive state, responses were vigorous when there was a fine distribution of brightness and naturalistic color, producing textures like a scale pattern. Reactions became weaker as the brightness and color distributions reverted to more homogeneous levels and the scale pattern disappeared. When artificial models with a circular contour shape were used, models with the scale pattern evoked more aggressive behavior than those without it, while the presence of spherical gradation affected the behavior only slightly. These results suggest that texture plays an important role in fish visual perception.
Rutiku, Renate; Aru, Jaan; Bachmann, Talis
Previous studies have observed different onset times for the neural markers of conscious perception. This variability could be attributed to procedural differences between studies. Here we show that the onset times for the markers of conscious visual perception can strongly vary even within a single study. A heterogeneous stimulus set was presented at threshold contrast. Trials with and without conscious perception were contrasted on 100 balanced subsets of the data. Importantly, the 100 subsets with heterogeneous stimuli did not differ in stimulus content, but only with regard to specific trials used. This approach enabled us to study general markers of conscious visual perception independent of stimulus content, characterize their onset and its variability within one study. N200 and P300 were the two reliable markers of conscious visual perception common to all perceived stimuli and absent for all non-perceived stimuli. The estimated mean onset latency for both markers was shortly after 200 ms. However, the onset latency of these markers was associated with considerable variability depending on which subsets of the data were considered. We show that it is first and foremost the amplitude fluctuation in the condition without conscious perception that explains the observed variability in onset latencies of the markers of conscious visual perception. PMID:26869905
Himmelbach, Marc; Erb, Michael; Klockgether, Thomas; Moskau, Susanna; Karnath, Hans-Otto
The integration of visual elements into global perception seems to be implemented separately to single object perception. This assumption is supported by the existence of patients with simultanagnosia who can identify single objects but are incapable of integrating multiple visual items. We investigated a case of simultanagnosia due to posterior cortical atrophy without structural brain damage who demonstrated an incomplete simultanagnosia. The patient successfully recognized a global stimulus in one trial but failed to do so just a few seconds later. Using event-related fMRI, we contrasted post hoc selected trials of successful global perception with trials of global recognition failure. We found circumscribed clusters of activity at the right and left primary intermediate sulci and a bilateral cluster at the ventral precuneus. The integration of multiple visual elements resulting in a conscious perception of their gestalt seems to rely on these bilateral structures in the human lateral and medial inferior parietal cortex.
This case of visual agnosia is of special interest because of its traumatic causation, the unusually long follow-up (10 1/2 years), and the evidence for dual deficits of recognition and perception. Although most of the findings were characteristic of associative visual agnosia with preserved perceptual function, the poor copying, contrasted with better spontaneous drawing, suggested apperceptive agnosia as well. Prosopagnosia, alexia without agraphia, Balint's syndrome, visual static agnosia and simultanagnosia were also observed. The patient had a persisting amnestic syndrome, but no dementia or aphasia. The responses to visual stimulation were perseverations, form confusions and confabulations. Visual evoked potentials were severely and bilaterally abnormal, and computerized tomographic localization likewise showed bilateral lesions. The stages of recognition are analysed through this case of visual verbal disconnection, and the importance of memory in perception is highlighted.
Ciszewski, Słowomir; Wichowicz, Hubert Michał; Żuk, Krzysztof
Visual perception by individuals with schizophrenia has not been extensively researched. The focus of this review is the perception of physiological visual illusions by patients with schizophrenia, differences in which have been reported in a small number of studies. The increased or decreased susceptibility of these patients to various illusions seems to be unconnected to the illusions' location of origin in the visual apparatus, which also holds for illusions in other modalities. The susceptibility of patients with schizophrenia to haptic illusions has not yet been investigated, although the need for such investigation is clear. The emerging picture is that some individuals with schizophrenia are "resistant" to some of the illusions and are able to assess visual phenomena more "rationally", yet certain illusions (e.g., the Müller-Lyer illusion) are perceived more intensely. Disturbances in the perception of visual illusions have neither been classified as possible diagnostic indicators of a dangerous mental condition, nor included in the endophenotype of schizophrenia. Although the relevant data are sparse, the ability to replicate the results is limited, and the research model lacks a "gold standard", some preliminary conclusions may be drawn. There are indications that disturbances in visual perception are connected to the extent of disorganization, poor initial social functioning, poor prognosis, and the types of schizophrenia described as neurodevelopmental. Patients with schizophrenia usually fail to perceive those illusions that require volitional controlled attention, and show a lack of sensitivity to the contrast between shape and background.
Sobkow, Agata; Traczyk, Jakub; Zaleskiewicz, Tomasz
Recent research has documented that affect plays a crucial role in risk perception. When no information about numerical risk estimates is available (e.g., probability of loss or magnitude of consequences), people may rely on positive and negative affect toward perceived risk. However, determinants of affective reactions to risks are poorly understood. In a series of three experiments, we addressed the question of whether and to what degree mental imagery eliciting negative affect and stress influences risk perception. In each experiment, participants were instructed to visualize consequences of risk taking and to rate riskiness. In Experiment 1, participants who imagined negative risk consequences reported more negative affect and perceived risk as higher compared to the control condition. In Experiment 2, we found that this effect was driven by affect elicited by mental imagery rather than its vividness and intensity. In this study, imagining positive risk consequences led to lower perceived risk than visualizing negative risk consequences. Finally, we tested the hypothesis that negative affect related to higher perceived risk was caused by negative feelings of stress. In Experiment 3, we introduced risk-irrelevant stress to show that participants in the stress condition rated perceived risk as higher in comparison to the control condition. This experiment showed that higher ratings of perceived risk were influenced by psychological stress. Taken together, our results demonstrate that affect-laden mental imagery dramatically changes risk perception through negative affect (i.e., psychological stress).
Yang, Hua; Lu, Jing; Gong, Diankun; Yao, Dezhong
The influence of music on the human brain has continued to attract increasing attention from neuroscientists and musicologists. Currently, tonal music is widely present in people's daily lives; however, atonal music has gradually become an important part of modern music. In this study, we conducted two experiments: the first tested for differences in the perceived distractibility of tonal and atonal music. The second experiment tested how tonal and atonal music affect visual working memory by comparing musicians and nonmusicians who were placed in contexts with background tonal music, atonal music, and silence. They were instructed to complete a delayed matching memory task. The results show that musicians and nonmusicians evaluate the distractibility of tonal and atonal music differently, possibly indicating that long-term training leads to a higher auditory perception threshold among musicians. For the working memory task, musicians reacted faster than nonmusicians in all background music conditions, and musicians took more time to respond in the tonal background music condition than in the other conditions. Therefore, our results suggest that for a visual memory task, background tonal music may occupy more cognitive resources than atonal music or silence for musicians, leaving fewer resources for the memory task. Moreover, the musicians outperformed the nonmusicians because of their higher sensitivity to background music, though this needs to be confirmed by a longitudinal study.
O'Donnell, L. M.; Smith, A. J.
This article describes the physiological mechanisms involved in three-dimensional depth perception and presents a variety of distance and depth cues and strategies for detecting and estimating curbs and steps for individuals with impaired vision. (Author/DB)
Dodd, Barbara; McIntosh, Beth; Erdener, Dogu; Burnham, Denis
An example of the auditory-visual illusion in speech perception, first described by McGurk and MacDonald, is the perception of [ta] when listeners hear [pa] in synchrony with the lip movements for [ka]. One account of the illusion is that lip-read and heard speech are combined in an articulatory code since people who mispronounce words respond…
Kaiser, Mary K.; Sweet, Barbara T.
Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).
To solve the adaptive parameter-determination problem of the pulse-coupled neural network (PCNN) and to improve image segmentation results, a PCNN adaptive segmentation algorithm based on visual perception information is proposed. Drawing on visually perceived image information and a Gabor mathematical model of the optic nerve cells' receptive field, the algorithm adaptively determines the receptive field of each pixel in the image, and uses the Gabor model to adaptively determine the network parameters W, M, and β of the PCNN, thereby overcoming the problem of traditional PCNN parameter determination in the field of image segmentation. Experimental results show that the proposed algorithm improves the region connectivity and edge regularity of the segmented image, demonstrating the advantage of using visual perception information for image segmentation.
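The Gabor receptive-field model named in this abstract is a standard construct; the following is a minimal sketch (all parameter values are illustrative, not taken from the paper) of the 2-D Gabor kernel classically used to model V1 simple-cell receptive fields:

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, wavelength=6.0, phase=0.0):
    """2-D Gabor: a sinusoidal carrier under a Gaussian envelope,
    the classical model of a simple-cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate frame to the preferred orientation theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength + phase)
    return envelope * carrier

kernel = gabor_kernel()
# Convolving an image with a bank of such kernels (varying theta and
# wavelength) yields per-pixel orientation/frequency responses of the
# kind from which PCNN parameters such as W, M, and beta could be derived.
```

How the paper maps Gabor responses onto W, M, and β is not specified in the abstract; the last comment is an assumption about the general approach.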
Lee, Sue-Hyun; Kravitz, Dwight J.; Baker, Chris I.
During mental imagery, visual representations can be evoked in the absence of “bottom-up” sensory input. Prior studies have reported similar neural substrates for imagery and perception, but studies of brain-damaged patients have revealed a double dissociation with some patients showing preserved imagery in spite of impaired perception and others vice versa. Here, we used fMRI and multi-voxel pattern analysis to investigate the specificity, distribution, and similarity of information for individual seen and imagined objects to try and resolve this apparent contradiction. In an event-related design, participants either viewed or imagined individual named object images on which they had been trained prior to the scan. We found that the identity of both seen and imagined objects could be decoded from the pattern of activity throughout the ventral visual processing stream. Further, there was enough correspondence between imagery and perception to allow discrimination of individual imagined objects based on the response during perception. However, the distribution of object information across visual areas was strikingly different during imagery and perception. While there was an obvious posterior-anterior gradient along the ventral visual stream for seen objects, there was an opposite gradient for imagined objects. Moreover, the structure of representations (i.e. the pattern of similarity between responses to all objects) was more similar during imagery than perception in all regions along the visual stream. These results suggest that while imagery and perception have similar neural substrates, they involve different network dynamics, resolving the tension between previous imaging and neuropsychological studies. PMID:22040738
He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng
Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive field of neurons in V1 for the motion perception of the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper, which comprises a motion perception quality index and a spatial index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from the difference-of-Gaussians filter bank, which produces the motion perception quality index, and the gradient similarity measure is used to evaluate the spatial distortion of the video sequence to get the spatial quality index. The experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that the random forests regression technique trained on the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
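The difference-of-Gaussians filter bank mentioned in this abstract is easy to sketch. In this hedged illustration (the kernel size and scales are illustrative choices, not the paper's), each band-pass kernel is the difference of two Gaussians at adjacent scales, so it responds to structure in one frequency band while ignoring the mean level:

```python
import numpy as np

def gaussian_1d(size, sigma):
    """Normalized 1-D Gaussian kernel (sums to 1)."""
    half = size // 2
    x = np.arange(-half, half + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def dog_bank(size=21, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Band-pass kernels as differences of adjacent Gaussian scales.
    Each kernel isolates one frequency band of the signal."""
    gs = [gaussian_1d(size, s) for s in sigmas]
    return [g1 - g2 for g1, g2 in zip(gs, gs[1:])]

bank = dog_bank()
# Because each Gaussian is normalized, every DoG kernel sums to ~0:
# it passes band-limited structure and rejects the DC component.
```

In the method described, the video signal would be decomposed through such a bank and each band's temporal distortion scored by the motion energy model; that pipeline is paraphrased from the abstract, not reconstructed from the paper.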
Tsushima, Yoshiaki; Komine, Kazuteru; Sawahata, Yasuhito; Morita, Toshiya
A great number of studies have suggested a variety of ways to get depth information from two-dimensional images, such as binocular disparity, shape-from-shading, size gradient/foreshortening, aerial perspective, and so on. Are there any other new factors affecting depth perception? A recent psychophysical study has investigated the correlation between image resolution and depth sensation with Cylinder images (a rectangle containing gradual luminance-contrast changes). It was reported that higher resolution images facilitate depth perception. However, it is still not clear whether or not the finding generalizes to other kinds of visual stimuli, because there are more appropriate visual stimuli for exploring depth perception of luminance-contrast changes, such as the Gabor patch. Here, we further examined the relationship between image resolution and depth perception by conducting a series of psychophysical experiments with not only Cylinders but also Gabor patches having smoother luminance-contrast gradients. As a result, higher resolution images produced stronger depth sensation with both images. This finding suggests that image resolution affects depth perception of simple luminance-contrast differences (Gabor patch) as well as shape-from-shading (Cylinder). In addition, this phenomenon was found even when the resolution difference was undetectable. This indicates the existence of consciously available and unavailable information in our visual system. These findings further support the view that image resolution is a previously ignored cue for depth perception. It partially explains the unparalleled viewing experience of novel high-resolution displays.
Madeja, Stanley S.
In the artistic process the artist verifies and exemplifies his or her perceptions and conception of their work. This paper discusses the model of the artistic process which describes the repertoire of perceptual clues that the artist develops. The rationale for the development of the model is for the art teacher to be able to explain in simple…
Iarocci, Grace; Rombough, Adrienne; Yager, Jodi; Weeks, Daniel J; Chua, Romeo
The bimodal perception of speech sounds was examined in children with autism as compared to mental age-matched typically developing (TD) children. A computer task was employed wherein only the mouth region of the face was displayed and children reported what they heard or saw when presented with consonant-vowel sounds in unimodal auditory condition, unimodal visual condition, and a bimodal condition. Children with autism showed less visual influence and more auditory influence on their bimodal speech perception as compared to their TD peers, largely due to significantly worse performance in the unimodal visual condition (lip reading). Children with autism may not benefit to the same extent as TD children from visual cues such as lip reading that typically support the processing of speech sounds. The disadvantage in lip reading may be detrimental when auditory input is degraded, for example in school settings, whereby speakers are communicating in frequently noisy environments.
Dentico, Daniela; Cheung, Bing Leung; Chang, Jui-Yang; Guokas, Jeffrey; Boly, Melanie; Tononi, Giulio; Van Veen, Barry
The role of bottom-up and top-down connections during visual perception and the forming of mental images was examined by analyzing high-density EEG recordings of brain activity using two state-of-the-art methods for assessing the directionality of cortical signal flow: state-space Granger causality and dynamic causal modeling. We quantified the directionality of signal flow in an occipito-parieto-frontal cortical network during perception of movie clips versus mental replay of the movies and free visual imagery. Both Granger causality and dynamic causal modeling analyses revealed increased top-down signal flow in parieto-occipital cortices during mental imagery as compared to visual perception. These results are the first direct demonstration of a reversal of the predominant direction of cortical signal flow during mental imagery as compared to perception. PMID:24910071
Orticio, L P
1. Delivery of health care/services is influenced by society's perceptions of blindness. 2. Health care professionals may not be equipped to address inevitable blindness because they may not have been taught how. This lack of preparation during training is a need that must be addressed. 3. The challenge to change inaccurate societal perceptions should start with health professionals--especially those who work with fervor to fight blindness.
Wang, Meijian; Wang, Xiuhai; Xue, Lingyan; Huang, Dan; Chen, Yao
Although the allocation of brain functions across the two cerebral hemispheres has aroused public interest over the past century, asymmetric interhemispheric cooperation under attentional modulation has been scarcely investigated. An example of interhemispheric cooperation is visual spatial perception. During this process, visual information from each hemisphere is integrated because each half of the visual field predominantly projects to the contralateral visual cortex. Both egocentric and allocentric coordinates can be employed for visual spatial representation, but they activate different areas in primate cerebral hemispheres. Recent studies have determined that egocentric representation affects the reaction time of allocentric perception; furthermore, this influence is asymmetric between the two visual hemifields. The egocentric-allocentric incompatibility effect and its asymmetry between the two hemispheres can produce this phenomenon. Using an allocentric position judgment task, we found that this incompatibility effect was reduced, and its asymmetry was eliminated on an attentional task rather than a neutral task. Visual attention might activate cortical areas that process conflicting information, such as the anterior cingulate cortex, and balance the asymmetry between the two hemispheres. Attention may enhance and balance this interhemispheric cooperation because this imbalance may also be caused by the asymmetric cooperation of each hemisphere in spatial perception. PMID:26758349
Haber, Ralph N.
Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted to newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies of what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.
Cook, Laura A; Van Valkenburg, David L; Badcock, David R
The ability to make accurate audiovisual synchrony judgments is affected by the "complexity" of the stimuli: We are much better at making judgments when matching single beeps or flashes as opposed to video recordings of speech or music. In the present study, we investigated whether the predictability of sequences affects whether participants report that auditory and visual sequences appear to be temporally coincident. When we reduced their ability to predict both the next pitch in the sequence and the temporal pattern, we found that participants were increasingly likely to report that the audiovisual sequences were synchronous. However, when we manipulated pitch and temporal predictability independently, the same effect did not occur. By altering the temporal density (items per second) of the sequences, we further determined that the predictability effect occurred only in temporally dense sequences: If the sequences were slow, participants' responses did not change as a function of predictability. We propose that reduced predictability affects synchrony judgments by reducing the effective pitch and temporal acuity in perception of the sequences.
Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang
Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants were shown either a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of
Zoubrinetzky, Rachel; Collet, Gregory; Serniclaes, Willy; Nguyen-Morel, Marie-Ange; Valdois, Sylviane
We tested the hypothesis that the categorical perception deficit of speech sounds in developmental dyslexia is related to phoneme awareness skills, whereas a visual attention (VA) span deficit constitutes an independent deficit. Phoneme awareness tasks, VA span tasks and categorical perception tasks of phoneme identification and discrimination using a d/t voicing continuum were administered to 63 dyslexic children and 63 control children matched on chronological age. Results showed significant differences in categorical perception between the dyslexic and control children. Significant correlations were found between categorical perception skills, phoneme awareness and reading. Although VA span correlated with reading, no significant correlations were found between either categorical perception or phoneme awareness and VA span. Mediation analyses performed on the whole dyslexic sample suggested that the effect of categorical perception on reading might be mediated by phoneme awareness. This relationship was independent of the participants' VA span abilities. Two groups of dyslexic children with a single phoneme awareness or a single VA span deficit were then identified. The phonologically impaired group showed lower categorical perception skills than the control group but categorical perception was similar in the VA span impaired dyslexic and control children. The overall findings suggest that the link between categorical perception, phoneme awareness and reading is independent from VA span skills. These findings provide new insights on the heterogeneity of developmental dyslexia. They suggest that phonological processes and VA span independently affect reading acquisition. PMID:26950210
Jin, Sung-Hee; Boling, Elizabeth
The purpose of this study is to compare an instructional designer's intentions with the learners' perceptions of the instructional functions of visuals in one specific e-learning lesson. An instructional designer created each visual with more than two purposes related to the psychological, cognitive, and affective aspects of learning. Contrary to…
Sato, Marc; Basirat, Anahita; Schwartz, Jean-Luc
The multistable perception of speech, or verbal transformation effect, refers to perceptual changes experienced while listening to a speech form that is repeated rapidly and continuously. In order to test whether visual information from the speaker's articulatory gestures may modify the emergence and stability of verbal auditory percepts, subjects were instructed to report any perceptual changes during unimodal, audiovisual, and incongruent audiovisual presentations of distinct repeated syllables. In a first experiment, the perceptual stability of reported auditory percepts was significantly modulated by the modality of presentation. In a second experiment, when audiovisual stimuli consisting of a stable audio track dubbed with a video track that alternated between congruent and incongruent stimuli were presented, a strong correlation between the timing of perceptual transitions and the timing of video switches was found. Finally, a third experiment showed that the vocal tract opening onset event provided by the visual input could play the role of a bootstrap mechanism in the search for transformations. Altogether, these results demonstrate the capacity of visual information to control the multistable perception of speech in its phonetic content and temporal course. The verbal transformation effect thus provides a useful experimental paradigm to explore audiovisual interactions in speech perception.
Wang, Lei; Kaufman, Arie E
We introduce a lighting system that enhances the visual cues in a rendered image for the perception of 3D volumetric objects. We divide the lighting effects into global and local effects, and deploy three types of directional lights: the key light and accessory lights (fill and detail lights). The key light provides both lighting effects and carries the visual cues for the perception of local and global shapes and depth. The cues for local shapes are conveyed by gradient; those for global shapes are carried by shadows; and those for depth are provided by shadows and translucent objects. Fill lights produce global effects to increase the perceptibility. Detail lights generate local effects to improve the cues for local shapes. Our method quantifies the perception and uses an exhaustive search to set the lights. It configures accessory lights with the consideration of preserving the global impression conveyed by the key light. It ensures the feeling of smooth light movements in animations. With simplification, it achieves interactive frame rates and produces results that are visually indistinguishable from results using the nonsimplified algorithm. The major contributions of this paper are our lighting system, perception measurement and lighting design algorithm with our indistinguishable simplification.
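The lighting-design procedure the authors describe, scoring candidate accessory-light placements with a perception measure and keeping the best via exhaustive search, can be sketched roughly as follows. The discretised candidate directions and the perception_score stub are illustrative assumptions, not the paper's actual perception measurement:

```python
import itertools

# Candidate directions for fill/detail lights, discretised on a coarse
# grid (illustrative assumption; the paper's search space differs).
CANDIDATE_DIRS = [(x, y, z) for x in (-1, 0, 1)
                  for y in (-1, 0, 1)
                  for z in (-1, 0, 1) if (x, y, z) != (0, 0, 0)]

def perception_score(key_dir, fill_dir, detail_dir):
    """Stand-in for the paper's perception measurement: reward accessory
    lights that complement rather than mirror the key light."""
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))
    # Penalise accessory lights aligned with the key light, since they
    # would wash out the gradient and shadow cues it already carries.
    return -(dot(key_dir, fill_dir) + dot(key_dir, detail_dir))

def design_lights(key_dir):
    """Exhaustively search accessory-light placements for a fixed key
    light, preserving the key light's global impression."""
    best = max(itertools.product(CANDIDATE_DIRS, CANDIDATE_DIRS),
               key=lambda pair: perception_score(key_dir, *pair))
    return {"key": key_dir, "fill": best[0], "detail": best[1]}

config = design_lights((0, 0, 1))
```

The paper's simplification step would prune this search to reach interactive frame rates; the sketch keeps only the brute-force core.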
Taylor, Paul Christopher John; Thut, Gregor
Probing brain functions by brain stimulation while simultaneously recording brain activity allows addressing major issues in cognitive neuroscience. We review recent studies where electroencephalography (EEG) has been combined with transcranial magnetic stimulation (TMS) in order to investigate possible neuronal substrates of visual perception and attention. TMS-EEG has been used to study both pre-stimulus brain activity patterns that affect upcoming perception, and also the stimulus-evoked and task-related inter-regional interactions within the extended visual-attentional network from which attention and perception emerge. Local processes in visual areas have been probed by directly stimulating occipital cortex while monitoring EEG activity and perception. Interactions within the attention network have been probed by concurrently stimulating frontal or parietal areas. The use of tasks manipulating implicit and explicit memory has revealed in addition a role for attentional processes in memory. Taken together, these studies helped to reveal that visual selection relies on spontaneous intrinsic activity in visual cortex prior to the incoming stimulus, their control by attention, and post-stimulus processes incorporating a re-entrant bias from frontal and parietal areas that depends on the task.
Matsumoto, Yukiko; Takahashi, Hideyuki; Murai, Toshiya; Takahashi, Hidehiko
Schizophrenia patients have impairments at several levels of cognition including visual attention (eye movements), perception, and social cognition. However, it remains unclear how lower-level cognitive deficits influence higher-level cognition. To elucidate the hierarchical path linking deficient cognitions, we focused on biological motion perception, which is involved in both the early stage of visual perception (attention) and higher social cognition, and is impaired in schizophrenia. Seventeen schizophrenia patients and 18 healthy controls participated in the study. Using point-light walker stimuli, we examined eye movements during biological motion perception in schizophrenia. We assessed relationships among eye movements, biological motion perception and empathy. In the biological motion detection task, schizophrenia patients showed lower accuracy and fixated longer than healthy controls. As opposed to controls, patients exhibiting longer fixation durations and fewer numbers of fixations demonstrated higher accuracy. Additionally, in the patient group, the correlations between accuracy and affective empathy index and between eye movement index and affective empathy index were significant. The altered gaze patterns in patients indicate that top-down attention compensates for impaired bottom-up attention. Furthermore, aberrant eye movements might lead to deficits in biological motion perception and finally link to social cognitive impairments. The current findings merit further investigation for understanding the mechanism of social cognitive training and its development.
Palmisano, Stephen; Gillam, Barbara
Experiments examined the accuracy of visual touchdown point perception during oblique descents (1.5°-15°) toward a ground plane consisting of (a) randomly positioned dots, (b) a runway outline, or (c) a grid. Participants judged whether the perceived touchdown point was above or below a probe that appeared at a random position following each…
Lewkowicz, David J.
Three experiments investigated perception of audio-visual (A-V) speech synchrony in 4- to 10-month-old infants. Experiments 1 and 2 used a convergent-operations approach by habituating infants to an audiovisually synchronous syllable (Experiment 1) and then testing for detection of increasing degrees of A-V asynchrony (366, 500, and 666 ms) or by…
Nicholls, Michael E. R.; Searle, Dara A.
This study explored asymmetries for movement, expression and perception of visual speech. Sixteen dextral models were videoed as they articulated: "bat," "cat," "fat," and "sat." Measurements revealed that the right side of the mouth was opened wider and for a longer period than the left. The asymmetry was accentuated at the beginning and ends of…
Knowland, Victoria C. P.; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael S. C.
Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language…
Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue
Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…
Mulckhuyse, Manon; Kelley, Todd A; Theeuwes, Jan; Walsh, Vincent; Lavie, Nilli
Transcranial magnetic stimulation (TMS) over the occipital pole can produce an illusory percept of a light flash (or 'phosphene'), suggesting an excitatory effect. Whereas previously reported effects produced by single-pulse occipital pole TMS are typically disruptive, here we report the first demonstration of a location-specific facilitatory effect on visual perception in humans. Observers performed a spatial cueing orientation discrimination task. An orientation target was presented in one of two peripheral placeholders. A single pulse below the phosphene threshold applied to the occipital pole 150 or 200 ms before stimulus onset was found to facilitate target discrimination in the contralateral compared with the ipsilateral visual field. At the 150-ms time window contralateral TMS also amplified cueing effects, increasing both facilitation effects for valid cues and interference effects for invalid cues. These results are the first to show location-specific enhanced visual perception with single-pulse occipital pole stimulation prior to stimulus presentation, suggesting that occipital stimulation can enhance the excitability of visual cortex to subsequent perception.
Journal of Optometric Education, 1988
A curriculum for disorders of oculomotor control, binocular vision, and visual perception, adopted by the Association of Schools and Colleges of Optometry, is outlined. The curriculum's 14 objectives in physiology, perceptual and cognitive development, epidemiology, public health, diagnosis and management, environmental influences, care delivery,…
Meng, Xiangzhi; Cheng-Lai, Alice; Zeng, Biao; Stein, John F.; Zhou, Xiaolin
The development of reading skills may depend to a certain extent on the development of basic visual perception. The magnocellular theory of developmental dyslexia assumes that deficits in the magnocellular pathway, indicated by less sensitivity in perceiving dynamic sensory stimuli, are responsible for a proportion of reading difficulties…
Rennig, Johannes; Bleyer, Anna Lena; Karnath, Hans-Otto
Simultanagnosia is a neuropsychological deficit of higher visual processes caused by temporo-parietal brain damage. It is characterized by a specific failure of recognition of a global visual Gestalt, like a visual scene or complex objects, consisting of local elements. In this study we investigated to what extent this deficit should be understood as a deficit related to specifically the visual domain or whether it should be seen as defective Gestalt processing per se. To examine if simultanagnosia occurs across sensory domains, we designed several auditory experiments sharing typical characteristics of visual tasks that are known to be particularly demanding for patients suffering from simultanagnosia. We also included control tasks for auditory working memory deficits and for auditory extinction. We tested four simultanagnosia patients who suffered from severe symptoms in the visual domain. Two of them indeed showed significant impairments in recognition of simultaneously presented sounds. However, the same two patients also suffered from severe auditory working memory deficits and from symptoms comparable to auditory extinction, both sufficiently explaining the impairments in simultaneous auditory perception. We thus conclude that deficits in auditory Gestalt perception do not appear to be characteristic for simultanagnosia and that the human brain obviously uses independent mechanisms for visual and for auditory Gestalt perception.
Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; LaGasse, Linda L; Lester, Barry M; Wouldes, Trecia A; Thompson, Benjamin
Prenatal exposure to recreational drugs impairs motor and cognitive development; however it is currently unknown whether visual brain areas are affected. To address this question, we investigated the effect of prenatal drug exposure on global motion perception, a behavioural measure of processing within the dorsal extrastriate visual cortex that is thought to be particularly vulnerable to abnormal neurodevelopment. Global motion perception was measured in one hundred and forty-five 4.5-year-old children who had been exposed to different combinations of methamphetamine, alcohol, nicotine and marijuana prior to birth and 25 unexposed children. Self-reported drug use by the mothers was verified by meconium analysis. We found that global motion perception was impaired by prenatal exposure to alcohol and improved significantly by exposure to marijuana. Exposure to both drugs prenatally had no effect. Other visual functions such as habitual visual acuity and stereoacuity were not affected by drug exposure. Prenatal exposure to methamphetamine did not influence visual function. Our results demonstrate that prenatal drug exposure can influence a behavioural measure of visual development, but that the effects are dependent on the specific drugs used during pregnancy.
Youse, Kathleen M; Cienkowski, Kathleen M; Coelho, Carl A
The evaluation of auditory-visual speech perception is not typically undertaken in the assessment of aphasia; however, treatment approaches utilise bimodal presentations. Research demonstrates that auditory and visual information are integrated for speech perception. The strongest evidence of this cross-modal integration is the McGurk effect. This indirect measure of integration shows that presentation of conflicting tokens may change perception (e.g. auditory /bi/ + visual /gi/ = /di/). The purpose of this study was to investigate the ability of a person with mild aphasia to identify tokens presented in auditory-only, visual-only and auditory-visual conditions. It was hypothesized that performance would be best in the bimodal condition and that presence of the McGurk effect would demonstrate integration of speech information. Findings did not support the hypotheses. It is suspected that successful integration of AV speech information was limited by a perseverative response pattern. This case study suggests the use of bisensory speech information may be impaired in adults with aphasia.
Young, L. R.
Preliminary tests and evaluation are presented of pilot performance during landing (flight paths) using computer generated images (video tapes). Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical study abstracts on human response to gravity variations without visual cues, the effects of acceleration stimuli on the semicircular canals, neurons affecting eye movements, and vestibular tests.
Dodd, Barbara; McIntosh, Beth; Erdener, Dogu; Burnham, Denis
An example of the auditory-visual illusion in speech perception, first described by McGurk and MacDonald, is the perception of [ta] when listeners hear [pa] in synchrony with the lip movements for [ka]. One account of the illusion is that lip-read and heard speech are combined in an articulatory code since people who mispronounce words respond differently from controls on lip-reading tasks. A same-different judgment task assessing perception of the illusion showed no difference in performance between controls and children with speech difficulties. Another experiment compared children with delayed and disordered speech on perception of the illusion. While neither group perceived many illusions, a significant interaction indicated that children with disordered phonology were strongly biased to the auditory component while the delayed group's response was more evenly split between the auditory and visual components of the illusion. These findings suggest that phonological processing, rather than articulation, supports lip-reading ability.
Fox, Robert
The Primacy of Depth in Visual Perception. Technical report, Vanderbilt University; Contract N00014-81-C-0001.
Chang, Dong-Seon; Burger, Franziska; Bülthoff, Heinrich H; de la Rosa, Stephan
Perceiving social information such as the cooperativeness of another person is an important part of human interaction. But can people perceive the cooperativeness of others even without any visual or auditory information? In a novel experimental setup, we connected two people with a rope and made them accomplish a point-collecting task together while they could not see or hear each other. We observed a consistently emerging turn-taking behavior in the interactions and installed a confederate in a subsequent experiment who either minimized or maximized this behavior. Participants experienced this only through the haptic force-feedback of the rope and made evaluations about the confederate after each interaction. We found that perception of cooperativeness was significantly affected only by the manipulation of this turn-taking behavior. Gender- and size-related judgments also significantly differed. Our results suggest that people can perceive social information such as the cooperativeness of other people even in situations where possibilities for communication are minimal.
Moraes, Renato; de Freitas, Paulo Barbosa; Razuk, Milena; Barela, José Angelo
Sensory reweighting is a characteristic of postural control functioning adopted to accommodate environmental changes. The use of mono or binocular cues induces visual reduction/increment of moving room influences on postural sway, suggesting a visual reweighting due to the quality of available sensory cues. Because in our previous study visual conditions were set before each trial, participants could adjust the weight of the different sensory systems in an anticipatory manner based upon the reduction in quality of the visual information. Nevertheless, in daily situations this adjustment is a dynamical process and occurs during ongoing movement. The purpose of this study was to examine the effect of visual transitions on the coupling between visual information and body sway at two different distances from the front wall of a moving room. Eleven young adults stood upright inside a moving room at two distances (75 and 150 cm) wearing liquid crystal lens goggles, which allow individual lenses to transition from opaque to transparent and vice-versa. Participants stood still for five minutes in each trial and the lens status changed every minute (no vision to binocular vision, no vision to monocular vision, binocular vision to monocular vision, and vice-versa). Results showed that farther distance and monocular vision reduced the effect of visual manipulation on postural sway. The effect of visual transition was condition dependent, with a stronger effect when transitions involved binocular vision than monocular vision. Based upon these results, we conclude that the increased distance from the front wall of the room reduced the effect of visual manipulation on postural sway and that sensory reweighting is stimulus quality dependent, with binocular vision producing a much stronger down/up-weighting than monocular vision. PMID:26939058
A description is given of the knowledge representation data base in the perception subsystem of the Mars robot vehicle prototype. Two types of information are stored. The first is generic information that represents general rules that are conformed to by structures in the expected environments. The second kind of information is a specific description of a structure, i.e., the properties and relations of objects in the specific case being analyzed. The generic knowledge is represented so that it can be applied to extract and infer the description of specific structures. The generic model of the rules is substantially a Bayesian representation of the statistics of the environment, which means it is geared to representation of nondeterministic rules relating properties of, and relations between, objects. The description of a specific structure is also nondeterministic in the sense that all properties and relations may take a range of values with an associated probability distribution.
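The Bayesian flavour of this representation, generic nondeterministic rules applied to yield probability distributions over the properties of a specific observed structure, can be sketched along these lines. The rule table and observation model below are invented for illustration and do not come from the Mars rover system itself:

```python
# Generic rule: prior distribution over a property value for a class of
# objects in the expected environment (illustrative values).
prior = {"small": 0.7, "large": 0.3}

# Generic observation model: P(sensor reading | property value).
likelihood = {
    "small": {"short_shadow": 0.8, "long_shadow": 0.2},
    "large": {"short_shadow": 0.3, "long_shadow": 0.7},
}

def posterior(reading):
    """Bayes update: describe a *specific* object as a probability
    distribution over property values, given one sensor reading."""
    unnorm = {v: prior[v] * likelihood[v][reading] for v in prior}
    z = sum(unnorm.values())
    return {v: p / z for v, p in unnorm.items()}

# A long shadow shifts belief toward "large":
# P(large | long_shadow) = 0.3*0.7 / (0.7*0.2 + 0.3*0.7) = 0.6
dist = posterior("long_shadow")
```

This mirrors the abstract's point that both the generic rules and the resulting specific description are nondeterministic: each property keeps a full distribution rather than a single value.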
Song, Chen; Schwarzkopf, Dietrich Samuel; Kanai, Ryota; Rees, Geraint
The anatomy of cerebral cortex is characterized by two genetically independent variables, cortical thickness and cortical surface area, that jointly determine cortical volume. It remains unclear how cortical anatomy might influence neural response properties and whether such influences would have behavioral consequences. Here, we report that thickness and surface area of human early visual cortices exert opposite influences on neural population tuning with behavioral consequences for perceptual acuity. We found that visual cortical thickness correlated negatively with the sharpness of neural population tuning and the accuracy of perceptual discrimination at different visual field positions. In contrast, visual cortical surface area correlated positively with neural population tuning sharpness and perceptual discrimination accuracy. Our findings reveal a central role for neural population tuning in linking visual cortical anatomy to visual perception and suggest that a perceptually advantageous visual cortex is a thinned one with an enlarged surface area. PMID:25619658
Arrighi, Roberto; Marini, Francesco; Burr, David
Robust perception requires efficient integration of information from our various senses. Much recent electrophysiology points to neural areas responsive to multisensory stimulation, particularly audiovisual stimulation. However, psychophysical evidence for functional integration of audiovisual motion has been ambiguous. In this study we measure perception of an audiovisual form of biological motion, tap dancing. The results show that the audio tap information interacts with visual motion information, but only when in synchrony, demonstrating a functional combination of audiovisual information in a natural task. The advantage of multimodal combination was better than the optimal maximum likelihood prediction.
Welchman, Andrew E; Kourtzi, Zoe
The rapid advances in brain imaging technology over the past 20 years are affording new insights into cortical processing hierarchies in the human brain. These new data provide a complementary front in seeking to understand the links between perceptual and physiological states. Here we review some of the challenges associated with incorporating brain imaging data into such "linking hypotheses," highlighting some of the considerations needed in brain imaging data acquisition and analysis. We discuss work that has sought to link human brain imaging signals to existing electrophysiological data and opened up new opportunities in studying the neural basis of complex perceptual judgments. We consider a range of approaches when using human functional magnetic resonance imaging to identify brain circuits whose activity changes in a similar manner to perceptual judgments and illustrate these approaches by discussing work that has studied the neural basis of 3D perception and perceptual learning. Finally, we describe approaches that have sought to understand the information content of brain imaging data using machine learning and work that has integrated multimodal data to overcome the limitations associated with individual brain imaging approaches. Together these approaches provide an important route in seeking to understand the links between physiological and psychological states.
Gilaie-Dotan, Sharon; Doron, Ravid
Visual categories are associated with eccentricity biases in high-order visual cortex: faces and reading are associated with foveally-biased regions, while common objects and space are associated with mid- and peripherally-biased regions. As face perception and reading are among the most challenging human visual skills, and are often regarded as the peak achievements of a distributed neural network supporting common object perception, it is unclear why objects, which also rely on foveal vision to be processed, are associated with a mid-peripheral rather than a foveal bias. Here, we studied BN, a 9-year-old boy who has normal basic-level vision and abnormal (limited) oculomotor pursuit and saccades, and who shows developmental object and contour integration deficits but no indication of prosopagnosia. Although we cannot infer causation from the data presented here, we suggest that normal pursuit and saccades could be critical for the development of contour integration and object perception. While faces, and perhaps reading, when fixated upon take up a small portion of the central visual field and require only small eye movements to be properly processed, common objects typically prevail in the mid-peripheral visual field and rely on longer-distance voluntary eye movements, such as saccades, to be brought to fixation. While retinal information feeds into early visual cortex in an eccentricity-ordered manner, we hypothesize that propagation of non-foveal information to mid- and high-order visual cortex critically relies on circuitry involving eye movements. Limited or atypical eye movements, as in the case of BN, may hinder normal information flow to mid-eccentricity-biased high-order visual cortex, adversely affecting its development and consequently inducing visual perceptual deficits predominantly for categories associated with these regions.
Fan, J; Dai, W; Liu, F; Wu, J
Based on 69 scanned Chinese male subjects and 25 Caucasian male subjects, the present study showed that the volume height index (VHI) is the most important visual cue to male body attractiveness of young Chinese viewers among the many body parameters examined in the study. VHI alone can explain ca. 73% of the variance of male body attractiveness ratings. The effect of VHI can be fitted with two half bell-shaped exponential curves with an optimal VHI at 17.6 l m(-2) and 18.0 l m(-2) for female raters and male raters, respectively. In addition to VHI, other body parameters or ratios can have small, but significant effects on male body attractiveness. Body proportions associated with fitness will enhance male body attractiveness. It was also found that there is an optimal waist-to-hip ratio (WHR) at 0.8 and deviations from this optimal WHR reduce male body attractiveness.
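The two half bell-shaped exponential curves described above can be written as a piecewise function sharing a peak at the optimal VHI; in the sketch below the peak location follows the reported optimum for male raters, but the width parameters are invented for illustration:

```python
import numpy as np

def attractiveness(vhi, vhi_opt=18.0, width_low=2.0, width_high=3.0):
    """Two half bell-shaped exponential curves joined at the optimum.

    Attractiveness peaks (value 1.0) at vhi_opt and falls off at
    different rates on either side; the width values here are
    illustrative, not fitted parameters from the study.
    """
    vhi = np.asarray(vhi, dtype=float)
    width = np.where(vhi < vhi_opt, width_low, width_high)
    return np.exp(-((vhi - vhi_opt) ** 2) / (2 * width ** 2))

# Ratings peak at the optimal VHI and decline on both sides
ratings = attractiveness([14.0, 18.0, 22.0])
print(ratings)
```

With different widths on the two sides, deviations below and above the optimum are penalized asymmetrically, which is what a pair of half bell-shaped curves captures.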
Su, Yi-Huang; Salazar-López, Elvira
Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.
Samuel, Arthur G.; Lieblich, Jerrold
The speech signal is often badly articulated, and heard under difficult listening conditions. To deal with these problems, listeners make use of various types of context. In the current study, we examine a type of context that in previous work has been shown to affect how listeners report what they hear: visual speech (i.e., the visible movements of the speaker's articulators). Despite the clear utility of this type of context under certain conditions, prior studies have shown that visually driven phonetic percepts (via the "McGurk" effect) are not "real" enough to affect perception of later-occurring speech; such percepts have not produced selective adaptation effects. This failure contrasts with successful adaptation by sounds that are generated by lexical context (the word that a sound occurs within). We demonstrate here that this dissociation is robust, leading to the conclusion that visual and lexical contexts operate differently. We suggest that the dissociation reflects the dual nature of speech as both a perceptual object and a linguistic object. Visual speech seems to contribute directly to the computations of the perceptual object but not the linguistic one, while lexical context is used in both types of computations.
Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC
Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development.
Leberl, F. W.
The geometry of the radar stereo model and factors affecting visual radar stereo perception are reviewed. Limits to the vertical exaggeration factor of stereo radar are defined. Radar stereo model accuracies are analyzed with respect to coordinate errors caused by errors of radar sensor position and of range, and with respect to errors of coordinate differences, i.e., cross-track distances and height differences.
Krahmer, Emiel; Swerts, Marc
Speakers employ acoustic cues (pitch accents) to indicate that a word is important, but may also use visual cues (beat gestures, head nods, eyebrow movements) for this purpose. Even though these acoustic and visual cues are related, the exact nature of this relationship is far from well understood. We investigate whether producing a visual beat…
Liu, Jianli; Lughofer, Edwin; Zeng, Xianyi
Modeling human aesthetic perception of visual textures is important and valuable in numerous industrial domains, such as product design, architectural design, and decoration. Based on results from a semantic differential rating experiment, we modeled the relationship between low-level basic texture features and the aesthetic properties involved in human aesthetic texture perception. First, we compute basic texture features from textural images using four classical methods. These features are neutral, objective, and independent of the socio-cultural context of the visual textures. Then, we conduct a semantic differential rating experiment to collect from evaluators their aesthetic perceptions of selected textural stimuli. In the semantic differential rating experiment, eight pairs of aesthetic properties are chosen, which are strongly related to the socio-cultural context of the selected textures and to human emotions. They are easily understood and connected to everyday life. We propose a hierarchical feed-forward layer model of aesthetic texture perception and assign the eight pairs of aesthetic properties to different layers. Finally, we describe the generation of multiple linear and non-linear regression models for aesthetic prediction by taking dimensionality-reduced texture features and aesthetic properties of visual textures as dependent and independent variables, respectively. Our experimental results indicate that the relationships between each layer and its neighbors in the hierarchical feed-forward layer model of aesthetic texture perception can be fitted well by linear functions, and the models thus generated can successfully bridge the gap between computational texture features and aesthetic texture properties.
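The linear layer-to-layer mappings reported above can be sketched as an ordinary least-squares regression from texture features to an aesthetic rating; everything below is synthetic stand-in data, not the paper's features or fitted models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 50 textures x 4 basic texture features and one
# aesthetic rating per texture, generated from a nearly linear rule.
X = rng.normal(size=(50, 4))
true_w = np.array([0.8, -0.5, 0.3, 0.0])
y = X @ true_w + 0.05 * rng.normal(size=50)

# Multiple linear regression: prepend an intercept column and solve by
# least squares, as in the linear layer-to-layer models.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

# Coefficient of determination (R^2) of the fit
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(round(r2, 3))  # close to 1.0 on this nearly linear synthetic data
```

A non-linear counterpart would swap the least-squares solve for, e.g., a polynomial basis expansion or a small regression network, while keeping the same features-to-ratings framing.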
Wade, Nicholas J
Pictures deceive the brain: they provide distillations of objects or ideas into simpler shapes. They create the impression of representing that which cannot be presented. Even at the level of the photograph, the links between pictorial images (the contents of pictures) and objects are tenuous. The dimensions of depth and motion are missing from a pictorial image, and this alone introduces all manner of potential ambiguities. The history of art can be considered as exploring the missing link between image and object. Pictorial images can be spatialized or stylized; spatialized images (like photographs) generally share some of the projective characteristics of the object represented. Written words are also images but they do not resemble the objects they represent--they are stylized or conventional. Pictures can also be illusions--deceptions of vision so that what is seen does not necessarily correspond to what is physically presented. Most of visual science is now concerned with pictorial images--two-dimensional displays on computer monitors. Is vision now the science of deception?
Kim, Eunhwi; Park, Young-Kyung; Byun, Yong-Hyun; Park, Mi-Sook; Kim, Hong
This study investigated age-related changes of cognitive function in Korean adults using the Korean-Developmental Test of Visual Perception-2 (K-DTVP-2) and the Visual Motor Integration-3rd Revision (VMI-3R) test, and determined the main factors influencing visual perception (VP) and visual-motor integration (VMI) in older adults. For this research, 139 adults for the K-DTVP-2 and 192 adults for the VMI-3R, from a total of 283 participants, were randomly and separately recruited in a province in Korea. The present study showed that the mean scores of the K-DTVP-2 and VMI-3R in 10-yr age increments significantly decreased as age increased (K-DTVP-2, F = 41.120, P < 0.001; VMI-3R, F = 16.583, P < 0.001). The mean scores of the VMI-3R and K-DTVP-2 were significantly decreased in participants in their 50s compared to those in their 20s (P < 0.05). Age (t = -9.130, P < 0.001), gender (t = 3.029, P = 0.003), and the presence of disease (t = -2.504, P = 0.013) were significant factors affecting K-DTVP-2 score. On the other hand, age (t = -6.300, P < 0.001) was the only significant factor affecting VMI-3R score. K-DTVP-2 score (standardized β = -0.611) decreased more sensitively with aging than VMI-3R score (standardized β = -0.467). The two measurements had a significant positive correlation (r = 0.855, P < 0.001). In conclusion, it can be suggested that VP and VMI should be checked regularly from an individual's 50s, which is a critical period for detecting age-related cognitive decline. Both the K-DTVP-2 and VMI-3R could be used for determining the level of cognitive deficit due to aging.
Staunaes, Dorthe; Kofoed, Jette
Digital video cameras, smartphones, internet and iPads are increasingly used as visual research methods with the purpose of creating an affective corpus of data. Such visual methods are often combined with interviews or observations. Not only are visual methods part of the used research methods, the visual products are used as requisites in…
opposite extreme of stressing the wholistic features of perception to the detriment of the possibility of partitioning the perceptual process...occurring at the same or higher levels (using 'levels' in the rather careful sense that was laid out in the previous section). This analysis will...version of the template matching paradigm, which holds that the primary task of a visual system is recognition, i.e. matching an image (of an
Mrowka, Ralf; Freytag, Alexander; Reuter, Stefanie
The human visual perception system is complex and involves a considerable portion of the brain's cortex. Hence, the wish to understand complex neuronal function is obvious, and the idea of modelling it by means of artificial neuronal networks may have been born at the time when the first computational machines were constructed (Alan Turing, Intelligent Machinery, 1948, http://www.npl.co.uk/about/history/notable-individuals/turing/intelligent-machinery).
Wohlschläger, Afra M.; Glim, Sarah; Shao, Junming; Draheim, Johanna; Köhler, Lina; Lourenço, Susana; Riedl, Valentin; Sorg, Christian
The human brain’s ongoing activity is characterized by intrinsic networks of coherent fluctuations, measured for example with correlated functional magnetic resonance imaging signals. So far, however, the brain processes underlying this ongoing blood oxygenation level dependent (BOLD) signal orchestration and their direct relevance for human behavior are not sufficiently understood. In this study, we address the question of whether and how ongoing BOLD activity within intrinsic occipital networks impacts on conscious visual perception. To this end, backwardly masked targets were presented in participants’ left visual field only, leaving the ipsi-lateral occipital areas entirely free from direct effects of task throughout the experiment. Signal time courses of ipsi-lateral BOLD fluctuations in visual areas V1 and V2 were then used as proxies for the ongoing contra-lateral BOLD activity within the bilateral networks. Magnitude and phase of these fluctuations were compared in trials with and without conscious visual perception, operationalized by means of subjective confidence ratings. Our results show that ipsi-lateral BOLD magnitudes in V1 were significantly higher at times of peak response when the target was perceived consciously. A significant difference between conscious and non-conscious perception with regard to the pre-target phase of an intrinsic-frequency regime suggests that ongoing V1 fluctuations exert a decisive impact on the access to consciousness already before stimulation. Both effects were absent in V2. These results thus support the notion that ongoing slow BOLD activity within intrinsic networks covering V1 represents localized processes that modulate the degree of readiness for the emergence of visual consciousness.
of target...pretraining trials given prior to this experiment, run immediately after Experiment I, using the same...cue would still require the retrieval of the to-be-attended location from memory. But there is no longer much reason to suspect that the time...attention. The Journal of Neuroscience, 4, 1863-1874. Reeves, A., & Sperling, G. (in press). Attention gating in short-term visual memory.
Gilaie-Dotan, Sharon; Saygin, Ayse Pinar; Lorenzi, Lauren J; Rees, Geraint; Behrmann, Marlene
Identifying the movements of those around us is fundamental for many daily activities, such as recognizing actions, detecting predators, and interacting with others socially. A key question concerns the neurobiological substrates underlying biological motion perception. Although the ventral "form" visual cortex is standardly activated by biologically moving stimuli, whether these activations are functionally critical for biological motion perception or are epiphenomenal remains unknown. To address this question, we examined whether focal damage to regions of the ventral visual cortex, resulting in significant deficits in form perception, adversely affects biological motion perception. Six patients with damage to the ventral cortex were tested with sensitive point-light display paradigms. All patients were able to recognize unmasked point-light displays and their perceptual thresholds were not significantly different from those of three different control groups, one of which comprised brain-damaged patients with spared ventral cortex (n > 50). Importantly, these six patients performed significantly better than patients with damage to regions critical for biological motion perception. To assess the necessary contribution of different regions in the ventral pathway to biological motion perception, we complement the behavioral findings with a fine-grained comparison between the lesion location and extent, and the cortical regions standardly implicated in biological motion processing. This analysis revealed that the ventral aspects of the form pathway (e.g., fusiform regions, ventral extrastriate body area) are not critical for biological motion perception. We hypothesize that the role of these ventral regions is to provide enhanced multiview/posture representations of the moving person rather than to represent biological motion perception per se.
Lim, Jongil; Chang, Seung Ho; Lee, Jihyun; Kim, Kijeong
Mobile phone use while walking can cause dual-task interference and increases safety risks by increasing attentional and cognitive demands. While the interference effect on cognitive function has been examined extensively, how perception of the environment and walking dynamics are affected by mobile phone use while walking is not well understood. The amount of visual information loss and its consequent impact on dynamic walking stability was examined in this study. Young adults (mean, 20.3 years) volunteered and walked on a treadmill while texting and attending to visual tasks simultaneously. Performance of visual task, field of regard loss, and margin of stability under dual-task conditions were compared with those of single-task conditions (i.e., visual task only). The results revealed that the size of visual field and visual acuity demand were varied across the visual task conditions. Approximately half of the visual cues provided during texting while walking were not perceived as compared to the visual task only condition. The field of regard loss also increased with increased dual-task cost of mobile phone use. Dynamic walking stability, however, showed no significant differences between the conditions. Taken together, the results demonstrate that the loss of situational awareness is unavoidable and occurs simultaneously with decrements in concurrent task performance. The study indicates the importance of considering the nature of attentional resources for the studies in dual-task paradigm and may provide practical information to improve the safe use of mobile phones while walking.
Moritz, Steffen; Göritz, Anja S; Van Quaquebeke, Niels; Andreou, Christina; Jungclaussen, David; Peters, Maarten J V
Studies revealed that patients with paranoid schizophrenia display overconfidence in errors for memory and social cognition tasks. The present investigation examined whether this pattern holds true for visual perception tasks. Nonclinical participants were recruited via an online panel. Individuals were asked to complete a questionnaire that included the Paranoia Checklist and were then presented with 24 blurry pictures; half contained a hidden object while the other half showed snowy (visual) noise. Participants were asked to state whether the visual items contained an object and how confident they were in their judgment. Data from 1966 individuals were included following a conservative selection process. Participants high on core paranoid symptoms showed a poor calibration of confidence for correct versus incorrect responses. In particular, participants high on paranoia displayed overconfidence in incorrect responses and demonstrated a 20% error rate for responses made with high confidence compared to a 12% error rate in participants with low paranoia scores. Interestingly, paranoia scores declined after performance of the task. For the first time, overconfidence in errors was demonstrated among individuals with high levels of paranoia using a visual perception task, tentatively suggesting it is a ubiquitous phenomenon. In view of the significant decline in paranoia across time, bias modification programs may incorporate items such as the one employed here to teach patients with clinical paranoia the fallibility of human cognition, which may foster subsequent symptom improvement.
A television program employing a visual metaphor should be an effective instructional tool. Concrete imagery should make the metaphor more memorable and the topic more comprehensible. Splitting the metaphor between audio and video channels should make a strongly unified message, because the audience would have to compare the verbal and visual…
Keetels, Mirjam; Vroomen, Jean
The authors examined the effects of a task-irrelevant sound on visual processing. Participants were presented with revolving clocks at or around central fixation and reported the hand position of a target clock at the time an exogenous cue (1 clock turning red) or an endogenous cue (a line pointing toward 1 of the clocks) was presented. A…
Bestelmeyer, Patricia E G; Rouger, Julien; DeBruine, Lisa M; Belin, Pascal
Previous research has demonstrated perceptual aftereffects for emotionally expressive faces, but the extent to which they can also be obtained in a different modality is unknown. In two experiments we show for the first time that adaptation to affective, non-linguistic vocalisations elicits significant auditory aftereffects. Adaptation to angry vocalisations caused voices drawn from an anger-fear morphed continuum to be perceived as less angry and more fearful, while adaptation to fearful vocalisations elicited opposite aftereffects (Experiment 1). We then tested the link between these aftereffects and the underlying acoustics by using caricatured adaptors. Although caricatures exaggerated the acoustical and affective properties of the vocalisations, the caricatured adaptors resulted in aftereffects which were comparable to those obtained with natural vocalisations (Experiment 2). Our findings suggest that these aftereffects cannot be solely explained by low-level adaptation to acoustical characteristics of the adaptors but are likely to depend on higher-level adaptation of neural representations of vocal affect.
Miller, Luke E; Longo, Matthew R; Saygin, Ayse P
Brief use of a tool recalibrates multisensory representations of the user's body, a phenomenon called tool embodiment. Despite two decades of research, little is known about its boundary conditions. It has been widely argued that embodiment requires active tool use, suggesting a critical role for somatosensory and motor feedback. The present study used a visual illusion to cast doubt on this view. We used a mirror-based setup to induce a visual experience of tool use with an arm that was in fact stationary. Following illusory tool use, tactile perception was recalibrated on this stationary arm, and with equal magnitude as physical use. Recalibration was not found following illusory passive tool holding, and could not be accounted for by sensory conflict or general interhemispheric plasticity. These results suggest visual tool-use signals play a critical role in driving tool embodiment.
Wells, James W. (Inventor); Mc Kay, Neil David (Inventor); Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor)
A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.
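The patent abstract above does not publish its adaptation algorithm, but the exposure-time step it describes can be sketched as a simple proportional feedback loop on mean image brightness; the target, gain, and range values below are invented for illustration:

```python
def adapt_exposure(exposure_ms, mean_brightness,
                   target=0.5, gain=0.5,
                   min_ms=1.0, max_ms=100.0):
    """One iteration of proportional exposure control.

    Mean brightness is normalized to [0, 1]; the exposure time is
    nudged toward the value expected to bring brightness to `target`,
    and clamped to the camera's supported range so features are not
    lost at either extreme of the lighting conditions.
    """
    error = target - mean_brightness
    new_exposure = exposure_ms * (1.0 + gain * error / max(target, 1e-9))
    return min(max(new_exposure, min_ms), max_ms)

# Under-exposed image (dark scene): exposure time increases.
print(adapt_exposure(10.0, mean_brightness=0.2))
# Over-exposed image (bright scene): exposure time decreases.
print(adapt_exposure(10.0, mean_brightness=0.9))
```

Run once per captured frame, such a loop converges toward a mid-range brightness, which is one plausible way to "automatically adapt an exposure time ... under the threshold lighting conditions."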
Leeds, Daniel D; Pyles, John A; Tarr, Michael J
The mid- and high-level visual properties supporting object perception in the ventral visual pathway are poorly understood. In the absence of well-specified theory, many groups have adopted a data-driven approach in which they progressively interrogate neural units to establish each unit's selectivity. Such methods are challenging in that they require search through a wide space of feature models and stimuli using a limited number of samples. To more rapidly identify higher-level features underlying human cortical object perception, we implemented a novel functional magnetic resonance imaging method in which visual stimuli are selected in real-time based on BOLD responses to recently shown stimuli. This work was inspired by earlier primate physiology work, in which neural selectivity for mid-level features in IT was characterized using a simple parametric approach (Hung et al., 2012). To extend such work to human neuroimaging, we used natural and synthetic object stimuli embedded in feature spaces constructed on the basis of the complex visual properties of the objects themselves. During fMRI scanning, we employed a real-time search method to control continuous stimulus selection within each image space. This search was designed to maximize neural responses across a pre-determined 1 cm(3) brain region within ventral cortex. To assess the value of this method for understanding object encoding, we examined both the behavior of the method itself and the complex visual properties the method identified as reliably activating selected brain regions. We observed: (1) Regions selective for both holistic and component object features and for a variety of surface properties; (2) Object stimulus pairs near one another in feature space that produce responses at the opposite extremes of the measured activity range. Together, these results suggest that real-time fMRI methods may yield more widely informative measures of selectivity within the broad classes of visual features
King, Daniel J; Hodgekins, Joanne; Chouinard, Philippe A; Chouinard, Virginie-Anne; Sperandio, Irene
Specific abnormalities of vision in schizophrenia have been observed to affect high-level and some low-level integration mechanisms, suggesting that people with schizophrenia may experience anomalies across different stages in the visual system affecting either early or late processing or both. Here, we review the research into visual illusion perception in schizophrenia and the issues which previous research has faced. One general finding that emerged from the literature is that those with schizophrenia are mostly immune to the effects of high-level illusory displays, but this effect is not consistent across all low-level illusions. The present review suggests that this resistance is due to the weakening of top-down perceptual mechanisms and may be relevant to the understanding of symptoms of visual distortion rather than hallucinations as previously thought.
Beutter, B. R.; Stone, L. S.
Pursuit and perception both require accurate information about the motion of objects. Recovering the motion of objects by integrating the motion of their components is a difficult visual task. Successful integration produces coherent global object motion, while a failure to integrate leaves the incoherent local motions of the components unlinked. We compared the ability of perception and pursuit to perform motion integration by measuring direction judgments and the concomitant eye-movement responses to line-figure parallelograms moving behind stationary rectangular apertures. The apertures were constructed such that only the line segments corresponding to the parallelogram's sides were visible; thus, recovering global motion required the integration of the local segment motion. We investigated several potential motion-integration rules by using stimuli with different object, vector-average, and line-segment terminator-motion directions. We used an oculometric decision rule to directly compare direction discrimination for pursuit and perception. For visible apertures, the percept was a coherent object, and both the pursuit and perceptual performance were close to the object-motion prediction. For invisible apertures, the percept was incoherently moving segments, and both the pursuit and perceptual performance were close to the terminator-motion prediction. Furthermore, both psychometric and oculometric direction thresholds were much higher for invisible apertures than for visible apertures. We constructed a model in which both perception and pursuit are driven by a shared motion-processing stage, with perception having an additional input from an independent static-processing stage. Model simulations were consistent with our perceptual and oculomotor data. Based on these results, we propose the use of pursuit as an objective and continuous measure of perceptual coherence. Our results support the view that pursuit and perception share a common motion…
Fink, Bernhard; Neuser, Frauke; Deloux, Gwenelle; Röder, Susanne; Matts, Paul J
Female hair color is thought to influence physical attractiveness, and although there is some evidence for this assertion, research has not yet addressed whether and how physical damage affects the perception of female hair color. Here we investigate whether people are sensitive (in terms of visual attention and age, health, and attractiveness perception) to subtle differences in images of natural and colored hair before and after physical damage. We tracked the eye-gaze of 50 men and 50 women aged 31-50 years whilst they viewed randomized pairs of images of 20 natural and 20 colored hair tresses, each pair displaying the same tress before and after controlled cuticle damage. The hair images were then rated for perceived health, attractiveness, and age. Undamaged versions of natural and colored hair were perceived as significantly younger, healthier, and more attractive than corresponding damaged versions. Visual attention to images of undamaged colored hair was significantly higher compared with their damaged counterparts, while in natural hair, the opposite pattern was found. We argue that the divergence in visual attention to undamaged colored female hair and damaged natural female hair, and the associated ratings, is due to differences in social perception, and discuss the source of the apparent visual difference between undamaged and damaged hair.
Charalampidou, S; Nolan, J; Ormonde, G O; Beatty, S
Purpose: The purpose of this study was to conduct a questionnaire-based survey of subjective visual perceptions induced by intravitreous (IVT) injections of therapeutic agents. Patients and methods: Patients undergoing an IVT injection of ranibizumab, pegaptanib sodium, or triamcinolone acetonide were administered a questionnaire in the immediate post-injection period and at 2 weeks of follow-up. Results: In the immediate post-injection period (75 IVT injections, 75 eyes, 75 patients), lights and floaters were reported after 20 (27%) and 24 (32%) IVT injections, respectively. In comparison, at the 2-week follow-up, the incidence of reported lights (11; 15%) was similar (P>0.05), but the incidence of reported floaters was higher (48; 64%; P=0.00). Subgroup analysis for various injection subgroups (no previous injection vs previous injection(s) in the study eye; injections in study eyes with good VA (logarithm of minimal angle of resolution [logMAR] ≤0.3) vs moderate VA (0.7…
Hosseini, Seyed Mahmood; Rezaei, Rohollah
This descriptive survey research was undertaken to design appropriate programs for the creation of a positive perception of nanotechnology among its intended beneficiaries. In order to do that, the factors affecting positive perceptions were defined. A stratified random sample of 278 science board members was selected out of 984 researchers who were working in 22 National Agricultural Research Institutions (NARIs). Data were collected by using a mailed questionnaire. The descriptive results revealed that more than half of the respondents had "low" or "very low" familiarity with nanotechnology. Regression analysis indicated that the perceptions of Iranian NARI science board members towards nanotechnology were explained by three variables: the level of their familiarity with emerging applications of nanotechnology in agriculture, the level of their familiarity with nanotechnology, and their work experience. The findings of this study can contribute to a better understanding of the present situation of the development of nanotechnology and the planning of appropriate programs for creating a positive perception of nanotechnology.
Medina, Jared; Drebing, Daniel E.; Hamilton, Roy H.; Coslett, H. Branch
Recent studies have found preferential responses for brief, transient visual stimuli near the hands, suggesting a link between magnocellular visual processing and peripersonal representations. We report an individual with a right hemisphere lesion whose illusory phantom percepts may be attributable to an impairment in the peripersonal system specific to transient visual stimuli. When presented with a single, brief (250 ms) visual stimulus to her ipsilesional side, she reported visual percepts on both sides – synchiria. These contralesional phantoms were significantly more frequent when visual stimuli were presented on the hands versus off the hands. We next manipulated stimulus duration to examine the relationship between these phantom percepts and transient visual processing. We found a significant position by duration interaction, with substantially more phantom synchiric percepts on the hands for brief compared to sustained stimuli. This deficit provides novel evidence both for preferential processing of transient visual stimuli near the hands, and for mechanisms that, when damaged, result in phantom percepts.
Washington County Public Schools, Washington, PA.
Symptoms displayed by primary age children with learning disabilities are listed; perceptual handicaps are explained. Activities are suggested for developing visual perception and perception involving motor activities. Also suggested are activities to develop body concept, visual discrimination and attentiveness, visual memory, and figure ground…
Busch, Niko A; Dubois, Julien; VanRullen, Rufin
Oscillations are ubiquitous in electrical recordings of brain activity. While the amplitude of ongoing oscillatory activity is known to correlate with various aspects of perception, the influence of oscillatory phase on perception remains unknown. In particular, since phase varies on a much faster timescale than the more sluggish amplitude fluctuations, phase effects could reveal the fine-grained neural mechanisms underlying perception. We presented brief flashes of light at the individual luminance threshold while EEG was recorded. Although the stimulus on each trial was identical, subjects detected approximately half of the flashes (hits) and entirely missed the other half (misses). Phase distributions across trials were compared between hits and misses. We found that shortly before stimulus onset, each of the two distributions exhibited significant phase concentration, but at different phase angles. This effect was strongest in the theta and alpha frequency bands. In this time-frequency range, oscillatory phase accounted for at least 16% of variability in detection performance and allowed the prediction of performance on the single-trial level. This finding indicates that the visual detection threshold fluctuates over time along with the phase of ongoing EEG activity. The results support the notion that ongoing oscillations shape our perception, possibly by providing a temporal reference frame for neural codes that rely on precise spike timing.
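The core analysis here, comparing the concentration of prestimulus phases across hit and miss trials, can be sketched with basic circular statistics. The illustration below uses simulated phase angles; the von Mises parameters are invented stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated prestimulus phases (radians) for hit and miss trials.
# Hits cluster near 0 and misses near pi, mimicking the reported
# effect that the two outcome classes concentrate at different angles.
hits   = rng.vonmises(mu=0.0,   kappa=1.0, size=200)
misses = rng.vonmises(mu=np.pi, kappa=1.0, size=200)

def phase_concentration(phases):
    """Resultant vector length: 0 = uniform phases, 1 = identical phases."""
    return np.abs(np.mean(np.exp(1j * phases)))

def mean_phase(phases):
    """Circular mean angle of the phase distribution."""
    return np.angle(np.mean(np.exp(1j * phases)))

r_hit, r_miss = phase_concentration(hits), phase_concentration(misses)
```

Significant concentration in both distributions, but at different mean angles, is exactly the signature described in the abstract: detection covaries with the phase of the ongoing oscillation.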
Ai, Lei; Ro, Tony
Previous studies have shown that neural oscillations in the 8- to 12-Hz range influence sensory perception. In the current study, we examined whether both the power and phase of these mu/alpha oscillations predict successful conscious tactile perception. Near-threshold tactile stimuli were applied to the left hand while electroencephalographic (EEG) activity was recorded over the contralateral right somatosensory cortex. We found a significant inverted U-shaped relationship between prestimulus mu/alpha power and detection rate, suggesting that there is an intermediate level of alpha power that is optimal for tactile perception. We also found a significant difference in phase angle concentration at stimulus onset that predicted whether the upcoming tactile stimulus was perceived or missed. As has been shown in the visual system, these findings suggest that these mu/alpha oscillations measured over somatosensory areas exert a strong inhibitory control on tactile perception and that pulsed inhibition by these oscillations shapes the state of brain activity necessary for conscious perception. They further suggest that these common phasic processing mechanisms across different sensory modalities and brain regions may reflect a common underlying encoding principle in perceptual processing that leads to momentary windows of perceptual awareness.
Grishin, Vladimir; Kovalerchuk, Boris
Although shape perception is the brain's main channel for taking in information, it has been poorly exploited by recent visualization techniques, and the difficulty of modeling it remains a key obstacle for visualization theory and application. Existing experimental estimates of shape-perception capabilities were obtained at low data dimensionality and were usually not tied to particular data structures. Here, a more applied approach to detecting specific data structures with shape displays is considered, through an analytical and experimental comparison of the currently popular Parallel Coordinates (PCs), i.e. 2D Cartesian displays of data vectors, with polar displays known as stars. Advantages of stars over PCs in terms of the Gestalt laws are shown. Psychological experiments on the detection of hyper-tube structures in data spaces with dimension up to 100-200, and in their subspaces, showed roughly twice faster feature selection and classification with stars than with PCs. This demonstrates large reserves for enhancing visualization relative to many recent techniques, which usually focus on the analysis of only a few data attributes.
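For concreteness, a star display maps each attribute of a data vector to a spoke at a fixed angle and connects the spoke endpoints into a polygon. A minimal sketch of that mapping follows; the function name and the per-vector rescaling are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def star_glyph(vector, r_max=1.0):
    """Map a 1-D data vector to the (x, y) vertices of a star glyph.

    Attribute i becomes a spoke at angle 2*pi*i/n; the value, rescaled
    to [0, r_max] over the vector's own range, sets the spoke length.
    The glyph outline is the polygon through the spoke endpoints.
    """
    v = np.asarray(vector, dtype=float)
    lo, hi = v.min(), v.max()
    if hi > lo:
        radii = r_max * (v - lo) / (hi - lo)
    else:                       # constant vector: draw a regular polygon
        radii = np.full_like(v, r_max)
    angles = 2 * np.pi * np.arange(len(v)) / len(v)
    return np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])

# A 4-attribute vector: the largest attribute gets the longest spoke.
pts = star_glyph([1.0, 3.0, 2.0, 1.0])
```

A parallel-coordinates display of the same vector would instead plot the values at evenly spaced vertical axes; the star's closed contour is what lets Gestalt grouping operate on its overall shape.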
Crane, Benjamin Thomas
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested that the reference frames remain separate even at higher levels of processing, but has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and the coordinate-system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.
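The population-vector-decoder idea, reading a heading out of a bank of direction-tuned units as the angle of the response-weighted vector sum, can be illustrated as below. The cosine tuning and the lateral-gain parameter are a toy stand-in for the authors' fitted two-degree-of-freedom model, not the model itself.

```python
import numpy as np

def pvd_heading(stimulus_deg, n_units=72, lateral_gain=1.0):
    """Toy population vector decoder: cosine-tuned units whose preferred
    directions tile 360 deg; the decoded heading is the angle of the
    response-weighted sum of preferred-direction vectors. lateral_gain
    scales sensitivity to lateral motion, standing in for one of the
    model's two degrees of freedom."""
    prefs = np.deg2rad(np.arange(n_units) * 360.0 / n_units)
    s = np.deg2rad(stimulus_deg)
    drive = np.cos(prefs - s)                          # cosine tuning curves
    vx = np.sum(drive * np.cos(prefs))                 # fore-aft component
    vy = lateral_gain * np.sum(drive * np.sin(prefs))  # lateral component
    return np.rad2deg(np.arctan2(vy, vx)) % 360.0

# With balanced gain the decoder is unbiased; boosting lateral
# sensitivity pulls an oblique heading toward the lateral axis.
unbiased = pvd_heading(30.0)
biased = pvd_heading(30.0, lateral_gain=2.0)
```

Adding a fixed angular offset to `prefs` would model the coordinate-system offset term, shifting every decoded heading in the same way the gaze manipulations shifted the visual heading percepts.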
Pisarchik, Alexander N.; Bashkirtseva, Irina; Ryashko, Lev
Modern trends in physiology, psychology and cognitive neuroscience suggest that noise is an essential component of brain functionality and self-organization. With adequate noise the brain as a complex dynamical system can easily access different ordered states and improve signal detection for decision-making by preventing deadlocks. Using a stochastic sensitivity function approach, we analyze how sensitive equilibrium points are to Gaussian noise in a bistable energy model often used for qualitative description of visual perception. The probability distribution of noise-induced transitions between two coexisting percepts is calculated at different noise intensity and system stability. Stochastic squeezing of the hysteresis range and its transition from positive (bistable regime) to negative (intermittency regime) are demonstrated as the noise intensity increases. The hysteresis is more sensitive to noise in the system with higher stability.
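The kind of bistable energy model described here can be simulated directly: a state variable in a double-well potential driven by Gaussian noise, with each well standing for one percept. The following sketch uses a generic quartic potential and invented parameters, not the paper's specific model.

```python
import numpy as np

rng = np.random.default_rng(1)

def count_switches(sigma, n_steps=200_000, dt=0.01):
    """Euler-Maruyama simulation of dx = (x - x**3) dt + sigma dW,
    i.e. overdamped motion in the double-well potential
    V(x) = x**4/4 - x**2/2, whose minima at x = -1 and x = +1
    stand for the two percepts. Counts noise-induced switches,
    using a +/-0.5 hysteresis band to avoid counting jitter."""
    x, well, switches = 1.0, 1, 0
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    for n in noise:
        x += (x - x**3) * dt + n
        if well == 1 and x < -0.5:
            well, switches = -1, switches + 1
        elif well == -1 and x > 0.5:
            well, switches = 1, switches + 1
    return switches

# Stronger noise produces more frequent transitions between percepts.
weak, strong = count_switches(sigma=0.3), count_switches(sigma=0.7)
```

The steep dependence of the switching rate on noise intensity (a Kramers-type escape process) is what underlies the stochastic squeezing of the hysteresis range described in the abstract.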
Cloquell-Ballester, Vicente-Agustin; Torres-Sibille, Ana del Carmen; Cloquell-Ballester, Victor-Andres; Santamarina-Siurana, Maria Cristina
The objective of this investigation is to evaluate how visual perception varies as the rural landscape is altered by human interventions of varying character. An experiment is carried out using Semantic Differential Analysis to analyse the effect of the character and the type of the intervention on perception. Interventions are divided into 'elements of permanent industrial character', 'elements of permanent rural character', and 'elements of temporary character', and these categories are sub-divided into smaller groups according to the type of development. To increase the reliability of the results, the Intraclass Correlation Coefficient tool is applied to validate the semantic space of the perceptual responses and to determine the number of subjects required for a reliable evaluation of the scenes.
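The Intraclass Correlation Coefficient used for validation can be computed from the two-way ANOVA decomposition of the ratings matrix. Below is a sketch of one common variant, ICC(2,1) in the Shrout-Fleiss taxonomy; the paper does not state which form it used, so this is an assumption.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single
    rater, for an n x k matrix of n scenes rated by k subjects."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    rows = r.mean(axis=1)                 # per-scene means
    cols = r.mean(axis=0)                 # per-rater means
    ms_rows = k * np.sum((rows - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((cols - grand) ** 2) / (k - 1)
    resid = r - rows[:, None] - cols[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Three raters who agree up to a constant offset: high reliability.
base = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
scores = np.column_stack([base, base + 0.5, base - 0.2])
icc = icc_2_1(scores)
```

Because ICC(2,1) penalizes absolute disagreement, the constant offsets between raters reduce the coefficient slightly below 1 even though the rank ordering of scenes is identical.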
Lin, Chin-Chiuan; Huang, Kuo-Chen
An empirical study was carried out to examine the effects of color combination and ambient illumination on visual perception time using TFT-LCDs. The effect of color combination was broken down into two subfactors, luminance contrast ratio and chromaticity contrast. Analysis indicated that the luminance contrast ratio and ambient illumination had significant, though small, effects on visual perception. Visual perception time was better at a high luminance contrast ratio than at a low one. Visual perception time under normal ambient illumination was better than at other ambient illumination levels, although the stimulus color had a confounding effect on visual perception time. In general, visual perception time was better for the primary colors than for the middle-point colors. Based on the results, a normal ambient illumination level and a high luminance contrast ratio seem to be the optimal choices for the design of workplaces with TFT-LCD video display terminals.
Brown, Ted; Rodger, Sylvia
Visual perceptual skills of school-age children are often assessed using the Supplemental Developmental Test of Visual Perception of the Developmental Test of Visual-Motor Integration. The study purpose was to consider the construct validity of this test by evaluating its scalability (interval level measurement), unidimensionality, differential item functioning, and hierarchical ordering of its items. Visual perceptual performance scores from a sample of 356 typically developing children (171 boys and 185 girls ages 5 to 11 years) were used to complete a Rasch analysis of the test. Seven items were discarded for poor fit, while none of the items exhibited differential item functioning by sex. The construct validity, scalability, hierarchical ordering, and lack of differential item functioning requirements were met by the final test version. Since 7 test items did not fit the Rasch analysis specifications, the clinical value of the test is questionable and limited.
Akram, Muhammad Javaid; Raza, Syed Ahmad; Khaleeq, Abdur Rehman; Atika, Samrana
This study investigated the perception of principals on how the factors of subject mastery, teaching methodology, personal characteristics, and attitude toward students affect the performance of teachers at higher secondary level in the Punjab. All principals of higher secondary level in the Punjab were part of the population of the study. From…
McCullough, Stephen; Emmorey, Karen
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX…
Almerigogna, Jehanne; Ost, James; Akehurst, Lucy; Fluck, Mike
We conducted two studies to examine how interviewers' nonverbal behaviors affect children's perceptions and suggestibility. In the first study, 42 8- to 10-year-olds watched video clips showing an interviewer displaying combinations of supportive and nonsupportive nonverbal behaviors and were asked to rate the interviewer on six attributes (e.g.,…
Starks, Scott A.
The development of an approach to the visual perception of object surface information using laser range data in support of robotic grasping is discussed. This is a very important problem area in that a robot such as the EVAR must be able to formulate a grasping strategy on the basis of its knowledge of the surface structure of the object. A description of the problem domain is given, as well as a formulation of an algorithm which derives an object surface description adequate to support robotic grasping. The algorithm is based upon concepts of differential geometry, namely Gaussian and mean curvature.
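Gaussian and mean curvature of a range image can be obtained from finite-difference estimates of the surface derivatives via the standard Monge-patch formulas. The sketch below illustrates that computation under those assumptions; it is not the author's actual algorithm.

```python
import numpy as np

def surface_curvatures(z, spacing=1.0):
    """Gaussian (K) and mean (H) curvature of a height field z(x, y),
    using finite differences in the Monge-patch formulas:
    K = (zxx*zyy - zxy^2) / (1 + zx^2 + zy^2)^2
    H = ((1+zx^2)*zyy - 2*zx*zy*zxy + (1+zy^2)*zxx)
        / (2*(1 + zx^2 + zy^2)^(3/2))
    """
    zy, zx = np.gradient(z, spacing)      # axis 0 = y (rows), axis 1 = x
    zxy, zxx = np.gradient(zx, spacing)
    zyy, _ = np.gradient(zy, spacing)
    w = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / w**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * w**1.5)
    return K, H

# Sanity check on a paraboloid z = (x^2 + y^2)/2: at the apex K = H = 1.
s = 0.01
ax = np.arange(-50, 51) * s
x, y = np.meshgrid(ax, ax)
K, H = surface_curvatures((x**2 + y**2) / 2, spacing=s)
```

The signs of K and H classify each surface patch (peak, pit, ridge, valley, saddle, or plane), which is the kind of description a grasp planner can act on.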
Ryakhovsky, A N; Kalacheva, Ya A
The article presents a study of the impact of violations of aesthetic parameters, such as the inclination of the incisal line, the dislocation of the median interincisal line, and the width of the dental arch, on visual perception. The objective assessment data were compared with the subjective assessments of the respondents. It is shown that all the dependencies are linear and can be described by linear regression equations. A similar method can be used as an objective quantitative method for assessing the aesthetics of teeth in a smile before and after dental treatment.
Keywords: visual perception, factor analysis, nervous system, mathematical models, theory, experimental data, correlation techniques, physiology, anatomical models, interactions, decision making, environment, behavior
Harmening, Wolf M; Wagner, Hermann
Barn owls are nocturnal predators which have evolved specific sensory and morphological adaptations to a life in dim light. Here, some of the most fundamental properties of spatial vision in barn owls are reviewed. The eye with its tubular shape is rigidly integrated in the skull so that eye movements are very much restricted. The eyes are oriented frontally, allowing for a large binocular overlap. Accommodation, but not pupil dilation, is coupled between the two eyes. The retina is rod dominated and lacks a visible fovea. Retinal ganglion cells form a marked region of highest density that extends to a horizontally oriented visual streak. Behavioural visual acuity and contrast sensitivity are poor, although the optical quality of the ocular media is excellent. A low f-number allows high image quality at low light levels. Vernier acuity was found to be hyperacute. Owls have global stereopsis with hyperacute stereo-acuity thresholds. Neurons of the visual Wulst are sensitive to binocular disparities. Orientation-based saliency was demonstrated in a visual-search experiment, and higher cognitive abilities were shown when the owls were able to use illusory contours for object discrimination.
Klein, Sheryl; Guiltner, Val; Sollereder, Patti; Cui, Ying
Occupational therapists assess fine motor, visual motor, visual perception, and visual skill development, but knowledge of the relationships between scores on sensorimotor performance measures and handwriting legibility and speed is limited. Ninety-nine students in grades three to six with learning and/or behavior problems completed the Upper-Limb…
Pereverzeva, Maria; Murray, Scott O
Lightness perception is strongly dependent on context, including the relative luminance of the adjacent surfaces, spatial configuration, and luminance contrast. The latter, local luminance contrast, is thought to be processed in relatively early stages of visual processing and has been shown to play a crucial role in lightness perception. However, more global processing, such as perceptual grouping of surfaces, can also have an effect on lightness perception. An unresolved question, which we will address in this paper, is how global and local processes interact. We used a static gray disk embedded in a ring whose luminance was temporally modulated, which gives rise to a lightness effect dependent on local luminance contrast. We manipulated global image information by presenting the stimulus on backgrounds of different luminances. Surprisingly, the induction effect was greatly attenuated at a background luminance equal to that of the disk. We show that this finding cannot be explained by common lightness induction models. However, it is consistent with an effect of grouping on lightness perception and demonstrates how processes that are dependent on local edge information can be overridden by global image information.
Daly, Scott J.
Attending any conference on visual perception undoubtedly leaves one exposed to the work of Salvador Dali, whose extended phase of work exploring what he dubbed "the paranoiac-critical method" remains popular as an example of multiple percepts arising from conflicting input. While all visual art is intertwined with perceptual science, from convincing three-dimensional illusion during the Renaissance to the isolated visual illusions of Bridget Riley's Op-Art, direct statements about perception are rarely uttered by artists in recent times. However, there are still a number of artists working today whose work contains perceptual questions and exemplars that can be of interest to vision scientists and imaging engineers. This talk will start by sampling from Op-Art, which is most directly related to psychophysical test stimuli, and then will discuss "perceptual installations" from artists such as James Turrell, whose focus is often directly on natural light, with no distortions imposed by any capture or display apparatus. His work generally involves installations that use daylight and focus the viewer on its nuanced qualities, such as umbra, air-particle interactions, and effects of light adaptation. He is one of the last artists to actively discuss perception. Next we discuss minimal art and electronic art, with video artist Nam June Paik discussing the "intentionally boring" art of minimalism. Another artist using installations is Sandy Skoglund, who creates environments of constant spectral albedo, with the exception of her human occupants. Tom Shannon also uses installations as his medium to delve into 3D aspects of depth and perspective, but in an atomized fashion. Beginning with installation concepts, Calvin Collum then adds the restrictive viewpoint of photography to create initially confusing images where the pictorial content and depth features are independent (analogous to the work of Patrick Hughes). Andy Goldsworthy also combines photography with concepts of…
Stockburger, Jessica; Renner, Britta; Weike, Almut I; Hamm, Alfons O; Schupp, Harald T
Vegetarianism provides a model system to examine the impact of negative affect towards meat, based on ideational reasoning. It was hypothesized that meat stimuli are efficient attention catchers in vegetarians. Event-related brain potential recordings served to index selective attention processes at the level of initial stimulus perception. Consistent with the hypothesis, late positive potentials to meat pictures were enlarged in vegetarians compared to omnivores. This effect was specific for meat pictures and obtained during passive viewing and an explicit attention task condition. These findings demonstrate the attention capture of food stimuli, deriving affective salience from ideational reasoning and symbolic meaning.
Lindquist, Kristen A; Gendron, Maria; Barrett, Lisa Feldman; Dickerson, Bradford C
For decades, psychologists and neuroscientists have hypothesized that the ability to perceive emotions on others' faces is inborn, prelinguistic, and universal. Concept knowledge about emotion has been assumed to be epiphenomenal to emotion perception. In this article, we report findings from 3 patients with semantic dementia that cannot be explained by this "basic emotion" view. These patients, who have substantial deficits in semantic processing abilities, spontaneously perceived pleasant and unpleasant expressions on faces, but not discrete emotions such as anger, disgust, fear, or sadness, even in a task that did not require the use of emotion words. Our findings support the hypothesis that discrete emotion concept knowledge helps transform perceptions of affect (positively or negatively valenced facial expressions) into perceptions of discrete emotions such as anger, disgust, fear, and sadness. These findings have important consequences for understanding the processes supporting emotion perception.
Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador
Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech.
Wilson, Amanda H.; Alsius, Agnès; Parè, Martin; Munhall, Kevin G.
Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…
Novak, Lucas R; Gitelman, Darren R; Schuyler, Brianna; Li, Wen
A fast growing literature of multisensory emotion integration notwithstanding, the chemical senses, intimately associated with emotion, have been largely overlooked. Moreover, an ecologically highly relevant principle of "inverse effectiveness", rendering maximal integration efficacy with impoverished sensory input, remains to be assessed in emotion integration. Presenting minute, subthreshold negative (vs. neutral) cues in faces and odors, we demonstrated olfactory-visual emotion integration in improved emotion detection (especially among individuals with weaker perception of unimodal negative cues) and response enhancement in the amygdala. Moreover, while perceptual gain for visual negative emotion involved the posterior superior temporal sulcus/pSTS, perceptual gain for olfactory negative emotion engaged both the associative olfactory (orbitofrontal) cortex and amygdala. Dynamic causal modeling (DCM) analysis of fMRI timeseries further revealed connectivity strengthening among these areas during crossmodal emotion integration. That multisensory (but not low-level unisensory) areas exhibited both enhanced response and region-to-region coupling favors a top-down (vs. bottom-up) account for olfactory-visual emotion integration. Current findings thus confirm the involvement of multisensory convergence areas, while highlighting unique characteristics of olfaction-related integration. Furthermore, successful crossmodal binding of subthreshold aversive cues not only supports the principle of "inverse effectiveness" in emotion integration but also accentuates the automatic, unconscious quality of crossmodal emotion synthesis.
Zito, Giuseppe A.; Cazzoli, Dario; Müri, René M.; Mosimann, Urs P.; Nef, Tobias
Perceptual accuracy is known to be influenced by stimuli location within the visual field. In particular, it seems to be enhanced in the lower visual hemifield (VH) for motion and space processing, and in the upper VH for object and face processing. The origins of such asymmetries are attributed to attentional biases across the visual field, and in the functional organization of the visual system. In this article, we tested content-dependent perceptual asymmetries in different regions of the visual field. Twenty-five healthy volunteers participated in this study. They performed three visual tests involving perception of shapes, orientation and motion, in the four quadrants of the visual field. The results of the visual tests showed that perceptual accuracy was better in the lower than in the upper visual field for motion perception, and better in the upper than in the lower visual field for shape perception. Orientation perception did not show any vertical bias. No difference was found when comparing right and left VHs. The functional organization of the visual system seems to indicate that the dorsal and the ventral visual streams, responsible for motion and shape perception, respectively, show a bias for the lower and upper VHs, respectively. Such a bias depends on the content of the visual information. PMID:27378876
Semin, Gün R.; Oudejans, Raôul R. D.; Beek, Peter J.
In the New Look literature of the 1950s, it has been suggested that size judgments are dependent on the affective content of stimuli. This suggestion, however, has been ‘discredited’ due to contradictory findings and methodological problems. In the present study, we revisited this forgotten issue in two experiments. The first experiment investigated the influence of affective content on size perception by examining judgments of the size of target circles with and without affectively loaded (i.e., positive, neutral, and negative) pictures. Circles with a picture were estimated to be smaller than circles without a picture, and circles with a negative picture were estimated to be larger than circles with a positive or a neutral picture confirming the suggestion from the 1950s that size perception is influenced by affective content, an effect notably confined to negatively loaded stimuli. In a second experiment, we examined whether affective content influenced the Ebbinghaus illusion. Participants judged the size of a target circle whereby target and flanker circles differed in affective loading. The results replicated the first experiment. Additionally, the Ebbinghaus illusion was shown to be weakest for a negatively loaded target with positively loaded and blank flankers. A plausible explanation for both sets of experimental findings is that negatively loaded stimuli are more attention demanding than positively loaded or neutral stimuli. PMID:17410379
This article outlines an exploration into the development of visual perception through analysing the process of taking photographs of the mundane as small-scale research. A preoccupation with social construction of the visual lies at the heart of the investigation by correlating the perceptive process to Mitchell's (2002) counter thesis for visual…
Most, Tova; Aviner, Chen
This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust. The emotional content was placed upon the same neutral sentence. The stimuli were presented in auditory, visual, and combined auditory-visual modes. The results revealed better auditory identification by the participants with NH in comparison to all groups of participants with hearing loss (HL). No differences were found among the groups with HL in each of the 3 modes. Although auditory-visual perception was better than visual-only perception for the participants with NH, no such differentiation was found among the participants with HL. The results question the efficiency of some currently used CIs in providing the acoustic cues required to identify the speaker's emotional state.
Denison, Rachel N.; Driver, Jon; Ruff, Christian C.
Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067
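The stream-matching idea above can be illustrated with a toy computation (a hedged sketch, not the authors' method; the bin width, lag range, and all function names are illustrative assumptions): shared temporal structure between two binary event streams shows up as a peak in their lagged correlation, even when the streams are offset in time.

```python
import numpy as np

def stream_correlation(auditory, visual, max_lag=20):
    """Return (best_lag, best_r): the lag (in bins) at which two event
    streams correlate most strongly, searched within +/- max_lag bins
    (e.g. 10 ms bins and max_lag=20 would cover a 200 ms range)."""
    best_lag, best_r = 0, -np.inf
    n = len(visual)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, v = auditory[lag:], visual[:n - lag]
        else:
            a, v = auditory[:lag], visual[-lag:]
        r = np.corrcoef(a, v)[0, 1]   # Pearson correlation at this lag
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r

rng = np.random.default_rng(0)
visual = (rng.random(100) < 0.3).astype(float)  # stochastic, irregular stream
auditory = np.roll(visual, 5)                   # same structure, lagged 5 bins
lag, r = stream_correlation(auditory, visual)
```

A stochastic stream like this one has a sharp, unambiguous correlation peak at the true lag, which is one intuition for why the study found irregular streams easier to match than predictable rhythmic ones (where many lags look alike).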
The study of body image-related problems in non-Western countries is still very limited. This study therefore aims to identify the main influential sources and show how they affect the body image perceptions of Bangkok adolescents. The researcher recruited 400 Thai male and female adolescents in Bangkok, attending high school through freshman level and ranging in age from 16 to 19 years. Survey questionnaires were distributed to every student, and follow-up interviews were conducted with 40 students. The findings showed eight main influential sources, ranked from most to least influential: magazines, television, peer group, family, fashion trends, the opposite gender, self-realization, and health knowledge. As in studies conducted in Western countries, mass media and peer groups accounted for more than half of the total influence, and Bangkok adolescents internalized Western ideals of beauty through these mass media channels. The process by which these sources affect adolescent body image perception was also similar to that reported in the West, with the exception of the familial source. In conclusion, identifying the main influential sources and understanding how they affect adolescent body image perceptions can help prevent adolescents from holding unhealthy views of, and taking risky measures toward, their bodies. More studies in non-Western countries are needed to build culturally sensitive programs catering to the body image problems of adolescents within each particular society.
García-Pérez, Miguel A; Alcalá-Quintana, Rocío
Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8/16 observers, and was found to be nonidentifiable, which renders uninterpretable parameter estimates. The independent-channels model captured asymmetric data, was rejected for only 1/16 observers, and identified how sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal. PMID:27551361
Lopez, Christophe; Lacour, Michel; Léonard, Jacques; Magnan, Jacques; Borel, Liliane
Visual vertical perception, posture, and equilibrium are impaired in patients with unilateral vestibular loss. The present study investigated whether body position (standing upright, sitting on a chair, and lying supine) influences visual vertical perception in Menière's patients tested before and after unilateral vestibular neurotomy. Data were compared with those of sex- and age-matched healthy participants. During the first postoperative month, body position strongly influenced visual vertical perception: the ipsilesional deviation of the visual vertical judgment gradually increased from standing upright to sitting and to lying supine. The present data indicate that visual vertical perception improves when postural control is more demanding, suggesting that postural balance is a key reference for vertical perception, at least up to one month after vestibular loss.
Teoli, Dac A; Smith, Merideth D; Leys, Monique J; Jain, Priyanka; Odom, J Vernon
Eye-related pathological conditions such as glaucoma, diabetic retinopathy, and age-related macular degeneration commonly lead to decreased peripheral/central field, decreased visual acuity, and increased functional disability. We sought to determine whether measures of visual function are related to reported prosocial behaviors in an older adult population with eye-related diagnoses. The sample consisted of adults aged ≥ 60 years at an academic hospital's eye institute, with vision ranging from normal to severe impairment. Visual acuities, ocular disease, duration of disease (DD), and visual fields (VF) were determined from medical charts. Giving help was measured with validated questionnaires on giving formal support (GFS) and giving informal support; help received was measured as perceived support (PS) and informal support received (ISR). ISR had subscales: tangible support (ISR-T), emotional support (ISR-E), and composite (ISR-C). Visual acuities of the better and worse seeing eyes were converted to LogMAR values, VF information was converted to a 4-point rating scale of binocular field loss severity, and DD was recorded in years. Among 96 participants (mean age 73.28; range 60-94), stepwise regression indicated a relationship of visual variables to GFS (p < 0.05; multiple R² = 0.1679 with acuity-better eye, VF rating, and DD), PS (p < 0.05; multiple R² = 0.2254 with acuity-better eye), ISR-C (p < 0.05; multiple R² = 0.041 with acuity-better eye), and ISR-T (p < 0.05; multiple R² = 0.1421 with acuity-better eye). The findings suggest eye-related conditions can affect levels and perceptions of support exchanges. Our data reinforce the importance of visual function as an influence on prosocial behavior in older adults.
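As a side note on the acuity measure used above: Snellen fractions are conventionally converted to LogMAR as the base-10 logarithm of the minimum angle of resolution. A minimal sketch of that standard conversion (the function name is an illustrative assumption, not from the study):

```python
import math

def snellen_to_logmar(numerator, denominator):
    """Convert a Snellen fraction (e.g. 20/40) to LogMAR.

    LogMAR = log10(MAR), where the minimum angle of resolution
    MAR = denominator / numerator; 20/20 -> 0.0 and 20/200 -> 1.0,
    with each 0.1 step corresponding to one line on a LogMAR chart.
    """
    return math.log10(denominator / numerator)

better_eye = snellen_to_logmar(20, 40)  # log10(2), about 0.30
```

On this scale, larger LogMAR values mean worse acuity, which is why regression coefficients on "acuity-better eye" are interpretable as effects of increasing impairment.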
Schmitz, Taylor W; De Rosa, Eve; Anderson, Adam K
Positive and negative emotional states are thought to have originated from fundamentally opposing approach and avoidance behaviors. Furthermore, affective valence has been hypothesized to exert opposing biases in cognitive control. Here we examined with functional magnetic resonance imaging whether the opposing influences of positive and negative states extend to perceptual encoding in the visual cortices. Based on prior behavioral research, we hypothesized that positive states would broaden and negative states would narrow visual field of view (FOV). Positive, neutral, and negative states were induced on alternating blocks. To index FOV, observers then viewed brief presentations (300 ms) of face/place concentric center/surround stimuli on interleaved blocks. Central faces were attended, rendering the place surrounds unattended. As face and place information was presented at different visual eccentricities, our physiological metric of FOV was a valence-dependent modulation of place processing in the parahippocampal place area (PPA). Consistent with our hypotheses, positive affective states increased and negative states decreased PPA response to novel places as well as adaptation to repeated places. Individual differences in self-reported positive and negative affect correlated inversely with PPA encoding of peripheral places, as well as with activation in the mesocortical prefrontal cortex and amygdala. Psychophysiological interaction analyses further demonstrated that valence-dependent responses in the PPA arose from opponent coupling with extrafoveal regions of the primary visual cortex during positive and negative states. These findings collectively suggest that affective valence differentially biases gating of early visual inputs, fundamentally altering the scope of perceptual encoding.
Zhytaryuk, V. G.
This paper investigates the physical principles underlying the visual perception of flickering images of the spokes of a wheel rotating in alternating and direct reflected light fields. The results permit a clear interpretation of observations of the stroboscopic effect in the rotating spoked wheels of cars, aircraft propellers, and domestic fans. It was established that these effects can be observed only under artificial illumination from fluorescent, discharge, or pulsed light sources. A "capture" effect was discovered, i.e., the observation of individual spokes as apparently stationary at frequencies far exceeding the published figure of 0.1 s (10 Hz); capture was established at frequencies up to and including 50 Hz. This result has not been described in the scientific literature and has no theoretical explanation to date.
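The stroboscopic "wagon-wheel" observations described above follow from simple aliasing arithmetic: between light flashes the spoke pattern advances by some fraction of a spoke spacing, and only that fractional advance is perceived. A hedged sketch (not the paper's model; the function name and the 0.5-spacing reversal rule are standard textbook assumptions about sampled motion):

```python
def apparent_spoke_rate(n_spokes, rotation_hz, flash_hz):
    """Apparent motion (in spoke-spacings per second) of an n-spoke
    wheel illuminated by a light flashing at flash_hz.

    Between consecutive flashes the pattern advances by
    n_spokes * rotation_hz / flash_hz spoke spacings; only the
    fractional part is visible, and fractions above 0.5 are seen
    as backward motion (the classic wagon-wheel effect).
    """
    step = (n_spokes * rotation_hz / flash_hz) % 1.0  # spacings per flash
    if step > 0.5:              # nearer to the previous spoke's position
        step -= 1.0             # -> perceived as reverse rotation
    return step * flash_hz      # spacings per second

# A 10-spoke wheel at exactly 5 rev/s under 50 Hz flicker looks frozen;
# slightly slower rotation looks like slow backward drift.
frozen = apparent_spoke_rate(10, 5.0, 50.0)
drift = apparent_spoke_rate(10, 4.9, 50.0)
```

This also makes clear why the effect requires fluorescent, discharge, or pulsed sources, as the abstract reports: steady (incandescent or daylight) illumination provides no flash rate to alias against.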
Sleigh, Merry J; Smith, Aimee W; Laboe, Jason
Facebook users must make choices about level of self-disclosure, and this self-disclosure can influence perceptions of the profile's author. We examined whether the specific type of self-disclosure on a professor's profile would affect students' perceptions of the professor and expectations of his classroom. We created six Facebook profiles for a fictitious male professor, each with a specific emphasis: politically conservative, politically liberal, religious, family oriented, socially oriented, or professional. Undergraduate students randomly viewed one profile and responded to questions that assessed their perceptions and expectations. The social professor was perceived as less skilled but more popular, while his profile was perceived as inappropriate and entertaining. Students reacted more strongly and negatively to the politically focused profiles in comparison to the religious, family, and professional profiles. Students reported being most interested in professional information on a professor's Facebook profile, yet they reported being least influenced by the professional profile. In general, students expressed neutrality about their interest in finding and friending professors on Facebook. These findings suggest that students have the potential to form perceptions about the classroom environment and about their professors based on the specific details disclosed in professors' Facebook profiles.
Kitadono, Keiko; Humphreys, Glyn W
We report a series of 7 experiments examining the interaction between visual perception and action programming, contrasting 2 neuropsychological cases: a case of visual extinction and a case with extinction and optic ataxia. The patients had to make pointing responses to left and right locations, whilst identifying briefly presented shapes. Different patterns of performance emerged with the two cases. The patient with "pure" extinction (i.e., extinction without optic ataxia) showed dramatic effects of action programming on perceptual report. Programming an action to the ipsilesional side increased extinction (on 2-item trials) and tended to induce neglect (on 1-item trials); this was ameliorated when the action was programmed to the contralesional side. Separable effects of using the contralesional hand and pointing to the contralesional side were apparent. In contrast, the optic ataxic patient showed few effects of congruency between the visual stimulus and the action, but extinction when an action was programmed. This effect was particularly marked when actions had to be made to peripheral locations, suggesting that it reflected reduced resources to stimuli. These effects all occurred using stimulus exposures that were completed well before actions were effected. The data demonstrate interactions between action programming and visual perception. Programming an action to the affected side with the contralesional limb reduces "pure" extinction because attention is coupled to the end point of the action. However, in a patient with deficient visuo-motor coupling (optic ataxia), programming an action can increase a spatial deficit by recruiting resources away from perceptual processing. The implications for models of perception and action are discussed.
Fengler, Ineke; Nava, Elena; Röder, Brigitte
Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4 week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination may emerge as a consequence of the exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks unrelated to the later-tested abilities (i.e., tactile tasks). The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination and in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which may persist over longer durations. PMID:25954166
Orgs, Guido; Dovern, Anna; Hagura, Nobuhiro; Haggard, Patrick; Fink, Gereon R.; Weiss, Peter H.
The human brain readily perceives fluent movement from static input. Using functional magnetic resonance imaging, we investigated brain mechanisms that mediate fluent apparent biological motion (ABM) perception from sequences of body postures. We presented body and nonbody stimuli varying in objective sequence duration and fluency of apparent movement. Three body postures were ordered to produce a fluent (ABC) or a nonfluent (ACB) apparent movement. This enabled us to identify brain areas involved in the perceptual reconstruction of body movement from identical lower-level static input. Participants judged the duration of a rectangle containing body/nonbody sequences, as an implicit measure of movement fluency. For body stimuli, fluent apparent motion sequences produced subjectively longer durations than nonfluent sequences of the same objective duration. This difference was reduced for nonbody stimuli. This body-specific bias in duration perception was associated with increased blood oxygen level-dependent responses in the primary (M1) and supplementary motor areas. Moreover, fluent ABM was associated with increased functional connectivity between M1/SMA and right fusiform body area. We show that perceptual reconstruction of fluent movement from static body postures does not merely enlist areas traditionally associated with visual body processing, but involves cooperative recruitment of motor areas, consistent with a “motor way of seeing”. PMID:26534907
Perception operates on an immense amount of incoming information that greatly exceeds the brain's processing capacity. Because of this fundamental limitation, the ability to suppress irrelevant information is a key determinant of perceptual efficiency. Here, I will review a series of studies investigating suppressive mechanisms in visual motion processing, namely perceptual suppression of large, background-like motions. These spatial suppression mechanisms are adaptive, operating only when sensory inputs are sufficiently robust to guarantee visibility. Converging correlational and causal evidence links these behavioral results with inhibitory center-surround mechanisms, namely those in cortical area MT. Spatial suppression is abnormally weak in several special populations, including the elderly and those with schizophrenia—a deficit that is evidenced by better-than-normal direction discriminations of large moving stimuli. Theoretical work shows that this abnormal weakening of spatial suppression should result in motion segregation deficits, but direct behavioral support of this hypothesis is lacking. Finally, I will argue that the ability to suppress information is a fundamental neural process that applies not only to perception but also to cognition in general. Supporting this argument, I will discuss recent research that shows individual differences in spatial suppression of motion signals strongly predict individual variations in IQ scores. PMID:26299386
Dennis, Maureen; Fletcher, Jack M; Rogers, Tracey; Hetherington, Ross; Francis, David J
Children with spina bifida and hydrocephalus (SBH) have long been known to have difficulties with visual perception. We studied how children with SBH perform 12 visual perception tasks requiring object identification, multistable representations of visual space, or visually guided overt actions. Four tasks required object-based processing (visual constancy illusions, face recognition, recognition of fragmented objects, line orientation). Four tasks required the representation of visual space in egocentric coordinates (stereopsis, visual figure-ground identification, perception of multistable figures, egocentric mental rotation). Four tasks required the coupling of visual space to overt movement (visual pursuit, figure drawing, visually guided route finding, visually guided route planning). Effect sizes, measuring the magnitude of the difference between SBH children and controls, were consistently larger for action-based than object-based visual perception tasks. Within action-based tasks, effect sizes were large and roughly comparable for tasks requiring the representation of visual space and for tasks requiring visually guided action. The results are discussed in terms of the physical and brain problems of children with SBH that limit their ability to build effective situation models of space.
Rojas, David; Kapralos, Bill; Hogue, Andrew; Collins, Karen; Nacke, Lennart; Cristancho, Sayra; Conati, Cristina; Dubrowski, Adam
Visual and auditory cues are important facilitators of user engagement in virtual environments and video games. Prior research supports the notion that our perception of visual fidelity (quality) is influenced by auditory stimuli. Understanding exactly how our perception of visual fidelity changes in the presence of multimodal stimuli can potentially impact the design of virtual environments, thus creating more engaging virtual worlds and scenarios. Stereoscopic 3-D display technology provides the users with additional visual information (depth into and out of the screen plane). There have been relatively few studies that have investigated the impact that auditory stimuli have on our perception of visual fidelity in the presence of stereoscopic 3-D. Building on previous work, we examine the effect of auditory stimuli on our perception of visual fidelity within a stereoscopic 3-D environment.
Richard, Alby; Churan, Jan; Whitford, Veronica; O'Driscoll, Gillian A; Titone, Debra; Pack, Christopher C
Corollary discharge signals are found in the nervous systems of many animals, where they serve a large variety of functions related to the integration of sensory and motor signals. In humans, an important corollary discharge signal is generated by oculomotor structures and communicated to sensory systems in concert with the execution of each saccade. This signal is thought to serve a number of purposes related to the maintenance of accurate visual perception. The properties of the oculomotor corollary discharge can be probed by asking subjects to localize stimuli that are flashed briefly around the time of a saccade. The results of such experiments typically reveal large errors in localization. Here, we have exploited these well-known psychophysical effects to assess the potential dysfunction of corollary discharge signals in people with schizophrenia. In a standard perisaccadic localization task, we found that, compared with controls, patients with schizophrenia exhibited larger errors in localizing visual stimuli. The pattern of errors could be modeled as an overdamped corollary discharge signal that encodes instantaneous eye position. The dynamics of this signal predicted symptom severity among patients, suggesting a possible mechanistic basis for widely observed behavioral manifestations of schizophrenia.
Kheradmand, Amir; Gonzalez, Grisel; Otero-Millan, Jorge; Lasker, Adrian
BACKGROUND Perception of upright is often assessed by aligning a luminous line to the subjective visual vertical (SVV). OBJECTIVE Here we investigated the effects of visual line rotation and viewing eye on SVV responses and whether there was any change with head tilt. METHODS SVV was measured using a forced-choice paradigm and by combining the following conditions in 22 healthy subjects: head position (20° left tilt, upright and 20° right tilt), viewing eye (left eye, both eyes and right eye) and direction of visual line rotation (clockwise [CW] and counter clockwise [CCW]). RESULTS The accuracy and precision of SVV responses were not different between the viewing eye conditions in all head positions (P > 0.05, Kruskal-Wallis test). The accuracy of SVV responses was different between the CW and CCW line rotations (p ≈ 0.0001; Kruskal-Wallis test) and SVV was tilted in the same direction as the line rotation. This effect of line rotation was, however, not consistent across head tilts and was present only in the upright and right tilt head positions. The accuracy of SVV responses showed a higher variability among subjects in the left head tilt position, with no significant difference between the CW and CCW line rotations (P > 0.05; post-hoc Dunn's test). CONCLUSIONS In spite of the challenges to the estimate of upright with head tilt, normal subjects did remarkably well irrespective of the viewing eye. The physiological significance of the asymmetry in the effect of line rotation between the head tilt positions is unclear, but it suggests a lateralizing effect of head tilt on the visual perception of upright. PMID:26890421
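For readers unfamiliar with the forced-choice SVV paradigm mentioned above: accuracy and precision are typically read off a cumulative-Gaussian psychometric function fitted to the proportion of "clockwise" responses at each line angle. A minimal illustrative sketch (coarse grid-search fit; the data, tolerances, and names are invented for illustration and are not taken from the study):

```python
import numpy as np
from math import erf, sqrt

def fit_svv(angles, p_cw):
    """Fit a cumulative Gaussian to forced-choice SVV data by coarse
    grid search. Returns (mu, sigma): mu is the angle judged vertical
    (accuracy of the SVV) and sigma indexes response precision."""
    def cdf(x, mu, sigma):  # normal CDF via the error function
        return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

    best_mu, best_sigma, best_err = 0.0, 1.0, float("inf")
    for mu in np.arange(-10.0, 10.05, 0.1):
        for sigma in np.arange(0.5, 10.05, 0.1):
            err = sum((cdf(a, mu, sigma) - p) ** 2
                      for a, p in zip(angles, p_cw))
            if err < best_err:
                best_mu, best_sigma, best_err = mu, sigma, err
    return best_mu, best_sigma

# Line angles (deg; positive = clockwise) and invented proportions of
# "clockwise" responses for an observer whose SVV is tilted ~2 deg CW:
angles = [-8, -6, -4, -2, 0, 2, 4, 6, 8]
p_cw = [0.000, 0.004, 0.023, 0.091, 0.252, 0.500, 0.748, 0.909, 0.977]
mu, sigma = fit_svv(angles, p_cw)
```

Under this convention, the "deviation of SVV in the direction of line rotation" reported above would appear as a shift in mu between the CW and CCW conditions, with sigma capturing the between-subject variability in precision.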
Etemad, S Ali; Arya, Ali; Parush, Avi
In this study, the notion of additivity in perception of affect from limb motion is investigated. Specifically, we examine whether the impact of multiple limbs in perception of affect is equal to the sum of the impacts of each individual limb. Several neutral, happy, and sad walking sequences are first aligned and averaged. Four distinct body regions or limbs are defined for this study: arms and hands, legs and feet, head and neck, and torso. The three average walks are used to create the stimuli. The motion of each limb and combination of limbs from the neutral sequence are replaced with those of the happy and sad sequences. Through collecting perceptual ratings for when individual limbs contain affective features, and comparing the sums of these ratings to instances where multiple limbs of the body simultaneously contain affective features, additivity is investigated. We find that while the results are highly correlated, additivity does not hold in the classical sense. Based on the results, a mathematical model is proposed for describing the observed relationship.
Třebický, Vít; Fialová, Jitka; Kleisner, Karel; Havlíček, Jan
Static photographs are currently the most often employed stimuli in research on social perception. The method of photograph acquisition might affect the depicted subject's facial appearance and thus also the impression of such stimuli. An important factor influencing the resulting photograph is focal length, as different focal lengths produce various levels of image distortion. Here we tested whether different focal lengths (50, 85, 105 mm) affect depicted shape and perception of female and male faces. We collected three portrait photographs of 45 (22 females, 23 males) participants under standardized conditions and camera settings, varying only in focal length. Subsequently, the three photographs from each individual were shown on screen in a randomized order using a 3-alternative forced-choice paradigm. The images were judged for attractiveness, dominance, and femininity/masculinity by 369 raters (193 females, 176 males). Facial width-to-height ratio (fWHR) was measured from each photograph and overall facial shape was analysed employing geometric morphometric methods (GMM). Our results showed that photographs taken with 50 mm focal length were rated as significantly less feminine/masculine, attractive, and dominant compared to the images taken with longer focal lengths. Further, shorter focal lengths produced faces with smaller fWHR. Subsequent GMM revealed focal length significantly affected overall facial shape of the photographed subjects. Thus the methodology of photograph acquisition, focal length in this case, can significantly affect the results of studies using photographic stimuli, perhaps due to different levels of perspective distortion that influence the shapes and proportions of morphological traits. PMID:26894832
Zhou, Li; Smith, Derrick W.; Parker, Amy T.; Griffin-Shirley, Nora
This study surveyed teachers of students with visual impairments in Texas on their perceptions of a set of assistive technology competencies developed for teachers of students with visual impairments by Smith and colleagues (2009). Differences in opinion between practicing teachers of students with visual impairments and Smith's group of…
Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko
The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.
Shi, Jiaxin; Huang, Xiting
Previous studies have found that psychological and behavioural functions of the colour red vary according to context. In this research, we used the verbal estimation paradigm to determine if the colour red affects individuals' perception of interval duration. In our results, perceived duration was shorter in a red condition than in a blue one; additionally, only in the red condition, perceived duration was shorter in an online dating context than in an online interviewing context. The contribution and limitations of this study and future research directions are discussed.
Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao
The visual quality assessment of images/videos is an ongoing hot research topic, which has become increasingly important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and an NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, which are two main features of the HVS. A typical HVS captures scenes by sparsity coding, and uses experienced knowledge to apperceive objects. In this paper, we propose a novel IQA approach based on visual perception. Firstly, a standard model of the HVS is studied and analyzed, and the sparse representation of an image is accomplished with the model; then, the mapping correlation between sparse codes and subjective quality scores is trained with the regression technique of least squares support vector machine (LS-SVM), yielding a regressor that can predict image quality; finally, the visual quality of an image is predicted with the trained regressor. We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database; the specific contents of the types of distortions present in the database are: 227 images of JPEG2000, 233
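The regression step described above (mapping sparse-code features onto subjective quality scores) can be sketched minimally. LS-SVM regression with an RBF kernel reduces to solving a regularized linear system in the kernel matrix, which is what this toy version does; the feature matrix and scores below are synthetic stand-ins, not sparse codes or LIVE data:

```python
import numpy as np

# Minimal LS-SVM-style kernel regression sketch (bias term omitted).
# Hypothetical stand-in for regressing image quality scores on features.

def rbf_kernel(A, B, gamma=0.1):
    # Pairwise RBF kernel between row vectors of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, gamma=0.1, C=10.0):
    # Solve (K + I/C) alpha = y, the LS-SVM dual system without bias
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, y)

def lssvm_predict(X_train, alpha, X_new, gamma=0.1):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))                          # stand-in "sparse code" features
y = 2.0 * X[:, 0] + rng.normal(scale=0.05, size=40)   # stand-in quality scores
alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, alpha, X)
print(round(float(np.corrcoef(pred, y)[0, 1]), 3))    # training-set correlation
```

A production version would add the bias term and tune gamma and C by cross-validation; the point here is only the shape of the pipeline.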
Lolli, Sydney L.; Lewenstein, Ari D.; Basurto, Julian; Winnik, Sean; Loui, Psyche
Congenital amusics, or “tone-deaf” individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech. PMID:26441718
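The low-pass manipulation described above can be sketched as an FFT-domain filter. The 500 Hz cutoff and the synthetic two-component "speech" signal are illustrative assumptions, not the authors' exact stimuli or cutoff:

```python
import numpy as np

# Hypothetical sketch: low-pass filtering keeps pitch/F0 cues while removing
# higher-frequency spectral detail, as in the Experiment 1 manipulation.

def lowpass_fft(signal, sr, cutoff_hz):
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    spec[freqs > cutoff_hz] = 0.0          # zero all components above the cutoff
    return np.fft.irfft(spec, n=len(signal))

sr = 8000
t = np.arange(sr) / sr
# synthetic "speech": 200 Hz fundamental plus a 2 kHz component
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
y = lowpass_fft(x, sr, cutoff_hz=500)
# only the 200 Hz component survives the filter
print(np.allclose(y, np.sin(2 * np.pi * 200 * t), atol=1e-6))  # True
```

The complementary high-pass condition of Experiment 2 would simply invert the mask (`spec[freqs < cutoff_hz] = 0.0`).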
Brown, Ted; Murdolo, Yuki
The "Developmental Test of Visual Perception-Third Edition" (DTVP-3) is a recent revision of the "Developmental Test of Visual Perception-Second Edition" (DTVP-2). The DTVP-3 is designed to assess the visual perceptual and/or visual-motor integration skills of children from 4 to 12 years of age. The test is standardized using…
Palmisano, Stephen; Kim, Juno
Adding simulated viewpoint jitter or oscillation to displays enhances visually induced illusions of self-motion (vection). The cause of this enhancement is yet to be fully understood. Here, we conducted psychophysical experiments to investigate the effects of different types of simulated oscillation on vertical vection. Observers viewed horizontally oscillating and nonoscillating optic flow fields simulating downward self-motion through an aperture. The aperture was visually simulated to be nearer to the observer and was stationary or oscillating in-phase or counter-phase to the direction of background horizontal oscillations of optic flow. Results showed that vection strength was modulated by the oscillation of the aperture relative to the background optic flow. Vertical vection strength increased as the relative oscillatory horizontal motion between the flow and the aperture increased. However, such increases in vection were only generated when the added oscillations were orthogonal to the principal direction of the optic flow pattern, and not when they occurred in the same direction. The oscillation effects observed in this investigation could not be explained by motion adaptation or different (motion parallax based) effects on depth perception. Instead, these results suggest that the oscillation advantage for vection depends on relative visual motion. PMID:27698982
Mammarella, Irene C; Pazzaglia, Francesca
Visuospatial working memory (VSWM) and visual perception were examined in two groups aged 11-13, one with children displaying symptoms of nonverbal learning disability (NLD) (n = 18) and the other a control group without learning disabilities (n = 18). The two groups were matched for general verbal abilities, age, gender, and socioeconomic level. The children were presented with VSWM tests involving visual and spatial-simultaneous processes, and also with a classical visual illusion, a classical ambiguous figure, and visual perception tests specifically devised for the present study. Results revealed that the performance of children at risk of NLD was worse than that of controls on some VSWM tests and on visual perception tests without memory involvement; the latter required comparisons of visual stimuli and locations in space with distractors. Moreover, the two groups differed in perceiving the classical ambiguous figure. Findings are discussed in the light of both theoretical and clinical implications.
Chebat, J C; Gelinas-Chebat, C; Filiatrault, P
This study explores the interactive effects of musical and visual cues on time perception in a specific situation, that of waiting in a bank. Videotapes are employed to simulate the situation; a 2 x 3 factorial design (N = 427) is used: 2 (high vs low) amounts of visual information and 2 (fast vs slow) levels of musical tempo in addition to a no-music condition. Two mediating variables, mood and attention, are tested in the relation between the independent variables (musical and visual ones) and the dependent variable (perceived waiting time). Results of multivariate analysis of variance and a system of simultaneous equations show that musical cues and visual cues do not have symmetrical effects: the musical tempo has a global (moderating) effect on the whole structure of the relations between dependent, independent, and mediating variables but has no direct influence on time perception. The visual cues affect time perception, the significance of which depends on musical tempo. Also, the "Resource Allocation Model of Time Estimation" predicts the attention-time relation better than Ornstein's "storage-size theory." Mood state serves as a substitute for time information with slow music, but its effects are cancelled with fast music.
Classen, Claudia; Kibele, Armin
Visually perceived motion can affect observers' motor control in such a way that an intended action can be activated automatically when it contains similar spatial features. So far, effects have been mostly demonstrated with simple displays where objects were moving in a two-dimensional plane. However, almost all actions we perform and visually perceive in everyday life are much more complex and take place in three-dimensional space. The purpose of this study was to examine action inductions due to visual perception of motion in depth. Therefore, we conducted two Simon experiments where subjects were presented with video displays of a sphere (simple displays, experiment 1) and a real person (complex displays, experiment 2) moving in depth. In both experiments, motion direction towards and away from the observer served as task irrelevant information whereas a color change in the video served as relevant information to choose the correct response (close or far positioned response key). The results show that subjects reacted faster when motion direction of the dynamic stimulus was corresponding to the spatial position of the demanded response. In conclusion, this direction-based Simon effect is modulated by spatial position information, higher sensitivity of our visual system for looming objects, and a high salience of objects being on a collision course.
Spröte, Patrick; Schmidt, Filipp; Fleming, Roland W.
One of the main functions of vision is to represent object shape. Most theories of shape perception focus exclusively on geometrical computations (e.g., curvatures, symmetries, axis structure). Here, however, we find that shape representations are also profoundly influenced by an object’s causal origins: the processes in its past that formed it. Observers placed dots on objects to report their perceived symmetry axes. When objects appeared ‘complete’—created entirely by a single generative process—responses closely approximated the object’s geometrical axes. However, when objects appeared ‘bitten’—as if parts had been removed by a distinct causal process—the responses deviated significantly from the geometrical axes, as if the bitten regions were suppressed from the computation of symmetry. This suppression of bitten regions was also found when observers were not asked about symmetry axes but about the perceived front and back of objects. The findings suggest that visual shape representations are more sophisticated than previously appreciated. Objects are not only parsed according to what features they have, but also to how or why they have those features. PMID:27824094
Šikl, Radovan; Šimeček, Michal
The main purpose of this study was to determine the effect of the depth description levels required in experimental tasks on visual space perception. Six observers assessed the locations of 11 posts by determining a distance ranking order, comparing the distances between posts with a reference unit, and estimating the absolute distances between the posts. The experiments were performed in an open outdoor field under normal daylight conditions with posts at distances ranging from 2 to 12 m. To directly assess and compare the observers' perceptual performance in all three phases of the experiment, the raw data were transformed to common measurement levels. A pairwise comparison analysis provided nonmetric information regarding the observers' relative distance judgments, and a multidimensional-scaling procedure provided metric information regarding the relationship between a perceived spatial layout and the layout of the actual scene. The common finding in all of the analyses was that the precision and consistency of the observers' ordinal distance judgments were greater than those of their ratio distance judgments, which were, in turn, greater than the precision and consistency of their absolute-magnitude distance judgments. Our findings raise questions regarding the ecological validity of standard experimental tasks.
Forder, Lewis; Taylor, Olivia; Mankin, Helen; Scott, Ryan B; Franklin, Anna
The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this paradigm and further investigated whether colour terms likewise influence the detection of colours or colour-associated object images suppressed from visual awareness by continuous flash suppression. This method presents rapidly changing visual noise to one eye while the target stimulus is presented to the other. It has been shown to delay conscious perception of a target for up to several minutes. In Experiment 1 we presented greyscale photos of objects. They were either preceded by a congruent object label, an incongruent label, or white noise. Detection sensitivity (d') and hit rates were significantly poorer for suppressed objects preceded by an incongruent label compared to a congruent label or noise. In Experiment 2, targets were coloured discs preceded by a colour term. Detection sensitivity was significantly worse for suppressed colour patches preceded by an incongruent colour term as compared to a congruent term or white noise. In Experiment 3 targets were suppressed greyscale object images preceded by an auditory presentation of a colour term. On congruent trials the colour term matched the object's stereotypical colour and on incongruent trials the colour term mismatched. Detection sensitivity was significantly poorer on incongruent trials than congruent trials. Overall, these findings suggest that colour terms affect awareness of coloured stimuli and colour-associated objects, and provide new evidence for language-perception interaction in the brain. PMID:27023274
Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa
The color fusion images can be obtained through the fusion of infrared and low-light-level images, and will contain the information of both. The fusion images can help observers to understand the multichannel images comprehensively. However, simple fusion may lose target information due to inconspicuous targets in long-distance infrared and low-light-level images; and if target extraction is applied blindly, the perception of the scene information will be affected seriously. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. The infrared and low-light-level color fusion images are achieved based on efficient typical-target learning. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm can not only improve the detection rate of targets, but also retain rich natural information of the scenes.
Murphy, Aidan P.; Leopold, David A.; Humphreys, Glyn W.; Welchman, Andrew E.
The posterior parietal cortex (PPC) is understood to be active when observers perceive three-dimensional (3D) structure. However, it is not clear how central this activity is in the construction of 3D spatial representations. Here, we examine whether PPC is essential for two aspects of visual depth perception by testing patients with lesions affecting this region. First, we measured subjects' ability to discriminate depth structure in various 3D surfaces and objects using binocular disparity. Patients with lesions to right PPC (N = 3) exhibited marked perceptual deficits on these tasks, whereas those with left hemisphere lesions (N = 2) were able to reliably discriminate depth as accurately as control subjects. Second, we presented an ambiguous 3D stimulus defined by structure from motion to determine whether PPC lesions influence the rate of bistable perceptual alternations. Patients' percept durations for the 3D stimulus were generally within a normal range, although the two patients with bilateral PPC lesions showed the fastest perceptual alternation rates in our sample. Intermittent stimulus presentation reduced the reversal rate similarly across subjects. Together, the results suggest that PPC plays a causal role in both inferring and maintaining the perception of 3D structure with stereopsis supported primarily by the right hemisphere, but do not lend support to the view that PPC is a critical contributor to bistable perceptual alternations. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269606
This chapter describes three examples of using illusions to teach visual perception. The illusions present ways for students to change their perspective regarding how their eyes work and also offer opportunities to question assumptions regarding their approach to knowledge.
Aruga, Reiko; Saito, Hideo; Ando, Hideyuki; Watanabe, Junji
There is considerable evidence that, when visual stimuli are presented around the time of a saccade, spatial and temporal perceptions of them are distorted. However, only a small number of previous studies have addressed the perception of a visual image induced by a saccade eye movement (visual image that is dynamically drawn on the retina during a saccade at the speed of the eye movement). Here we investigated three-dimensional and temporal perceptions of the saccade-induced images and found that perceptual grouping of objects has a significant effect on the perceived depth and timing of the images.
Gal, Hagar; Linchevski, Liora
In this paper, we consider theories about processes of visual perception and perception-based knowledge representation (VPR) in order to explain difficulties encountered in figural processing in junior high school geometry tasks. In order to analyze such difficulties, we take advantage of the following perspectives of VPR: (1) Perceptual…
Lee, Kyung Myun; Barrett, Karen Chan; Kim, Yeonhwa; Lim, Yeoeun; Lee, Kyogu
Dance and music often co-occur, as evidenced when viewing choreographed dances or singers moving while performing. This study investigated how the viewing of dance motions shapes sound perception. Previous research has shown that dance reflects the temporal structure of its accompanying music, communicating musical meter (i.e., a hierarchical organization of beats) via coordinated movement patterns that indicate where strong and weak beats occur. Experiments here investigated the effects of dance cues on meter perception, hypothesizing that dance could embody the musical meter, thereby shaping participant reaction times (RTs) to sound targets occurring at different metrical positions. In experiment 1, participants viewed a video with dance choreography indicating 4/4 meter (dance condition) or a series of color changes repeated in sequences of four to indicate 4/4 meter (picture condition). A sound track accompanied these videos and participants reacted to timbre targets at different metrical positions. Participants had the slowest RTs at the strongest beats in the dance condition only. In experiment 2, participants viewed the choreography of the horse-riding dance from Psy's "Gangnam Style" in order to examine how a familiar dance might affect meter perception. Moreover, participants in this experiment were divided into a group with experience dancing this choreography and a group without experience. Results again showed slower RTs to stronger metrical positions, and the group with experience demonstrated a more refined perception of metrical hierarchy. Results likely stem from the temporally selective division of attention between auditory and visual domains. This study has implications for understanding: 1) the impact of splitting attention among different sensory modalities, and 2) the impact of embodiment, on perception of musical meter. Viewing dance may interfere with sound processing, particularly at critical metrical positions, but embodied familiarity with
Zeng, Zhihong; Pantic, Maja; Roisman, Glenn I; Huang, Thomas S
Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. However, the existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions despite the fact that deliberate behaviour differs in visual appearance, audio profile, and timing from spontaneously occurring behaviour. To address this problem, efforts to develop algorithms that can process naturally occurring human affective behaviour have recently emerged. Moreover, an increasing number of efforts are reported toward multimodal fusion for human affect analysis including audiovisual fusion, linguistic and paralinguistic fusion, and multi-cue visual fusion based on facial expressions, head movements, and body gestures. This paper introduces and surveys these recent advances. We first discuss human emotion perception from a psychological perspective. Next we examine available approaches to solving the problem of machine understanding of human affective behavior, and discuss important issues like the collection and availability of training and test data. We finally outline some of the scientific and engineering challenges to advancing human affect sensing technology.
Segawa, Kaori; Ujike, Hiroyasu; Okajima, Katsunori; Saida, Shinya
We investigated the effects that the visual field has on the perception of heading speed. The stimulus was a radial flow pattern simulating a translational motion through a cylindrical tunnel. Observers evaluated the perception of heading speed by using a temporal two-alternative forced choice (2AFC) staircase method. In the first experiment, we manipulated the stimulus area by cutting the visual field along the longitudinal direction. The results showed that the perceived heading speed increases with the stimulus area. In the second experiment, we manipulated both the stimulus area and the eccentricity by cutting the visual field along the longitudinal direction. The results showed that the perception of heading speed increases when the stimulus occupies a large portion of the peripheral visual field. These findings suggest that the effect of eccentricity is a consequence of an incorrect translation of two-dimensional visual information into three-dimensional scaling.
Keywords: ecological optics; self-motion perception; visual simulation; egomotion; visual proprioception; optical flow; visual psychophysics. …stairs was perceived relative to the observer's leg length, while Hallford (1984) found that the perceived graspability of tiles was a direct… Warren, W. H. (1984). Perceiving affordances: Visual guidance of stair climbing. Journal of Experimental
Gillebert, Céline R.; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T.; Orban, Guy A.
Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD exhibited severe 3D shape-processing deficits and AD to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. SIGNIFICANCE STATEMENT Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction and most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial
Hazan, Valerie; Sennema, Anke; Faulkner, Andrew; Ortega-Llebaria, Marta; Iba, Midori; Chunge, Hyunsong
This study assessed the extent to which second-language learners are sensitive to phonetic information contained in visual cues when identifying a non-native phonemic contrast. In experiment 1, Spanish and Japanese learners of English were tested on their perception of a labial/labiodental consonant contrast in audio (A), visual (V), and audio-visual (AV) modalities. Spanish students showed better performance overall, and much greater sensitivity to visual cues, than Japanese students. Both learner groups achieved higher scores in the AV than in the A test condition, thus showing evidence of audio-visual benefit. Experiment 2 examined the perception of the less visually salient /l/-/r/ contrast in Japanese and Korean learners of English. Korean learners obtained much higher scores in the auditory and audio-visual conditions than in the visual condition, while Japanese learners generally performed poorly in both modalities. Neither group showed evidence of audio-visual benefit. These results show the impact of the language background of the learner and the visual salience of the contrast on the use of visual cues for a non-native contrast. Significant correlations between scores in the auditory and visual conditions suggest that increasing auditory proficiency in identifying a non-native contrast is linked with increasing proficiency in using visual cues to the contrast.
Haase, Claudia M.; Silbereisen, Rainer K.
Affective influences may play a key role in adolescent risk taking, but have rarely been studied. Using an audiovisual method of affect induction, two experimental studies examined the effect of positive affect on risk perceptions in adolescence and young adulthood. Outcomes were risk perceptions regarding drinking alcohol, smoking a cigarette,…
Sotirakis, H.; Kyvelidou, A.; Mademli, L.; Stergiou, N.
Postural tracking of visual motion cues improves perception–action coupling in aging, yet the nature of the visual cues to be tracked is critical for the efficacy of such a paradigm. We investigated how well healthy older (72.45 ± 4.72 years) and young (22.98 ± 2.9 years) adults can follow with their gaze and posture horizontally moving visual target cues of different degrees of complexity. Participants continuously tracked for 120 s the motion of a visual target (dot) that oscillated in three different patterns: a simple periodic pattern (simulated by a sine), a more complex pattern (simulated by the Lorenz attractor, which is deterministic but displays mathematical chaos), and an ultra-complex random pattern (simulated by surrogating the Lorenz attractor). The degree of coupling between performance (posture and gaze) and the target motion was quantified by the spectral coherence, gain, phase, and cross-approximate entropy (cross-ApEn) between signals. Sway–target coherence decreased as a function of target complexity and was lower for the older than for the young participants when tracking the chaotic target. On the other hand, gaze–target coherence was not affected by either target complexity or age. Yet, a lower cross-ApEn value when tracking the chaotic stimulus motion revealed a more synchronous gaze–target relationship for both age groups. Results suggest limitations in online visuo-motor processing of complex motion cues and a less efficient exploitation of body sway dynamics with age. Complex visual motion cues may provide a suitable training stimulus to improve visuo-motor integration and restore sway variability in older adults. PMID:27126061
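The cross-ApEn index used above quantifies gaze–target (a)synchrony by comparing template matches between the two signals at embedding lengths m and m+1: lower values mean more synchronous, more mutually predictable dynamics. The sketch below implements the closely related cross-sample-entropy variant (chosen because it avoids the log-of-zero corrections that per-template cross-ApEn requires); m and r are conventional defaults, not the study's settings:

```python
import numpy as np

def cross_sampen(u, v, m=2, r=0.2):
    """Cross-sample entropy between two equal-length time series.

    Lower values indicate more synchronous/regular joint dynamics.
    Both series are standardized; r is the match tolerance in SD units,
    m the embedding (template) length.
    """
    u = (np.asarray(u, float) - np.mean(u)) / np.std(u)
    v = (np.asarray(v, float) - np.mean(v)) / np.std(v)

    def match_count(m):
        n = len(u) - m + 1
        U = np.stack([u[i:i + m] for i in range(n)])
        V = np.stack([v[i:i + m] for i in range(n)])
        # Chebyshev distance between every u-template / v-template pair
        d = np.abs(U[:, None, :] - V[None, :, :]).max(axis=2)
        return np.count_nonzero(d <= r)

    B = match_count(m)       # template matches of length m
    A = match_count(m + 1)   # matches that stay within r one step longer
    return -np.log((A + 1e-12) / (B + 1e-12))
```

A signal tracked against itself yields a low value, while the same signal paired with unrelated noise yields a high one, mirroring the synchrony interpretation above.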
Perkins, Kara; Columna, Luis; Lieberman, Lauren; Bailey, JoEllen
Introduction: Ongoing communication with parents and the acknowledgment of their preferences and expectations are crucial to promote the participation of physical activity by children with visual impairments. Purpose: The study presented here explored parents' perceptions of physical activity for their children with visual impairments and explored…
Pieters, Stefanie; Desoete, Annemie; Roeyers, Herbert; Vanderswalmen, Ruth; Van Waelvelde, Hilde
In a sample of 39 children with mathematical learning disabilities (MLD) and 106 typically developing controls belonging to three control groups of three different ages, we found that visual perception, motor skills and visual-motor integration explained a substantial proportion of the variance in either number fact retrieval or procedural…
Contemporary research findings in the fields of perceptual psychology and neurology of the human brain that are directly related to the study of visual communication are reviewed and briefly discussed in this paper. Specifically, the paper identifies those major research findings in visual perception that are relevant to the study of visual…
de Graaf, Tom A; Koivisto, Mika; Jacobs, Christianne; Sack, Alexander T
Transcranial magnetic stimulation (TMS) continues to deliver on its promise as a research tool. In this review article we focus on the application of TMS to early visual cortex (V1, V2, V3) in studies of visual perception and visual awareness. Depending on the asynchrony between visual stimulus onset and TMS pulse (SOA), TMS can suppress visual perception, allowing one to track the time course of functional relevance (chronometry) of early visual cortex for vision. This procedure has revealed multiple masking effects ('dips'), some consistently (∼+100ms SOA) but others less so (∼-50ms, ∼-20ms, ∼+30ms, ∼+200ms SOA). We review the state of TMS masking research, focusing on the evidence for these multiple dips, the relevance of several experimental parameters to the obtained 'masking curve', and the use of multiple measures of visual processing (subjective measures of awareness, objective discrimination tasks, priming effects). Lastly, we consider possible future directions for this field. We conclude that while TMS masking has yielded many fundamental insights into the chronometry of visual perception already, much remains unknown. Not only are there several temporal windows when TMS pulses can induce visual suppression, even the well-established 'classical' masking effect (∼+100ms) may reflect more than one functional visual process.
Scarborough, Rebecca; Keating, Patricia; Mattys, Sven L; Cho, Taehong; Alwan, Abeer
In a study of optical cues to the visual perception of stress, three American English talkers spoke words that differed in lexical stress and sentences that differed in phrasal stress, while video and movements of the face were recorded. The production of stressed and unstressed syllables from these utterances was analyzed along many measures of facial movement, which were generally larger and faster in the stressed condition. In a visual perception experiment, 16 perceivers identified the location of stress in forced-choice judgments of video clips of these utterances (without audio). Phrasal stress was better perceived than lexical stress. The relation of the visual intelligibility of the prosody of these utterances to the optical characteristics of their production was analyzed to determine which cues are associated with successful visual perception. While most optical measures were correlated with perception performance, chin measures, especially Chin Opening Displacement, contributed the most to correct perception independently of the other measures. Thus, our results indicate that the information for visual stress perception is mainly associated with mouth opening movements.
Fesi, Jeremy D; Mendola, Janine D
The dissociation of a figure from its background is an essential feat of visual perception, as it allows us to detect, recognize, and interact with shapes and objects in our environment. In order to understand how the human brain gives rise to the perception of figures, we here review experiments that explore the links between activity in visual cortex and performance of perceptual tasks related to figure perception. We organize our review according to a proposed model that attempts to contextualize figure processing within the more general framework of object processing in the brain. Overall, the current literature provides us with individual linking hypotheses as to cortical regions that are necessary for particular tasks related to figure perception. Attempts to reach a more complete understanding of how the brain instantiates figure and object perception, however, will have to consider the temporal interaction between the many regions involved, the details of which may vary widely across different tasks.
Ebisch, Sjoerd J. H.; Salone, Anatolia; Martinotti, Giovanni; Carlucci, Leonardo; Mantini, Dante; Perrucci, Mauro G.; Saggino, Aristide; Romani, Gian Luca; Di Giannantonio, Massimo; Northoff, Georg; Gallese, Vittorio
Social perception commonly employs multiple sources of information. The present study aimed at investigating the integrative processing of affective social signals. Task-related and task-free functional magnetic resonance imaging was performed in 26 healthy adult participants during a social perception task concerning dynamic visual stimuli simultaneously depicting facial expressions of emotion and tactile sensations that could be either congruent or incongruent. Confounding effects due to affective valence, inhibitory top–down influences, cross-modal integration, and conflict processing were minimized. The results showed that the perception of congruent, compared to incongruent stimuli, elicited enhanced neural activity in a set of brain regions including left amygdala, bilateral posterior cingulate cortex (PCC), and left superior parietal cortex. These congruency effects did not differ as a function of emotion or sensation. A complementary task-related functional interaction analysis preliminarily suggested that amygdala activity depended on previous processing stages in fusiform gyrus and PCC. The findings provide support for the integrative processing of social information about others’ feelings from manifold bodily sources (sensory-affective information) in amygdala and PCC. Given that the congruent stimuli were also judged as being more self-related and more familiar in terms of personal experience in an independent sample of participants, we speculate that such integrative processing might be mediated by the linking of external stimuli with self-experience. Finally, the prediction of task-related responses in amygdala by intrinsic functional connectivity between amygdala and PCC during a task-free state implies a neuro-functional basis for an individual predisposition for the integrative processing of social stimulus content. PMID:27242474
Jarvis, G Eric
This article explored the origins and implications of the underdiagnosis of affective disorders in African-Americans. MEDLINE and old collections were searched using relevant key words. Reference lists from the articles that were gathered from this procedure were reviewed. The historical record indicated that the psychiatric perception of African-Americans with affective disorders changed significantly during the last 200 years. In the antebellum period, the mental disorders of slaves mostly went unnoticed. By the early 20th century, African-Americans were reported to have high rates of manic-depressive disorder compared with whites. By the mid-century, rates of manic-depressive disorder in African-Americans plummeted, whereas depression remained virtually nonexistent. In recent decades, diagnosed depression and bipolar disorder, whether in clinical or research settings, were inexplicably low in African-Americans compared with whites. Given these findings, American psychiatry needs to appraise the deep-seated effects of historical stereotypes on the diagnosis and treatment of African-Americans.
Hagan, Cindy C; Woods, Will; Johnson, Sam; Green, Gary G R; Young, Andrew W
Speech and emotion perception are dynamic processes in which it may be optimal to integrate synchronous signals emitted from different sources. Studies of audio-visual (AV) perception of neutrally expressed speech demonstrate supra-additive (i.e., where AV>[unimodal auditory+unimodal visual]) responses in left STS to crossmodal speech stimuli. However, emotions are often conveyed simultaneously with speech; through the voice in the form of speech prosody and through the face in the form of facial expression. Previous studies of AV nonverbal emotion integration showed a role for right (rather than left) STS. The current study therefore examined whether the integration of facial and prosodic signals of emotional speech is associated with supra-additive responses in left (cf. results for speech integration) or right (due to emotional content) STS. As emotional displays are sometimes difficult to interpret, we also examined whether supra-additive responses were affected by emotional incongruence (i.e., ambiguity). Using magnetoencephalography, we continuously recorded eighteen participants as they viewed and heard AV congruent emotional and AV incongruent emotional speech stimuli. Significant supra-additive responses were observed in right STS within the first 250 ms for emotionally incongruent and emotionally congruent AV speech stimuli, which further underscores the role of right STS in processing crossmodal emotive signals.
Matsuno, Toyomi; Fujita, Kazuo
Studies on the visual processing of primates, which have well developed visual systems, provide essential information about the perceptual bases of their higher-order cognitive abilities. Although the mechanisms underlying visual processing are largely shared between human and nonhuman primates, differences have also been reported. In this article, we review psychophysical investigations comparing the basic visual processing that operates in human and nonhuman species, and discuss the future contributions potentially deriving from such comparative psychophysical approaches to primate minds.
Zaidel, Adam; Goin-Kochel, Robin P; Angelaki, Dora E
Perceptual processing in autism spectrum disorder (ASD) is marked by superior low-level task performance and inferior complex-task performance. This observation has led to theories of defective integration in ASD of local parts into a global percept. Despite mixed experimental results, this notion maintains widespread influence and has also motivated recent theories of defective multisensory integration in ASD. Impaired ASD performance in tasks involving classic random dot visual motion stimuli, corrupted by noise as a means to manipulate task difficulty, is frequently interpreted to support this notion of global integration deficits. By manipulating task difficulty independently of visual stimulus noise, here we test the hypothesis that heightened sensitivity to noise, rather than integration deficits, may characterize ASD. We found that although perception of visual motion through a cloud of dots was unimpaired without noise, the addition of stimulus noise significantly affected adolescents with ASD, more than controls. Strikingly, individuals with ASD demonstrated intact multisensory (visual-vestibular) integration, even in the presence of noise. Additionally, when vestibular motion was paired with pure visual noise, individuals with ASD demonstrated a different strategy than controls, marked by reduced flexibility. This result could be simulated by using attenuated (less reliable) and inflexible (not experience-dependent) Bayesian priors in ASD. These findings question widespread theories of impaired global and multisensory integration in ASD. Rather, they implicate increased sensitivity to sensory noise and less use of prior knowledge in ASD, suggesting increased reliance on incoming sensory information.
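The simulation described in the closing sentences rests on standard Gaussian cue combination, in which an attenuated (less reliable) prior produces greater reliance on incoming sensory input. The inverse-variance arithmetic behind such a model can be sketched as follows; the numbers are illustrative, not fitted to the study's data:

```python
def fuse_gaussian(cue_means, cue_vars, prior_mean=0.0, prior_var=1.0):
    """Reliability-weighted (inverse-variance) combination of Gaussian
    cue likelihoods with a Gaussian prior.

    Returns the posterior mean and variance.  Each source (visual cue,
    vestibular cue, prior) is weighted by its precision (1/variance).
    """
    precisions = [1.0 / v for v in cue_vars] + [1.0 / prior_var]
    means = list(cue_means) + [prior_mean]
    post_precision = sum(precisions)
    post_mean = sum(p * mu for p, mu in zip(precisions, means)) / post_precision
    return post_mean, 1.0 / post_precision
```

With two hypothetical cues at 2.0 and 3.0 (unit variance), a strong prior at 0 pulls the estimate toward 0, whereas an attenuated prior (large variance) leaves the estimate near the reliability-weighted cue average, which is the pattern of increased reliance on incoming sensory information reported above.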
Pineo, Daniel; Ware, Colin
We present a method for automatically evaluating and optimizing visualizations using a computational model of human vision. The method relies on a neural network simulation of early perceptual processing in the retina and primary visual cortex. The neural activity resulting from viewing flow visualizations is simulated and evaluated to produce a metric of visualization effectiveness. Visualization optimization is achieved by applying this effectiveness metric as the utility function in a hill-climbing algorithm. We apply this method to the evaluation and optimization of 2D flow visualizations, using two visualization parameterizations: streaklet-based and pixel-based. An emergent property of the streaklet-based optimization is head-to-tail streaklet alignment. It had been previously hypothesized the effectiveness of head-to-tail alignment results from the perceptual processing of the visual system, but this theory had not been computationally modeled. A second optimization using a pixel-based parameterization resulted in a LIC-like result. The implications in terms of the selection of primitives is discussed. We argue that computational models can be used for optimizing complex visualizations. In addition, we argue that they can provide a means of computationally evaluating perceptual theories of visualization, and as a method for quality control of display methods.
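The optimization loop described above, which applies the model's effectiveness metric as the utility function of a hill-climbing algorithm, has the following generic shape. The quadratic toy utility stands in for the neural-network effectiveness metric, which is not reproduced here:

```python
import random

def hill_climb(utility, params, step=0.1, iters=500, seed=0):
    """Simple stochastic hill climbing: perturb one parameter at random
    and keep the change only if the utility improves."""
    rng = random.Random(seed)
    best = list(params)
    best_u = utility(best)
    for _ in range(iters):
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] += rng.uniform(-step, step)
        u = utility(cand)
        if u > best_u:            # accept only improving moves
            best, best_u = cand, u
    return best, best_u

# Toy utility with a known optimum at (1, -2); a real application would
# instead score the rendered visualization with the perceptual model.
toy = lambda p: -((p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2)
```

In the paper's setting, `params` would encode a visualization parameterization (streaklet-based or pixel-based) and `utility` the simulated neural response metric; the climber itself is agnostic to either.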
Bensmaïa, S. J.; Killebrew, J. H.; Craig, J. C.
Subjects were presented with pairs of tactile drifting sinusoids and made speed discrimination judgments. On some trials, a visual drifting sinusoid, which subjects were instructed to ignore, was presented simultaneously with one of the two tactile stimuli. When the visual and tactile gratings drifted in the same direction (i.e., from left to right), the visual distractors were found to increase the perceived speed of the tactile gratings. The effect of the visual distractors was proportional to their temporal frequency but not to their perceived speed. When the visual and tactile gratings drifted in opposite directions, the distracting effect of the visual distractors was either substantially reduced or, in some cases, reversed (i.e., the distractors slowed the perceived speed of the tactile gratings). This result suggests that the observed visual-tactile interaction is dependent on motion and not simply on the oscillations inherent in drifting sinusoids. Finally, we find that disrupting the temporal synchrony between the visual and tactile stimuli eliminates the distracting effect of the visual stimulus. We interpret this latter finding as evidence that the observed visual-tactile interaction operates at the sensory level and does not simply reflect a response bias. PMID:16723415
Almerigogna, Jehanne; Ost, James; Akehurst, Lucy; Fluck, Mike
We conducted two studies to examine how interviewers' nonverbal behaviors affect children's perceptions and suggestibility. In the first study, 42 8- to 10-year-olds watched video clips showing an interviewer displaying combinations of supportive and nonsupportive nonverbal behaviors and were asked to rate the interviewer on six attributes (e.g., friendliness, strictness). Smiling received high ratings on the positive attributes (i.e., friendly, helpful, and sincere), and fidgeting received high ratings on the negative attributes (i.e., strict, bored, and stressed). For the second study, 86 8- to 10-year-olds participated in a learning activity about the vocal cords. One week later, they were interviewed individually about the activity by an interviewer adopting either the supportive (i.e., smiling) or nonsupportive (i.e., fidgeting) behavior. Children questioned by the nonsupportive interviewer were less accurate and more likely to falsely report having been touched than were those questioned by the supportive interviewer. Children questioned by the supportive interviewer were also more likely to say that they did not know an answer than were children questioned by the nonsupportive interviewer. Participants in both conditions gave more correct answers to questions about central, as opposed to peripheral, details of the activity. Implications of these findings for the appropriate interviewing of child witnesses are discussed.
Pyo, Katrina A.
A review of the nursing literature reveals that many undergraduate nursing students lack proficiency with the basic mathematical skills necessary for safe medication preparation and administration. Few studies exploring the phenomenon from the undergraduate nursing student perspective are reported in the nursing literature. The purpose of this study was to explore undergraduate nursing students' perceptions of their math abilities, the factors that affect those abilities, the use of math in nursing, and the extent to which specific math skills were addressed throughout a nursing curriculum. Polya's model for problem solving and the affective domain of Bloom's taxonomy of educational objectives served as the theoretical background for the study. Qualitative and quantitative methods were used to obtain data from a purposive sample of undergraduate nursing students at a private university in western Pennsylvania. Participants were selected based on their proficiency with math skills, as determined by their scores on the math portion of Elsevier's HESI™ Admission Assessment (A2) Exam. Ten students from the "Excellent" benchmark group and eleven students from the "Needing Additional Assistance or Improvement" benchmark group participated in one-on-one, semi-structured interviews and completed a 25-item, 4-point Likert-scale survey that rated their confidence with specific math skills and the extent to which these skills were perceived to be addressed in the nursing curriculum. Responses from the two benchmark groups were compared and contrasted. Eight themes emerged from the qualitative data. Findings related to mathematical approach and confidence with specific math skills were determined to be statistically significant.
Sun, Wei; Fu, Qiang; Zhang, Chao; Manohar, Senthilvelan; Kumaraguru, Anand; Li, Ji
Tinnitus and hyperacusis, commonly seen in adults, are also reported in children. Although clinical studies have found that children with tinnitus and hyperacusis often suffer from recurrent otitis media, there is no direct study of how temporary hearing loss at an early age affects loudness perception. In this study, changes in sound loudness perception in rats caused by perforation of the tympanic membranes (TM) were studied using an operant-conditioning-based behavioral task. We detected significant increases in sound loudness and in susceptibility to audiogenic seizures (AGS) in rats with bilateral TM damage at postnatal day 16. As increased sound sensitivity is commonly seen in hyperacusis and tinnitus patients, these results suggest that early-age hearing loss is a high risk factor for tinnitus and hyperacusis in children. In the TM-damaged rats, we also detected reduced expression of GABA receptor δ and α6 subunits in the inferior colliculus (IC) compared to controls. Treatment with vigabatrin (60 mg/kg/day, 7-14 days), an anti-seizure drug that inhibits the catabolism of GABA, not only blocked AGS but also significantly attenuated the loudness response. Administration of vigabatrin following early-age TM damage could even prevent rats from developing AGS. These results suggest that TM damage at an early age may cause a permanent reduction of tonic GABA inhibition, which is critical for the maintenance of normal loudness processing in the IC. Increasing GABA concentration during the critical period may alleviate the impairment induced in the brain by early-age hearing loss.
Eskelund, Kasper; MacDonald, Ewen N; Andersen, Tobias S
We perceive identity, expression and speech from faces. While perception of identity and expression depends crucially on the configuration of facial features it is less clear whether this holds for visual speech perception. Facial configuration is poorly perceived for upside-down faces as demonstrated by the Thatcher illusion in which the orientation of the eyes and mouth with respect to the face is inverted (Thatcherization). This gives the face a grotesque appearance but this is only seen when the face is upright. Thatcherization can likewise disrupt visual speech perception but only when the face is upright indicating that facial configuration can be important for visual speech perception. This effect can propagate to auditory speech perception through audiovisual integration so that Thatcherization disrupts the McGurk illusion in which visual speech perception alters perception of an incongruent acoustic phoneme. This is known as the McThatcher effect. Here we show that the McThatcher effect is reflected in the McGurk mismatch negativity (MMN). The MMN is an event-related potential elicited by a change in auditory perception. The McGurk-MMN can be elicited by a change in auditory perception due to the McGurk illusion without any change in the acoustic stimulus. We found that Thatcherization disrupted a strong McGurk illusion and a correspondingly strong McGurk-MMN only for upright faces. This confirms that facial configuration can be important for audiovisual speech perception. For inverted faces we found a weaker McGurk illusion but, surprisingly, no MMN. We also found no correlation between the strength of the McGurk illusion and the amplitude of the McGurk-MMN. We suggest that this may be due to a threshold effect so that a strong McGurk illusion is required to elicit the McGurk-MMN.
Kim, Ko-Un; Kim, Soo-Han; An, Tae-Gyu
[Purpose] The purpose of this study was to examine the effects of transcranial direct current stimulation (tDCS) on visual perception and performance of activities of daily living in patients with stroke. [Subjects and Methods] Thirty subjects were assigned equally to a tDCS plus traditional occupational therapy group (experimental group) and a traditional occupational therapy group (control group). The intervention was implemented five times per week, 30 minutes each, for six weeks. In order to assess visual perception function before and after the intervention, the motor-free visual perception test (MVPT) was conducted, and in order to compare the performance of activities of daily living, the Functional Independence Measure scale was employed. [Results] According to the results, both groups improved in visual perception function and in performance of activities of daily living. Although there was no significant difference between the two groups, the experimental group exhibited higher scores. [Conclusion] In conclusion, the application of tDCS for the rehabilitation of patients with stroke may positively affect their visual perception and ability to perform activities of daily living. PMID:27799697
Taddei-Ferretti, C; Radilova, J; Musio, C; Santillo, S; Cibelli, E; Cotugno, A; Radil, T
Spontaneous figure reversal of ambiguous patterns was analyzed in humans. (A) With Necker-cube-like or drum-like figures having square- or round-shaped "front" and "rear" surfaces and either large or small "depth", the perceptual intervals corresponding to both interpretations of the "drum" were longer than those of the "cube"; the perceived "depth" of the figures was less relevant for reversal timing (inter-reversal intervals were only slightly longer for the "deeper" figures). Although the shape of the "front" and "rear" surfaces is not a crucial geometrical feature for figure reversal, it did influence its timing. More, or longer, information-processing steps are probably needed for perceptual representations of curvilinear patterns than for rectangular ones. The underlying neural mechanisms are probably located at a relatively peripheral level of the visual system. (B) With a modified Necker-cube-like figure, having the two internal vertices coincident and the long axis aligned horizontally, the effect of voluntary control on reversal timing overcame the opposite effects of either fixation-attention to the pattern's focal zones or subliminal stimulation by biased versions of the pattern suggesting one or the other interpretation, and was enhanced by concordant imagery. Voluntary control should intervene top-down at a high processing level and probably affects both a decision-making and a perception-stabilizing mechanism in the unconscious interpretation of the pattern. Results A and B are compared with recent findings on both perceptual and binocular rivalry, in the context of discussions of low-level, bottom-up, automatic stimulus-driven processing versus high-level, top-down, covert attention-driven processing.
Forneris, Tanya; Danish, Steven J.; Fries, Elizabeth
Goals for Health was a National Cancer Institute funded program designed to impact health behaviors of adolescents living in rural Virginia and New York. This study examined three specific objectives: (a) to examine participants' perceptions of the program components and the relationship between program components and overall program perception,…
Reavis, Eric A; Lee, Junghee; Wynn, Jonathan K; Narr, Katherine L; Njau, Stephanie N; Engel, Stephen A; Green, Michael F
People with schizophrenia typically show visual processing deficits on masking tasks and other performance-based measures, while people with bipolar disorder may have related deficits. The etiology of these deficits is not well understood. Most neuroscientific studies of perception in schizophrenia and bipolar disorder have focused on visual processing areas in the cerebral cortex, but perception also depends on earlier components of the visual system that few studies have examined in these disorders. Using diffusion weighted imaging (DWI), we investigated the structure of the primary sensory input pathway to the cortical visual system: the optic radiations. We used probabilistic tractography to identify the optic radiations in 32 patients with schizophrenia, 31 patients with bipolar disorder, and 30 healthy controls. The same participants also performed a visual masking task outside the scanner. We characterized the optic radiations with three structural measures: fractional anisotropy, mean diffusivity, and tract volume. We did not find significant differences in those structural measures across groups. However, we did find a significant correlation between the volume of the optic radiations and visual masking thresholds that was unique to the schizophrenia group and explained variance in masking performance above and beyond that previously accounted for by differences in visual cortex. Thus, individual differences in the volume of the optic radiations explained more variance in visual masking performance in the schizophrenia group than the bipolar or control groups. This suggests that individual differences in the structure of the subcortical visual system have an important influence on visual processing in schizophrenia.
Ogawa, Akitoshi; Macaluso, Emiliano
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) "matched vs. unmatched" conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio-visual "congruent vs. incongruent" between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio-visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio-visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio-visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices.
Kawachi, Yousuke; Grove, Philip M; Sakurai, Kenzo
We aimed to show that a single auditory tone crossmodally affects multiple visual events using a multiple stream/bounce display (SBD), consisting of two disk pairs moving toward each other at equal speeds, coinciding, and then moving apart in a two-dimensional (2-D) display. Temporal offsets between the coincidences of the two disk pairs (0 to ±240 ms) were manipulated by staggering motion onset between the pairs. A tone was presented at the coincidence timing of one of the disk pairs on half of the trials. Participants judged whether the disks in each of the two pairs appeared to stream through or bounce off each other. Results show that a tone presented at either disk pair's coincidence point promoted bouncing percepts in both disk pairs compared to no-tone trials. Perceived bouncing persisted in the disk pair whose coincidence was offset from 60 ms before to more than 120 ms after the audiovisual coincidence of the other disk pair. The temporal window of bounce promotion was comparable to that obtained with a conventional SBD. The interaction of a single auditory event with multiple visual events was also modulated by the experimental task (stream/bounce versus simultaneity judgments). These findings suggest that, using a single auditory cue, the perceptual system resolves the ambiguity of the motion of multiple disk pairs presented within the conventional temporal window of crossmodal interaction.
Shrager, Yael; Gold, Jeffrey J.; Hopkins, Ramona O.; Squire, Larry R.
A recent proposal that structures of the medial temporal lobe support visual perception in addition to memory challenges the long-standing idea that the ability to acquire new memories is separable from other cognitive and perceptual functions. In four experiments, we have put this proposal to a rigorous test. Six memory-impaired patients with well characterized lesions of either the hippocampal region or the hippocampal region plus additional medial temporal lobe structures were assessed on difficult tests of visual perceptual discrimination. Across all four experiments, the patients performed as well as controls. The results show that visual perception is intact in memory-impaired patients with damage to the medial temporal lobe even when perception is assessed with challenging tasks. Furthermore, the results support the principle that the ability to acquire new memories is a distinct cerebral function, dissociable from other perceptual and cognitive functions. PMID:16495450
Josef, Noam; Mann, Ofri; Sykes, António V; Fiorito, Graziano; Reis, João; Maccusker, Steven; Shashar, Nadav
Studies concerning the perceptual processes of animals are not only interesting, but are fundamental to the understanding of other developments in information processing among non-humans. Carefully used visual illusions have been proven to be an informative tool for understanding visual perception. In this behavioral study, we demonstrate that cuttlefish are responsive to visual cues involving texture gradients. Specifically, 12 out of 14 animals avoided swimming over a solid surface with a gradient picture that to humans resembles an illusionary crevasse, while only 5 out of 14 avoided a non-illusionary texture. Since texture gradients are well-known cues for depth perception in vertebrates, we suggest that these cephalopods were responding to the depth illusion created by the texture density gradient. Density gradients and relative densities are key features in distance perception in vertebrates. Our results suggest that they are fundamental features of vision in general, appearing also in cephalopods.
Houtkamp, Roos; Roelfsema, Pieter R.
The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some…
Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.
We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…
In recent years, a new scientific field known as network science has been emerging. Network science is concerned with understanding the structure and properties of networks. One concept that is commonly used in describing a network is how the nodes in the network cluster together. The current research applied the idea of clustering to the study of how phonological neighbors influence visual word recognition. The results of 2 experiments converge to show that words with neighbors that are highly clustered (i.e., are closely related in terms of sound) are recognized more slowly than are those having neighbors that are less clustered. This result is explained in terms of the principles of interactive activation where the interplay between phoneme and phonological word units is affected by the neighborhood structure of the word. It is argued that neighbors in more clustered neighborhoods become more active and directly compete with the target word, thereby slowing processing.
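The "how the nodes in the network cluster together" idea above has a standard graph-theoretic form: the local clustering coefficient, i.e. the fraction of a node's neighbour pairs that are themselves connected. As an illustration only (the study's own phonological-network construction and metric are not specified here), a minimal sketch:

```python
def clustering_coefficient(adj, node):
    """Local clustering coefficient of `node`.

    adj: dict mapping each node to the set of its neighbours
    (undirected graph, so u in adj[v] iff v in adj[u]).
    Returns the fraction of neighbour pairs that are linked.
    """
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0  # clustering is undefined/zero for fewer than 2 neighbours
    # Count edges among the neighbours (each unordered pair once).
    links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    return 2.0 * links / (k * (k - 1))
```

For a triangle every neighbour pair is linked, so the coefficient is 1.0; for the middle node of a three-node path it is 0.0. In the study's terms, a word whose phonological neighbours are themselves close in sound would sit in a highly clustered neighbourhood.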
Salomon, R; van Elk, M; Aspell, J E; Blanke, O
Recent studies have shown the importance of integrating multisensory information in the body representation for constituting self-consciousness. However, one idea that has received only scant attention is that our body representation is also constituted by knowledge of bodily visual characteristics (i.e. 'what I look like'). Here, in two experiments, we used a full body crossmodal congruency task in which visual distractors were presented on a photograph of the participant, of another person (either familiar or unfamiliar), or of an object. Results revealed that crossmodal congruency effects (CCEs) were enhanced in the 'self' condition compared to the 'other' condition. The CCE was similar for unfamiliar and familiar others, and CCEs for the object condition were significantly smaller. The results show that presentation of an irrelevant image of a body affects multimodal processing and that the effect is enhanced when that image is of the self. These results hold intriguing implications for body representation in social situations.
Suárez Coalla, Paz; Cuetos Vega, Fernando
Several studies have shown that a phonological deficit lies at the origin of developmental dyslexia, because dyslexics have marked difficulties in mapping orthographic codes to phonological codes. However, visual criteria are still used for the diagnosis of dyslexia and to develop methods of intervention. This study attempts to determine whether dyslexic children have visual problems. To this aim, dyslexic children and children without reading difficulties, matched by chronological age, participated in two experiments. One was based on the Reversal test; the other was a visual decision task in which participants had to decide whether two letters were the same or different, with 40 pairs of letters used to measure reaction times and errors. The results showed that dyslexics performed similarly to controls in detecting visual differences between stimuli. Developmental dyslexics do not appear to have visual perceptual problems, but rather a particular difficulty in retrieving the phonological code of graphemes.
Johnston, J. C.; Pashler, H.
The binding of identity and location information in disjunctive feature search was studied. Ss searched a heterogeneous display for a color or a form target, and reported both target identity and location. To avoid better than chance guessing of target identity (by choosing the target less likely to have been seen), the difficulty of the two targets was equalized adaptively; a mathematical model was used to quantify residual effects. A spatial layout was used that minimized postperceptual errors in reporting location. Results showed strong binding of identity and location perception. After correction for guessing, no perception of identity without location was found. A weak trend was found for accurate perception of target location without identity. We propose that activated features generate attention-calling "interrupt" signals, specifying only location; attention then retrieves the properties at that location.
Jordan, Timothy R; Sheen, Mercedes; Abedipour, Lily; Paterson, Kevin B
When observing a talking face, it has often been argued that visual speech to the left and right of fixation may produce differences in performance due to divided projections to the two cerebral hemispheres. However, while it seems likely that such a division in hemispheric projections exists for areas away from fixation, the nature and existence of a functional division in visual speech perception at the foveal midline remains to be determined. We investigated this issue by presenting visual speech in matched hemiface displays to the left and right of a central fixation point, either exactly abutting the foveal midline or else located away from the midline in extrafoveal vision. The location of displays relative to the foveal midline was controlled precisely using an automated, gaze-contingent eye-tracking procedure. Visual speech perception showed a clear right hemifield advantage when presented in extrafoveal locations but no hemifield advantage (left or right) when presented abutting the foveal midline. Thus, while visual speech observed in extrafoveal vision appears to benefit from unilateral projections to left-hemisphere processes, no evidence was obtained to indicate that a functional division exists when visual speech is observed around the point of fixation. Implications of these findings for understanding visual speech perception and the nature of functional divisions in hemispheric projection are discussed.
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Visual art, as interpreted by visual art teachers, is important for helping students process visual culture. This study attempts to describe the effect of incorporating visual culture, grounded in everyday aesthetic experiences, into the learning process of art education. The action research design, a qualitative approach, is conducted…
Bicevskis, Katie; Derrick, Donald; Gick, Bryan
Audio-visual [McGurk and MacDonald (1976). Nature 264, 746-748] and audio-tactile [Gick and Derrick (2009). Nature 462(7272), 502-504] speech stimuli enhance speech perception over audio stimuli alone. In addition, multimodal speech stimuli form an asymmetric window of integration that is consistent with the relative speeds of the various signals [Munhall, Gribble, Sacco, and Ward (1996). Percept. Psychophys. 58(3), 351-362; Gick, Ikegami, and Derrick (2010). J. Acoust. Soc. Am. 128(5), EL342-EL346]. In this experiment, participants were presented video of faces producing /pa/ and /ba/ syllables, both alone and with air puffs occurring synchronously and at different timings up to 300 ms before and after the stop release. Perceivers were asked to identify the syllable they perceived, and were more likely to respond that they perceived /pa/ when air puffs were present, with asymmetrical preference for puffs following the video signal-consistent with the relative speeds of visual and air puff signals. The results demonstrate that visual-tactile integration of speech perception occurs much as it does with audio-visual and audio-tactile stimuli. This finding contributes to the understanding of multimodal speech perception, lending support to the idea that speech is not perceived as an audio signal that is supplemented by information from other modes, but rather that primitives of speech perception are, in principle, modality neutral.
Gick, Bryan; Jóhannsdóttir, Kristín M.; Gibraiel, Diana; Mühlbauer, Jeff
A single pool of untrained subjects was tested for interactions across two bimodal perception conditions: audio-tactile, in which subjects heard and felt speech, and visual-tactile, in which subjects saw and felt speech. Identifications of English obstruent consonants were compared in bimodal and no-tactile baseline conditions. Results indicate that tactile information enhances speech perception by about 10 percent, regardless of which other mode (auditory or visual) is active. However, within-subject analysis indicates that individual subjects who benefit more from tactile information in one cross-modal condition tend to benefit less from tactile information in the other. PMID:18396924
Hosono, Yuki; Kitaoka, Kazuyoshi; Urushihara, Ryo; Séi, Hiroyoshi; Kinouchi, Yohsuke
It has been reported that negative emotional changes and conditions affect human visual faculties at the neural level. However, the effects of emotion specifically on color perception, as indexed by evoked potentials, are unknown. In the present study, we investigated whether different anxiety levels affect color information processing at each of three wavelengths by using flash visual evoked potentials (FVEPs) and the State-Trait Anxiety Inventory. Significant positive correlations were observed between FVEP amplitudes and state or trait anxiety scores in the long (sensed as red) and middle (sensed as green) wavelengths. In contrast, short-wavelength-evoked FVEPs were not correlated with anxiety level. Our results suggest that negative emotional conditions may affect color sense processing in humans.
Woods, Adam J; Mennemeier, Mark; Garcia-Rill, Edgar; Huitt, Tiffany; Chelette, Kenneth C; McCullough, Gary; Munn, Tiffany; Brown, Ginger; Kiser, Thomas S
The relationship between arousal, perception, and visual neglect was examined in this case study. Cold pressor stimulation (CPS: immersing the foot in iced water) was used to manipulate arousal and to determine its effects on contralesional neglect, perception of stimulus intensity (magnitude estimation), reaction time, and an electrophysiological correlate of ascending reticular activating system activity (i.e., the P50 potential). Measures that normalized from baseline following CPS included contralesional neglect on a clock drawing test, perception of stimulus magnitude, and P50 amplitude. The P50 amplitude returned to its abnormally low baseline level 20 min after CPS ended, indicating that CPS increased arousal.
Study, Nancy E.
Compares results of Successive Perception Test I (SPT) for the study population of freshman engineering students to their results on the group-administered Purdue Spatial Visualization Test: Visualization of Rotations (PSVT) and the individually administered Haptic Visual Discrimination Test (HVDT). Concludes that either visual and haptic…
Sadikaj, Gentiana; Moskowitz, D S; Russell, Jennifer J; Zuroff, David C; Paris, Joel
We examined how the amplification of 3 within-person processes (behavioral reactivity to interpersonal perceptions, affect reactivity to interpersonal perceptions, and behavioral reactivity to a person's own affect) accounts for greater quarrelsome behavior among individuals with borderline personality disorder (BPD). Using an event-contingent recording (ECR) methodology, individuals with BPD (N = 38) and community controls (N = 31) reported on their negative affect, quarrelsome behavior, and perceptions of the interaction partner's agreeable-quarrelsome behavior in interpersonal events during a 20-day period. Behavioral reactivity to negative affect was similar in both groups. However, behavioral reactivity and affect reactivity to interpersonal perceptions were elevated in individuals with BPD relative to community controls; specifically, individuals with BPD reported more quarrelsome behavior and more negative affect during interactions in which they perceived others as more cold-quarrelsome. Greater negative affect reactivity to perceptions of others' cold-quarrelsome behavior partly accounted for the increased quarrelsome behavior reported by individuals with BPD during these interactions. This pattern of results suggests a cycle in which the perception of cold-quarrelsome behavior in others triggers elevated negative affect and quarrelsome behavior in individuals with BPD, which in turn elicits more quarrelsome behavior from their interaction partners, reinforcing the perception of others as cold-quarrelsome and beginning the cycle anew.
Lopez, Christophe; Bachofner, Christelle; Mercier, Manuel; Blanke, Olaf
Since human behavior and perception have evolved within the Earth's gravitational field, humans possess an internal model of gravity. Although gravity is known to influence the visual perception of moving objects, the evidence is less clear concerning the visual perception of static objects. We investigated whether a visual judgment of the stability of human body postures (static postures of a human standing on a platform and tilted in the roll plane) may also be influenced by gravity and by the participant's orientation. Pictures of human body postures were presented in different orientations with respect to gravity and the participant's body. The participant's body was aligned to gravity (upright) or not (lying on one side). Participants performed stability judgments with respect to the platform, imagining that gravity operates in the direction indicated by the platform (that was or was not concordant with physical gravity). Such visual judgments were influenced by the picture's orientation with respect to physical gravity. When pictures were tilted by 90 degrees with respect to physical gravity, the human postures that were tilted toward physical gravity (down) were perceived as more unstable than similar postures tilted away from physical gravity (up). Stability judgments were also influenced by the picture's orientation with respect to the participant's body. This indicates that gravity and the participant's body position may influence the visual perception of static objects.
Cai, Tingting; Zhu, Huilin; Xu, Jie; Wu, Shijing; Li, Xinge; He, Sailing
Functional near-infrared spectroscopy (fNIRS) was adopted to investigate the cortical neural correlates of visual fatigue during binocular depth perception for different disparities (from 0.1° to 1.5°). By using a slow event-related paradigm, the oxyhaemoglobin (HbO) responses to fused binocular stimuli presented by the random-dot stereogram (RDS) were recorded over the whole visual dorsal area. To extract from an HbO curve the characteristics that are correlated with subjective experiences of stereopsis and visual fatigue, we proposed a novel method to fit the time-course HbO curve with various response functions which could reflect various processes of binocular depth perception. Our results indicate that the parietal-occipital cortices are spatially correlated with binocular depth perception and that the process of depth perception includes two steps, associated with generating and sustaining stereovision. Visual fatigue is caused mainly by generating stereovision, while the amplitude of the haemodynamic response corresponding to sustaining stereovision is correlated with stereopsis. Combining statistical parameter analysis and the fitted time-course analysis, fNIRS could be a promising method to study visual fatigue and possibly other multi-process neural bases. PMID:28207899
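The abstract above describes fitting an HbO time course with response functions but does not give their exact forms. As a hypothetical sketch only: for any fixed-shape, gamma-like response function (the shape here is assumed, not the study's), fitting its amplitude to a recorded curve reduces to one-parameter linear least squares.

```python
import numpy as np

def gamma_like_hrf(t, shape=4.0, scale=1.5):
    """A gamma-like response curve normalized to unit peak.

    Hypothetical stand-in for the study's response functions;
    `shape` and `scale` are illustrative, not published values.
    """
    h = np.power(np.clip(t, 0.0, None), shape) * np.exp(-t / scale)
    return h / h.max()

def fit_amplitude(t, hbo):
    """Least-squares amplitude beta for hbo ~= beta * hrf(t).

    Minimizing ||hbo - beta * h||^2 gives beta = <h, hbo> / <h, h>.
    """
    h = gamma_like_hrf(t)
    return float(h @ hbo) / float(h @ h)
```

Fitting a sum of such components with different latencies, one per hypothesized sub-process (e.g. generating versus sustaining stereovision), is the multi-response-function idea the abstract alludes to; the amplitudes then index each sub-process separately.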
Background. Understanding the functions of different brain areas has been a major endeavor of the neurosciences. Historically, brain functions have been associated with specific cortical areas; however, modern neuroimaging developments suggest that cognitive functions are associated with networks rather than with single areas. Objectives. The purpose of this paper was to analyze the connectivity of Brodmann area (BA) 37 (posterior, inferior, and temporal/fusiform gyrus) in relation to (1) language and (2) visual processing. Methods. Two meta-analyses were initially conducted (first level analysis). The first was intended to assess the language network in which BA37 is involved; the second, the visual perception network. A third meta-analysis (second level analysis) was then performed to assess contrasts and convergence between the two cognitive domains (language and visual perception). The BrainMap database was used. Results. Our results support the role of BA37 in language, but by means of a network distinct from the one supporting its second most important function: visual perception. Conclusion. It was concluded that left BA37 is a common node of two distinct networks: visual recognition (perception) and semantic language functions. PMID:25648869
Iarocci, Grace; Rombough, Adrienne; Yager, Jodi; Weeks, Daniel J.; Chua, Romeo
The bimodal perception of speech sounds was examined in children with autism as compared to mental age--matched typically developing (TD) children. A computer task was employed wherein only the mouth region of the face was displayed and children reported what they heard or saw when presented with consonant-vowel sounds in unimodal auditory…
In "The Renaissance Rediscovery of Linear Perspective," one of Samuel Edgerton's claims is that Filippo Brunelleschi and his contemporaries did not develop a three-dimensional style of representing the world in painting as much as they reappropriated a way to depict the natural world in painting that most mirrored the human perception of it.…
The model of sentence perception proposed by Fodor, Bever and Garrett (1974) emphasizes the importance of grammatical cues signalling clause boundaries, and suggests that segmentation of a sentence into clauses precedes computation of the internal structure of those clauses. However, this model has nothing to say about the many sentences in which…
McTigue, Erin M.; Flowers, Amanda C.
Constructing meaning from science texts relies not only on comprehending the words but also the diagrams and other graphics. The goal of this study was to explore elementary students' perceptions of science diagrams and their skills related to diagram interpretation. 30 students, ranging from second grade through middle school, completed a diagram…
Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W; Spence, C
Multisensory integration (MSI) and exogenous spatial attention can both speed up responses to perceptual events. Recently, it has been shown that audiovisual integration at exogenously attended locations is reduced relative to unattended locations. This effect was observed at short cue-target intervals (200-250 ms). At longer intervals, however, the initial benefits of exogenous shifts of spatial attention at the cued location are often replaced by response time (RT) costs (also known as Inhibition of Return, IOR). Given these opposing cueing effects at shorter versus longer intervals, we investigated whether MSI would also be affected by IOR. Uninformative exogenous visual spatial cues were presented between 350 and 450 ms prior to the onset of auditory, visual, and audiovisual targets. As expected, IOR was observed for visual targets (invalid cue RT < valid cue RT). For auditory and audiovisual targets, neither IOR nor any spatial cueing effects were observed. The amount of relative multisensory response enhancement and race model inequality violation was larger for uncued than for cued locations, indicating that IOR reduces MSI. The results are discussed in the context of changes in unisensory signal strength at cued as compared with uncued locations.
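The race model inequality violation mentioned in the preceding abstract has a standard computational form (Miller's bound): integration beyond statistical facilitation is inferred wherever the cumulative RT distribution for audiovisual targets exceeds the sum of the two unisensory distributions. A minimal sketch using empirical CDFs (function and variable names are illustrative, not the study's code):

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative RT distribution evaluated on a time grid (ms)."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Miller's race model inequality: F_AV(t) <= F_A(t) + F_V(t).

    Returns F_AV(t) - min(F_A(t) + F_V(t), 1) on the grid; positive values
    mark violations, i.e. redundancy gains exceeding probability summation.
    """
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound
```

In practice the violation is usually tested only over the fastest quantiles of the RT distributions; comparing the violation curves for cued versus uncued locations is the kind of contrast the abstract reports.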
Klein, Sheryl; Guiltner, Val; Sollereder, Patti; Cui, Ying
Occupational therapists assess fine motor, visual motor, visual perception, and visual skill development, but knowledge of the relationships between scores on sensorimotor performance measures and handwriting legibility and speed is limited. Ninety-nine students in grades three to six with learning and/or behavior problems completed the Upper-Limb Speed and Dexterity Subtest of the Bruininks-Oseretsky Test of Motor Proficiency, the Beery-Buktenica Developmental Test of Visual-Motor Integration-5th Edition, the Test of Visual Perceptual Skills-Revised, the Visual Skills Appraisal, and a handwriting copying task. Correlations between sensorimotor performance scores and handwriting legibility varied from .07 to .38. Correlations between sensorimotor performance scores and handwriting speed varied from .04 to .42. Stepwise multiple regression analysis indicated that the variance in handwriting explained by these measures was ≤ 20% for legibility and ≤ 26% for speed. On the basis of multivariate analysis of variance only scores for the Developmental Test of Visual-Motor Integration differed between students classified as "skilled" and "unskilled" handwriters. The low magnitude of the correlations and variance explained by the sensorimotor performance measures supports the need for occupational therapists to consider additional factors that may impact handwriting of students with learning and/or behavior problems.
Pons, Ferran; Andreu, Llorenc; Sanz-Torrent, Monica; Buil-Legaz, Lucia; Lewkowicz, David J.
Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the…
Braden, Roberts A., Ed.; And Others
These proceedings contain 37 papers from 51 authors noted for their expertise in the field of visual literacy. The collection is divided into three sections: (1) "Examining Visual Literacy" (including, in addition to a 7-year International Visual Literacy Association bibliography covering the period from 1983-1989, papers on the perception of…
Mereu, Stefania; Zacks, Jeffrey M; Kurby, Christopher A; Lleras, Alejandro
Recent studies of rapid resumption-an observer's ability to quickly resume a visual search after an interruption-suggest that predictions underlie visual perception. Previous studies showed that when the search display changes unpredictably after the interruption, rapid resumption disappears. This conclusion is at odds with our everyday experience, where the visual system seems to be quite efficient despite continuous changes of the visual scene; however, in the real world, changes can typically be anticipated based on previous knowledge. The present study aimed to evaluate whether changes to the visual display can be incorporated into the perceptual hypotheses, if observers are allowed to anticipate such changes. Results strongly suggest that an interrupted visual search can be rapidly resumed even when information in the display has changed after the interruption, so long as participants not only can anticipate them, but also are aware that such changes might occur. PMID:24820440
Noy, N; Bickel, S; Zion-Golumbic, E; Harel, M; Golan, T; Davidesco, I; Schevon, C A; McKhann, G M; Goodman, R R; Schroeder, C E; Mehta, A D; Malach, R
Despite extensive research, the spatiotemporal span of neuronal activations associated with the emergence of a conscious percept is still debated. The debate can be formulated in the context of local vs. global models, emphasizing local activity in visual cortex vs. a global fronto-parietal "workspace" as the key mechanisms of conscious visual perception. These alternative models lead to differential predictions with regard to the precise magnitude, timing and anatomical spread of neuronal activity during conscious perception. Here we aimed to test a specific aspect of these predictions in which local and global models appear to differ - namely the extent to which fronto-parietal regions modulate their activity during task performance under similar perceptual states. So far the main experimental results relevant to this debate have been obtained from non-invasive methods and led to conflicting interpretations. Here we examined these alternative predictions through large-scale intracranial measurements (Electrocorticogram - ECoG) in 43 patients and 4445 recording sites. Both ERP and broadband high frequency (50-150 Hz - BHF) responses were examined through the entire cortex during a simple 1-back visual recognition memory task. Our results reveal short latency intense visual responses, localized first in early visual cortex followed (at ∼200 ms) by higher order visual areas, but failed to show significant delayed (300 ms) visual activations. By contrast, oddball image repeat events, linked to overt motor responses, were associated with a significant increase in a delayed (300 ms) peak of BHF power in fronto-parietal cortex. Comparing BHF responses with ERP revealed an additional peak in the ERP response - having a similar latency to the well-studied P3 scalp EEG response. Posterior and temporal regions demonstrated robust visual category selectivity. An unexpected observation was that high-order visual cortex responses were essentially concurrent (at ∼200 ms
Bae, Sung-Ho; Kim, Munchurl
Computational models for image quality assessment (IQA) have been developed by exploring effective features that are consistent with the characteristics of the human visual system (HVS) for visual quality perception. In this paper, we first reveal that many existing features used in computational IQA methods can hardly characterize visual quality perception for local image characteristics and various distortion types. To solve this problem, we propose a new IQA method, called the Structural Contrast-Quality Index (SC-QI), by adopting a structural contrast index (SCI), which can well characterize local and global visual quality perceptions for various image characteristics with structural-distortion types. In addition to SCI, we devise some other perceptually important features for our SC-QI that can effectively reflect the characteristics of the HVS for contrast sensitivity and chrominance component variation. Furthermore, we develop a modified SC-QI, called the structural contrast distortion metric (SC-DM), which inherits the desirable mathematical properties of a valid distance metric and quasi-convexity, so it can effectively be used as a distance metric for image quality optimization problems. Extensive experimental results show that both SC-QI and SC-DM can very well characterize the HVS's properties of visual quality perception for local image characteristics and various distortion types, which is a distinctive merit of our methods compared with other IQA methods. As a result, both SC-QI and SC-DM achieve better performance, with a strong consilience of global and local visual quality perception as well as much lower computational complexity, compared with state-of-the-art IQA methods. The MATLAB source codes of the proposed SC-QI and SC-DM are publicly available online at https://sites.google.com/site/sunghobaecv/iqa.
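As a rough illustration of the class of local features such IQA models build on (explicitly not the authors' SCI/SC-QI formula, which is defined in the paper and its MATLAB release), an SSIM-style contrast-comparison term between two image patches looks like:

```python
import numpy as np

def local_contrast_similarity(x, y, c=1e-3):
    """Generic SSIM-style contrast comparison between two patches.
    Returns 1.0 when the patches have identical local contrast
    (standard deviation) and approaches 0 as contrasts diverge.
    The stabilizing constant c avoids division by zero on flat patches."""
    sx, sy = x.std(), y.std()
    return (2 * sx * sy + c) / (sx**2 + sy**2 + c)
```

Full-reference IQA metrics typically pool a term like this over a sliding window across the reference and distorted images; SC-QI's contribution, per the abstract, is a structural contrast index that remains predictive across local image characteristics and distortion types where simpler terms fail.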
Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan
In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…
There has been great interest in recent years in visual coordination and target tracking for mobile robots cooperating in unstructured environments. This paper describes visual servo control techniques suitable for intelligent task planning of cooperative robots operating in an unstructured environment. We consider a team of semi-autonomous robots controlled by a remote supervisory control system and present an algorithm for visual position tracking of individual cooperative robots within their working environment. First, we present a technique suitable for visual servoing of a robot toward its landmark targets. Second, we present an image-processing technique that utilizes images from a remote surveillance camera, which can be either stationary or mobile, for localization of the robots within the operational environment. The supervisory control system keeps track of the relative locations of individual robots and uses this coordinate information to plan their cooperative activities. We present results that illustrate the effectiveness of the proposed algorithms for visual teamwork and target tracking in cooperative robotic systems.
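The paper's servoing details are not reproduced here, but the core idea of image-based visual servoing toward a landmark can be sketched as a proportional controller on the image-plane error. Everything below (the names, the gain, and the assumption that camera and motion axes are aligned) is illustrative only, not the authors' control law:

```python
import numpy as np

def ibvs_step(feature_px, target_px, gain=0.5):
    """One step of a minimal image-based visual servo law: command a
    velocity proportional to the pixel error between where the tracked
    landmark appears (feature_px) and where it should appear (target_px).
    Iterating this drives the image-plane error to zero, and with it the
    robot's pose error relative to the landmark."""
    error = np.asarray(target_px, dtype=float) - np.asarray(feature_px, dtype=float)
    return gain * error  # 2-D velocity command
```

Real IBVS controllers map the image error through an interaction (image Jacobian) matrix into full camera-frame velocities; the proportional structure shown here is the common core.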
Sutherland, Clare A M; Thut, Gregor; Romei, Vincenzo
Rapidly approaching (looming) sounds are ecologically salient stimuli that are perceived as nearer than they are due to overestimation of their loudness change and underestimation of their distance (Neuhoff, 1998; Seifritz et al., 2002). Despite evidence for crossmodal influence by looming sounds onto visual areas (Romei, Murray, Cappe, & Thut, 2009, 2013; Tyll et al., 2013), it is unknown whether such sounds bias visual percepts in similar ways. Nearer objects appear to be larger and brighter than distant objects. If looming sounds impact visual processing, then visual stimuli paired with looming sounds should be perceived as brighter and larger, even when the visual stimuli do not provide motion cues, i.e. are static. In Experiment 1 we found that static visual objects paired with looming tones (but not static or receding tones) were perceived as larger and brighter than their actual physical properties, as if they appear closer to the observer. In a second experiment, we replicate and extend the findings of Experiment 1. Crucially, we did not find evidence of such bias by looming sounds when visual processing was disrupted via masking or when catch trials were presented, ruling out simple response bias. Finally, in a third experiment we found that looming tones do not bias visual stimulus characteristics that do not carry visual depth information such as shape, providing further evidence that they specifically impact in-depth visual processing. We conclude that looming sounds impact visual perception through a mechanism transferring in-depth sound motion information onto the relevant in-depth visual dimensions (such as size and luminance but not shape) in a crossmodal remapping of information for a genuine, evolutionary advantage in stimulus detection.
Shourie, Nasrin; Firoozabadi, Mohammad; Badie, Kambiz
In this paper, differences between multichannel EEG signals of artists and nonartists were analyzed during visual perception and mental imagery of some paintings, and at a resting condition, using approximate entropy (ApEn). It was found that ApEn is significantly higher for artists during the visual perception and the mental imagery in the frontal lobe, suggesting that artists process more information during these conditions. It was also observed that ApEn decreases for the two groups during the visual perception due to increasing mental load; however, their variation patterns are different. This difference may be used for measuring progress in novice artists. In addition, it was found that ApEn is significantly lower during the visual perception than the mental imagery in some of the channels, suggesting that the visual perception task requires more cerebral effort. PMID:25133180
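Approximate entropy, the regularity statistic used in this study, is a standard algorithm (Pincus, 1991): lower values mean a more regular, predictable signal. A minimal NumPy sketch, not the authors' implementation:

```python
import numpy as np

def apen(x, m=2, r_factor=0.2):
    """Approximate entropy of a 1-D signal.
    m is the embedding dimension; the tolerance r is the conventional
    r_factor * std(x). Smaller values indicate a more regular signal."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def phi(mm):
        # all length-mm template vectors of the signal
        emb = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # fraction of templates within tolerance (self-matches included)
        c = (d <= r).mean(axis=1)
        return np.log(c).mean()

    return phi(m) - phi(m + 1)
```

A noisy signal yields a markedly higher ApEn than a periodic one, which is the sense in which "higher ApEn" in the abstract is read as more information being processed.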
Hack, Zarita Caplan; Erber, Norman P.
Vowels were presented through auditory, visual, and auditory-visual modalities to 18 hearing impaired children (12 to 15 years old) having good, intermediate, and poor auditory word recognition skills. All the groups had difficulty with acoustic information and visual information alone. The first two groups had only moderate difficulty identifying…
Rosa Salva, Orsola; Sovrano, Valeria Anna; Vallortigara, Giorgio
Fish are a complex taxonomic group, whose diversity and distance from other vertebrates well suits the comparative investigation of brain and behavior: in fish species we observe substantial differences with respect to the telencephalic organization of other vertebrates and an astonishing variety in the development and complexity of pallial structures. We will concentrate on the contribution of research on fish behavioral biology for the understanding of the evolution of the visual system. We shall review evidence concerning perceptual effects that reflect fundamental principles of the visual system functioning, highlighting the similarities and differences between distant fish groups and with other vertebrates. We will focus on perceptual effects reflecting some of the main tasks that the visual system must attain. In particular, we will deal with subjective contours and optical illusions, invariance effects, second order motion and biological motion and, finally, perceptual binding of object properties in a unified higher level representation. PMID:25324728
Bhalla, M.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)
In 4 experiments, it was shown that hills appear steeper to people who are encumbered by wearing a heavy backpack (Experiment 1), are fatigued (Experiment 2), are of low physical fitness (Experiment 3), or are elderly and/or in declining health (Experiment 4). Visually guided actions are unaffected by these manipulations of physiological potential. Although dissociable, the awareness and action systems were also shown to be interconnected. Recalibration of the transformation relating awareness and actions was found to occur over long-term changes in physiological potential (fitness level, age, and health) but not with transitory changes (fatigue and load). Findings are discussed in terms of a time-dependent coordination between the separate systems that control explicit visual awareness and visually guided action.
Vadakkan, Kunjumon I
Perception is a first-person internal sensation induced within the nervous system at the time of arrival of sensory stimuli from objects in the environment. Lack of access to these first-person properties has limited researchers to viewing perception as an emergent property, and it is currently studied using third-person observed findings from various levels. One feasible approach to understanding its mechanism is to build a hypothesis for the specific conditions and required circuit features of the nodal points where the mechanistic operation of perception takes place for one type of sensation in one species, and to verify it for the presence of comparable circuit properties for perceiving a different sensation in a different species. The present work explains visual perception in the mammalian nervous system from a first-person frame of reference and provides explanations for the homogeneity of perception of visual stimuli above flicker fusion frequency, the perception of objects at locations different from their actual position, the smooth pursuit and saccadic eye movements, the perception of object borders, and the perception of pressure phosphenes. Using results from temporal resolution studies and the known details of visual cortical circuitry, explanations are provided for (a) the perception of rapidly changing visual stimuli, (b) how the perception of objects occurs in the correct orientation even though, according to the third-person view, activity from the visual stimulus reaches the cortices in an inverted manner, and (c) the functional significance of the well-conserved columnar organization of the visual cortex. A comparable circuitry detected in a different nervous system in a remote species - the olfactory circuitry of the fruit fly Drosophila melanogaster - provides an opportunity to explore circuit functions using genetic manipulations, which, along with high-resolution microscopic techniques and lipid membrane interaction studies, will be able to verify the structure
Background: Different complex systems behave in a similar way near the critical points of their phase transitions, which leads to the emergence of universal scaling behaviour. Universality indirectly implies a long-range correlation between constituent subsystems. As distributed, correlated processing is a hallmark of higher complex cognition, I investigated a measure of universality in the human brain during perception and mental imagery of complex real-life visual objects such as visual art. Methodology/Principal Findings: A new method was presented to estimate the strength of hidden universal structure in a multivariate data set. In this study, I investigated this method in the electrical activities (electroencephalogram signals) of the human brain during complex cognition. Two broad groups - artists and non-artists - were studied during the encoding (perception) and retrieval (mental imagery) phases of actual paintings. Universal structure was found to be stronger in visual imagery than in visual perception, and this difference was stronger in artists than in non-artists. Further, this effect was found to be largest in the theta band oscillations and over the prefrontal regions bilaterally. Conclusions/Significance: Phase transition-like dynamics was observed in the electrical activities of the human brain during complex cognitive processing, and closeness to phase transition was higher in mental imagery than in real perception. Further, the effect of long-term training on the universal scaling was also demonstrated. PMID:19122817
Zebehazy, Kim T.; Wilton, Adam P.
Introduction: This study analyzed the responses of a survey of students with visual impairments in Canada and the United States about their use of tactile and print graphics. Demographic, Likert scale, and open-ended questions focused on perceptions of quality, preferences, instruction, and strategies. Methods: Percentages of agreement for tactile…
Brennan, Susan A.; Luze, Gayle J.; Peterson, Carla
This survey explored the emergent literacy experiences that parents provided for their children with visual impairments, aged 1-8, as well as the parents' perceptions of the professional support that they received to facilitate these activities. The results indicated that the parents and children engaged in reading, singing songs, and writing or…
Memis, Aysel; Sivri, Diler Ayvaz
In this study, primary school first grade students' reading skills and visual perception levels were investigated. The sample of the study, which was designed with a relational scanning model, consisted of 168 first grade students studying at three public primary schools in Kozlu, Zonguldak, in the 2013-2014 education year. Students' reading level, reading…
Wallbrown, Jane D.; And Others
The intent of this study was to determine whether the Minnesota Percepto-Diagnostic Test (Fuller, 1969; Fuller & Laird, 1963) is more effective than the Bender-Gestalt (Bender, 1937) with respect to identifying achievement-related errors in visual-motor perception. (Author/RK)
Mamah, Vincent; Deku, Prosper; Darling, Sharon M.; Avoke, Selete K.
This study was undertaken to examine the university teachers' perception of including students with Visual Impairment (VI) in the public universities of Ghana. The sample consisted of 110 teachers from the University of Cape Coast (UCC), the University of Education, Winneba, (UEW), and the University of Ghana (UG). Data were collected through…
Darker, Iain T.; Jordan, Timothy R.
The findings of previous investigations into word perception in the upper and the lower visual field (VF) are variable and may have incurred non-perceptual biases caused by the asymmetric distribution of information within a word, an advantage for saccadic eye-movements to targets in the upper VF and the possibility that stimuli were not projected…
Nesterik, Ella V.; Issina, Gaukhar I.; Pecherskikh, Taliya F.; Belikova, Oxana V.
The article is devoted to the subjective perception of time, or psychological time, as a text category and a literary image. It focuses on the visual images that are characteristic of different types of literary time--accelerated, decelerated and frozen (vanished). The research is based on the assumption that the category of subjective perception…
Munoz-Ruata, J.; Caro-Martinez, E.; Perez, L. Martinez; Borja, M.
Background: Perception disorders are frequently observed in persons with intellectual disability (ID) and their influence on cognition has been discussed. The objective of this study is to clarify the mechanisms behind these alterations by analysing the visual event related potentials early component, the N1 wave, which is related to perception…
Chiang, Shyh-Bao; Sun, Chun-Wang
This research examined how cognitive development of imagery is formed through visual perception, by means of a quantitative questionnaire. The main variable was the difference between the learning backgrounds of the interviewees. A two-way ANOVA mixed design was the statistical method used for the analysis of the 2 × 4…
DuBosque, Richard Stanborough
The widespread integration of the computer into the mainstream of daily life presents a challenge to various sectors of society, and the incorporation of this technology into the realm of the older individual with visual impairments is a relatively uncharted field of study. This study was undertaken to acquire the perceptions of the impact of the…
Examines significant teacher behavior which positively affects students' perceptions of the teacher-student relationship. Identifies specific self-disclosive statements which students attributed to good and/or poor teachers. Finds a significant relationship between teachers' self-disclosive statements and students' perceptions. (MM)
Cheung, Phoebe P P; Poon, Magdelene Y C; Leung, Macy; Wong, Rosanna
This research study investigated the visual-perceptual performance of children in Hong Kong by comparing them to the accepted norms on the Developmental Test of Visual Perception-2nd edition. The research examined whether there were significant differences by gender, age, and grade. The normative study recruited 289 children between the ages of 6 and 7 in normal primary schools in Hong Kong. Results indicated that there was a ceiling effect in the eye-hand coordination, position in space, and spatial relations subtests. Grade differences were found to be significant in all subtests except eye-hand coordination and visual-motor speed. On the other hand, there was no statistical difference in the test scores between boys and girls except on the copying and figure-ground subtests. It is concluded that there is a strong need to ensure that norms for visual-perceptual tests are appropriate for the specific cultural groups being assessed.
Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M; Bandera, Juan P; Romero-Garces, Adrian; Reche-Lopez, Pedro
One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to the people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may incur difficulties when constrained to the sensors with which a robot can be equipped. Moreover, within the scope of interactive autonomous robots, there has been little evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances, and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. In addition, the performance of this system is evaluated and compared, taking into account the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.
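The Bayes-inference fusion referred to here reduces, in the simplest case of two independent Gaussian estimates (say, a microphone-array bearing and a camera bearing), to a precision-weighted average. A minimal sketch under that assumption, not the paper's actual model:

```python
def fuse_gaussian(mu_a, var_a, mu_v, var_v):
    """Bayesian fusion of two independent Gaussian estimates of the same
    quantity (e.g. a speaker's azimuth from audio and from vision).
    Each estimate is weighted by its precision (inverse variance);
    the fused variance is smaller than either input variance."""
    w_a = 1.0 / var_a
    w_v = 1.0 / var_v
    mu = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
    var = 1.0 / (w_a + w_v)
    return mu, var
```

Because the fused variance is always below that of either sensor alone, a robot combining audio and visual cues can localize a speaker more reliably than with either modality by itself, which is the comparison the abstract reports.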
Walker-Andrews, Arlene S.; Lennon, Elizabeth M.
Examines, in two experiments, 5-month-old infants' sensitivity to auditory-visual specification of distance and direction of movement. One experiment presented two films with soundtracks in either a match or mismatch condition; the second showed the two films side-by-side with a single soundtrack appropriate to one. Infants demonstrated visual…
Mazzarella, Elisabetta; Ramsey, Richard; Conson, Massimiliano; Hamilton, Antonia
Taking another person's viewpoint and making sense of their actions are key processes that guide social behavior. Previous neuroimaging investigations have largely studied these processes separately. The current study used functional magnetic resonance imaging to examine how the brain incorporates another person's viewpoint and actions into visual perspective judgments. Participants made a left-right judgment about the location of a target object from their own (egocentric) or an actor's visual perspective (altercentric). Actor location varied around a table and the actor was either reaching or not reaching for the target object. Analyses examined brain regions engaged in the egocentric and altercentric tasks, brain regions where response magnitude tracked the orientation of the actor in the scene and brain regions sensitive to the action performed by the actor. The blood oxygen level-dependent (BOLD) response in dorsomedial prefrontal cortex (dmPFC) was sensitive to actor orientation in the altercentric task, whereas the response in right inferior frontal gyrus (IFG) was sensitive to actor orientation in the egocentric task. Thus, dmPFC and right IFG may play distinct but complementary roles in visual perspective taking (VPT). Observation of a reaching actor compared to a non-reaching actor yielded activation in lateral occipitotemporal cortex, regardless of task, showing that these regions are sensitive to body posture independent of social context. By considering how an observed actor's location and action influence the neural bases of visual perspective judgments, the current study supports the view that multiple neurocognitive "routes" operate during VPT.
This study compared the performance on perspective-taking tasks of 8 congenitally blind children (mean age 13.5 years), using either haptic exploration or a vibrotactile prosthetic device, with the performance of 4 children having low vision using their limited visual abilities. The vibrotactile device improved perspective-taking performance…
Gori, Monica; Giuliana, Luana; Sandini, Giulio; Burr, David
It is still unclear how the visual system perceives accurately the size of objects at different distances. One suggestion, dating back to Berkeley's famous essay, is that vision is calibrated by touch. If so, we may expect different mechanisms involved for near, reachable distances and far, unreachable distances. To study how the haptic system…
Knowland, Victoria C. P.; Evans, Sam; Snell, Caroline; Rosen, Stuart
Purpose: The purpose of the study was to assess the ability of children with developmental language learning impairments (LLIs) to use visual speech cues from the talking face. Method: In this cross-sectional study, 41 typically developing children (mean age: 8 years 0 months, range: 4 years 5 months to 11 years 10 months) and 27 children with…
Couch, Richard A.
Background information about gender inequity is provided, and the assertion is made that educators must recognize that many of the problems females encounter are begun and perpetuated in the schools. Visual literacy is part of the change that schools must make in order to make greater strides toward gender equity. Two connections between visual…
Anthamatten, Peter; Wee, Bryan Shao-Chang; Korris, Erin
Objective: A great deal of scholarly work has examined the way that physical, social and cultural environments relate to children's health behaviour, particularly with respect to diet and exercise. While this work is critical, little research attempts to incorporate the views and perspectives of children themselves using visual methodologies.…
Sanchez, D.; Chamorro-Martinez, J.; Vila, M. A.
Discussion of multimedia libraries and the need for storage, indexing, and retrieval techniques focuses on the combination of computer vision and data mining techniques to model high-level concepts for image retrieval based on perceptual features of the human visual system. Uses fuzzy set theory to measure users' assessments and to capture users'…
Duncum, Anna J. F.; Mundy, Matthew E.
The body image concern (BIC) continuum ranges from a healthy and positive body image, to clinical diagnoses of abnormal body image, like body dysmorphic disorder (BDD). BDD and non-clinical, yet high-BIC participants have demonstrated a local visual processing bias, characterised by reduced inversion effects. To examine whether this bias is a potential marker of BDD, the visual processing of individuals across the entire BIC continuum was examined. Dysmorphic Concern Questionnaire (DCQ; quantified BIC) scores were expected to correlate with higher discrimination accuracy and faster reaction times of inverted stimuli, indicating reduced inversion effects (occurring due to increased local visual processing). Additionally, an induced global or local processing bias via Navon stimulus presentation was expected to alter these associations. Seventy-four participants completed the DCQ and upright-inverted face and body stimulus discrimination task. Moderate positive associations were revealed between DCQ scores and accuracy rates for inverted face and body stimuli, indicating a graded local bias accompanying increases in BIC. This relationship supports a local processing bias as a marker for BDD, which has significant assessment implications. Furthermore, a moderate negative relationship was found between DCQ score and inverted face accuracy after inducing global processing, indicating the processing bias can temporarily be reversed in high BIC individuals. Navon stimuli were successfully able to alter the visual processing of individuals across the BIC continuum, which has important implications for treating BDD. PMID:27003715
Yousif, Nada; Fu, Richard Z.; Abou-El-Ela Bourquin, Bilal; Bhrugubanda, Vamsee; Schultz, Simon R.
When processing sensory signals, the brain must account for noise, both noise in the stimulus and that arising from within its own neuronal circuitry. Dopamine receptor activation is known to enhance both visual cortical signal-to-noise ratio (SNR) and visual perceptual performance; however, it is unknown whether these two dopamine-mediated phenomena are linked. To assess this, we used single-pulse transcranial magnetic stimulation (TMS) applied to visual cortical area V5/MT to reduce the SNR focally and thus disrupt visual motion discrimination performance to visual targets located in the same retinotopic space. The hypothesis that dopamine receptor activation enhances perceptual performance by improving cortical SNR predicts that dopamine activation should antagonize TMS disruption of visual perception. We assessed this hypothesis via a double-blinded, placebo-controlled study with the dopamine receptor agonists cabergoline (a D2 agonist) and pergolide (a D1/D2 agonist) administered in separate sessions (separated by 2 weeks) in 12 healthy volunteers in a Williams balanced-order design. TMS degraded visual motion perception when the evoked phosphene and the visual stimulus overlapped in time and space in the placebo and cabergoline conditions, but not in the pergolide condition. This suggests that dopamine D1 or combined D1 and D2 receptor activation enhances cortical SNR to boost perceptual performance. That local visual cortical excitability was unchanged across drug conditions suggests the involvement of long-range intracortical interactions in this D1 effect. Because increased internal noise (and thus lower SNR) can impair visual perceptual learning, improving visual cortical SNR via D1/D2 agonist therapy may be useful in boosting rehabilitation programs involving visual perceptual training. SIGNIFICANCE STATEMENT In this study, we address the issue of whether dopamine activation improves visual perception despite increasing sensory noise in the visual cortex…
Janpol, Henry L.; Dilts, Rachel
This research explored whether viewing documentary films about the natural or built environment can exert a measurable influence on behaviors and perceptions. Different documentary films were viewed by subjects. One film emphasized the natural environment, while the other focused on the built environment. After viewing a film, a computer game…
Boerma, Inouk E.; Mol, Suzanne E.; Jolles, Jelle
The aim of this study was to examine the relationship between teacher perceptions and children's reading motivation, with specific attention to gender differences. The reading self-concept, task value, and attitude of 160 fifth and sixth graders were measured. Teachers rated each student's reading comprehension. Results showed that for boys,…
Creel, Sarah C.
Prior knowledge shapes our experiences, but which prior knowledge shapes which experiences? This question is addressed in the domain of music perception. Three experiments were used to determine whether listeners activate specific musical memories during music listening. Each experiment provided listeners with one of two musical contexts that was…
Howard, Sarah K.
Educational change, such as technology integration, involves risk. Teachers are encouraged to "take risks", but what risks they are asked to take and how do they perceive these risks? Developing an understanding of teachers' technology-related risk perceptions can help explain their choices and behaviours. This paper presents a way to…
Knowles, Kristen K; Little, Anthony C
In recent years, the perception of social traits in faces and voices has received much attention. Facial and vocal masculinity are linked to perceptions of trustworthiness; however, while feminine faces are generally considered to be trustworthy, vocal trustworthiness is associated with masculinized vocal features. Vocal traits such as pitch and formants have previously been associated with perceived social traits such as trustworthiness and dominance, but the link between these measurements and perceptions of cooperativeness has yet to be examined. In Experiment 1, cooperativeness ratings of male and female voices were examined against four vocal measurements: fundamental frequency (F0), pitch variation (F0-SD), formant dispersion (Df), and formant position (Pf). Feminine pitch traits (F0 and F0-SD) and masculine formant traits (Df and Pf) were associated with higher cooperativeness ratings. In Experiment 2, manipulated voices with feminized F0 were found to be more cooperative than voices with masculinized F0 among both male and female speakers, confirming our results from Experiment 1. Feminine pitch qualities may indicate an individual who is friendly and non-threatening, while masculine formant qualities may reflect an individual who is socially dominant or prestigious, and the perception of these associated traits may influence the perceived cooperativeness of the speakers.
Successful social behavior requires the accurate perception and interpretation of other peoples' actions. In the last decade, significant progress has been made in understanding how the human visual system analyzes bodily motion. Neurophysiological studies have identified two neural areas, the superior temporal sulcus (STS) and the premotor cortex, which play key roles in the visual perception of human movement. Patterns of neural activity in these areas are reflective of psychophysical measures of visual sensitivity to human movement. Both vary as a function of stimulus orientation and global stimulus structure. Human observers and STS responsiveness share some developmental similarities, as both exhibit sensitivities that become increasingly tuned for upright, human movement. Furthermore, the observer's own visual and motor experience with an action, as well as the social and emotional content of that action, influence behavioral measures of visual sensitivity and patterns of neural activity in the STS and premotor cortex. Finally, dysfunction of motor processes, such as hemiplegia, and dysfunction of social processes, such as autism, systematically impact visual sensitivity to human movement. In sum, a convergence of visual, motor, and social processes underlies our ability to perceive and interpret the actions of other people. WIREs Cogn Sci 2011, 2:68-78. DOI: 10.1002/wcs.88
van der Hoort, Björn; Ehrsson, H. Henrik
The size of our body influences the perceived size of the world so that objects appear larger to children than to adults. The mechanisms underlying this effect remain unclear. It has been difficult to dissociate visual rescaling of the external environment based on an individual’s visible body from visual rescaling based on a central multisensory body representation. To differentiate these potential causal mechanisms, we manipulated body representation without a visible body by taking advantage of recent developments in body representation research. Participants experienced the illusion of having a small or large invisible body while object-size perception was tested. Our findings show that the perceived size of test-objects was determined by the size of the invisible body (inverse relation), and by the strength of the invisible body illusion. These findings demonstrate how central body representation directly influences visual size perception, without the need for a visible body, by rescaling the spatial representation of the environment. PMID:27708344
Jaegle, Andrew; Ro, Tony
We examined the causal relationship between the phase of alpha oscillations (9-12 Hz) and conscious visual perception using rhythmic TMS (rTMS) while simultaneously recording EEG activity. rTMS of posterior parietal cortex at an alpha frequency (10 Hz), but not occipital or sham rTMS, both entrained the phase of subsequent alpha oscillatory activity and produced a phase-dependent change on subsequent visual perception, with lower discrimination accuracy for targets presented at one phase of the alpha oscillatory waveform than for targets presented at the opposite phase. By extrinsically manipulating the phase of alpha before stimulus presentation, we provide direct evidence that the neural circuitry in the parietal cortex involved with generating alpha oscillations plays a causal role in determining whether or not a visual stimulus is successfully perceived.
Chung, Jae-Moon; Ohnishi, Noboru
Animals are thought to develop the ability to interpret images captured on their retinas gradually from birth, without an external supervisor. We propose that visual function is acquired together with the development of hand-reaching and grasping operations, which are executed through active interaction with the environment. From the viewpoint that the hand teaches the eye, this paper shows how visual space perception develops in a simulated robot. The robot has a simplified human-like structure used for hand-eye coordination. The experimental results may validate this method as a description of how visual space perception develops in biological systems. In addition, the description offers a way to self-calibrate the vision of an intelligent robot in a learn-by-doing manner, without external supervision.
Agyei, Seth B.; van der Weel, F. R. (Ruud); van der Meer, Audrey L. H.
During infancy, smart perceptual mechanisms develop allowing infants to judge time-space motion dynamics more efficiently with age and locomotor experience. This emerging capacity may be vital to enable preparedness for upcoming events and to be able to navigate in a changing environment. Little is known about brain changes that support the development of prospective control and about processes, such as preterm birth, that may compromise it. As a function of perception of visual motion, this paper will describe behavioral and brain studies with young infants investigating the development of visual perception for prospective control. By means of the three visual motion paradigms of occlusion, looming, and optic flow, our research shows the importance of including behavioral data when studying the neural correlates of prospective control. PMID:26903908
Petroni, Agustin; Carbajal, M. Julia; Sigman, Mariano
The neurobiology of reaching has been extensively studied in human and non-human primates. However, the mechanisms that allow a subject to decide—without engaging in explicit action—whether an object is reachable are not fully understood. Some studies conclude that decisions near the reach limit depend on motor simulations of the reaching movement. Others have shown that the body schema plays a role in explicit and implicit distance estimation, especially after motor practice with a tool. In this study we evaluate the causal role of multisensory body representations in the perception of reachable space. We reasoned that if body schema is used to estimate reach, an illusion of the finger size induced by proprioceptive stimulation should propagate to the perception of reaching distances. To test this hypothesis we induced a proprioceptive illusion of extension or shrinkage of the right index finger while participants judged a series of LEDs as reachable or non-reachable without actual movement. Our results show that reach distance estimation depends on the illusory perceived size of the finger: illusory elongation produced a shift of reaching distance away from the body whereas illusory shrinkage produced the opposite effect. Combining these results with previous findings, we suggest that deciding if a target is reachable requires an integration of body inputs in high order multisensory parietal areas that engage in movement simulations through connections with frontal premotor areas. PMID:26110274
Zhou, Y H; Gao, J B; White, K D; Merk, I; Yao, K
Perceptual multistability, alternative perceptions of an unchanging stimulus, gives important clues to neural dynamics. The present study examined 56 perceptual dominance time series for a Necker cube stimulus, for ambiguous motion, and for binocular rivalry. We made histograms of the perceptual dominance times, based on 307 to 2478 responses per time series (median = 612), and compared these histograms to fitted gamma, lognormal, and Weibull distributions using the Kolmogorov-Smirnov goodness-of-fit test. In 40 of the 56 tested cases a lognormal distribution provided an acceptable fit to the histogram (in 24 cases it was the only fit). In 16 cases a gamma distribution, and in 11 cases a Weibull distribution, were acceptable, but never as the only fit in either case. All three distributions were acceptable in three cases, and none provided acceptable fits in 12 cases. Considering only the 16 cases in which a lognormal distribution was rejected (p < 0.05) revealed that minor adjustments to the fourth-moment term of the lognormal characteristic function restored good fits. These findings suggest that random fractal theory might provide insight into the underlying mechanisms of multistable perception.
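The fitting procedure described in this abstract can be sketched compactly: for a lognormal distribution, the maximum-likelihood estimates are simply the mean and standard deviation of the log-transformed dominance times, and the Kolmogorov-Smirnov statistic is the largest gap between the empirical and fitted CDFs. A minimal stdlib-only Python sketch, using synthetic data in place of the study's recorded time series (all parameter values are illustrative, not from the paper):

```python
import math
import random

def lognormal_cdf(x, mu, sigma):
    # CDF of a lognormal distribution, expressed via the error function
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def fit_and_ks(times):
    # MLE for a lognormal: mean and std of the log-transformed data
    logs = [math.log(t) for t in times]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / n)
    # KS statistic: max distance between empirical and fitted CDFs
    d = 0.0
    for i, t in enumerate(sorted(times)):
        f = lognormal_cdf(t, mu, sigma)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return mu, sigma, d

# Synthetic dominance times (seconds); 612 matches the study's median count
random.seed(1)
times = [random.lognormvariate(1.0, 0.5) for _ in range(612)]
mu, sigma, d = fit_and_ks(times)
```

For truly lognormal data the KS statistic D stays small; as a rough guide, the asymptotic critical value at p = 0.05 is about 1.36/sqrt(n), roughly 0.055 for n = 612 (smaller still when parameters are estimated from the data, as here).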
Goyal, Manu S; Hansen, Peter J; Blakemore, Colin B
When blind people touch Braille characters, blood flow increases in visual areas, leading to speculation that visual circuitry assists tactile discrimination in the blind. We tested this hypothesis in a functional magnetic resonance imaging study designed to reveal activation appropriate to the nature of tactile stimulation. In late-blind individuals, hMT/V5 and fusiform face area activated during visual imagery of moving patterns or faces. When they touched a doll's face, right fusiform face area was again activated. Equally, hMT/V5 was activated when objects moved over the skin. We saw no difference in hMT/V5 or fusiform face area activity during motion or face perception in the congenitally blind. We conclude that specialized visual areas, once established through visual experience, assist equivalent tactile identification tasks years after the onset of blindness.
van Laarhoven, Thijs; Keetels, Mirjam; Schakel, Lemmy; Vroomen, Jean
Individuals with developmental dyslexia (DD) may experience, besides reading problems, other speech-related processing deficits. Here, we examined the influence of visual articulatory information (lip-read speech) at various levels of background noise on auditory word recognition in children and adults with DD. We found that children with a documented history of DD have deficits in their ability to gain benefit from lip-read information that disambiguates noise-masked speech. We show with another group of adult individuals with DD that these deficits persist into adulthood. These deficits could not be attributed to impairments in unisensory auditory word recognition. Rather, the results indicate a specific deficit in audio-visual speech processing and suggest that impaired multisensory integration might be an important aspect of DD.
…have demonstrated that the orientations visually perceived as vertical (VPV) and horizontal (VPH) retain their relation although both are… There appeared to be an increase in accuracy in pointing to a target when knowledge of the distance to that target was no longer a factor… Ekstrom, R. B., French, J. W., Harman, H. H., & Dermen, D. (1995). Manual for Kit of Factor-Referenced Cognitive Tests. Educational Testing Service.
Coello, Yann; Danckert, James; Blangero, Annabelle; Rossetti, Yves
Visual illusions have been shown to affect perceptual judgements more so than motor behaviour, which was interpreted as evidence for a functional division of labour within the visual system. The dominant perception-action theory argues that perception involves a holistic processing of visual objects or scenes, performed within the ventral,…
Menda, Gil; Shamble, Paul S; Nitzany, Eyal I; Golden, James R; Hoy, Ronald R
Jumping spiders (Salticidae) are renowned for a behavioral repertoire that can seem more vertebrate, or even mammalian, than spider-like in character. This is made possible by a unique visual system that supports their stalking hunting style and elaborate mating rituals in which the bizarrely marked and colored appendages of males highlight their song-and-dance displays. Salticids perform these tasks with information from four pairs of functionally specialized eyes, providing a near 360° field of view and forward-looking spatial resolution surpassing that of all insects and even some mammals, processed by a brain roughly the size of a poppy seed. Salticid behavior, evolution, and ecology are well documented, but attempts to study the neurophysiological basis of their behavior had been thwarted by the pressurized nature of their internal body fluids, making typical physiological techniques infeasible and restricting all previous neural work in salticids to a few recordings from the eyes. We report the first survey of neurophysiological recordings from the brain of a jumping spider, Phidippus audax (Salticidae). The data include single-unit recordings in response to artificial and naturalistic visual stimuli. The salticid visual system is unique in that high-acuity and motion vision are processed by different pairs of eyes. We found nonlinear interactions between the principal and secondary eyes, which can be inferred from the emergence of spatiotemporal receptive fields. Ecologically relevant images, including prey-like objects such as flies, elicited bursts of excitation from single units.
Houtkamp, Roos; Roelfsema, Pieter R
The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some situations, but we demonstrate that there are also situations where Gestalt grouping becomes serial. We observe substantial time delays when image elements have to be grouped indirectly through a chain of local groupings. We call this chaining process incremental grouping and demonstrate that it can occur for only a single object at a time. We suggest that incremental grouping requires the gradual spread of object-based attention so that eventually all the object's parts become grouped explicitly by an attentional labeling process. Our findings inspire a new incremental grouping theory that relates the parallel, local grouping process to feedforward processing and the serial, incremental grouping process to recurrent processing in the visual cortex.
The purpose of the present study was to establish a method for objective measurements of visual readaptation after flash exposures and to define a model for measurements. Influences of target direction, luminance, and velocity on optokinetic nystagmus (OKN) were investigated under scotopic conditions. Visual readaptation was measured using OKN as an indicator of visual perception after exposure to a flash. The interval between the triggering of the flash and the reoccurrence of OKN was defined as the visual readaptation time (RAT). A Goldmann perimeter hemisphere was used for flash stimulation. A horizontally moving vertical grating projected inside the hemisphere was used as the OKN stimulus. Eye movements were recorded by DC electrooculography (EOG). The dependence of RAT on the dose of the flash, the wavelength of the flash, and the luminance of the OKN target were investigated. The precision of the measurement method was studied. This includes the analysis of the variance due to the experimental occasions, the repeated exposures, the sexes of the subjects, the methods for recognition of OKN, and the ways of visual adaptation before measurements. The contributions of retinal receptors and neural activity to RAT were investigated by electroretinography (ERG). The influences of target direction and luminance on binocular motion perception and OKN, as well as monocular OKN, were examined at various target velocities. The dependence of the frequency and amplitude of eye jerks during monocular OKN on target luminance and velocity was also examined. It was found that RAT increases with increasing doses of the flash or decreasing luminance of the grating. RAT is most extended after flashes near 520 nm. RAT does not differ between experimental occasions, between a manual and a semi-automatic method for recognition of OKN, between the sexes, or between goggle adaptation and ordinary dark adaptation. There is a reduction of RAT due to repeated flash exposures. The data…
van der Linden, Sander
Examining the conceptual relationship between personal experience, affect, and risk perception is crucial in improving our understanding of how emotional and cognitive process mechanisms shape public perceptions of climate change. This study is the first to investigate the interrelated nature of these variables by contrasting three prominent social-psychological theories. In the first model, affect is viewed as a fast and associative information processing heuristic that guides perceptions of risk. In the second model, affect is seen as flowing from cognitive appraisals (i.e., affect is thought of as a post-cognitive process). Lastly, a third, dual-process model is advanced that integrates aspects from both theoretical perspectives. Four structural equation models were tested on a national sample (N = 808) of British respondents. Results initially provide support for the “cognitive” model, where personal experience with extreme weather is best conceptualized as a predictor of climate change risk perception and, in turn, risk perception a predictor of affect. Yet, closer examination strongly indicates that at the same time, risk perception and affect reciprocally influence each other in a stable feedback system. It is therefore concluded that both theoretical claims are valid and that a dual-process perspective provides a superior fit to the data. Implications for theory and risk communication are discussed. © 2014 The Authors. European Journal of Social Psychology published by John Wiley & Sons, Ltd. PMID:25678723
Hachisu, Taku; Kajimoto, Hiroyuki
We investigated the effect of vibration feedback latency on material perception during a tapping interaction using a rod device. When a user taps a surface, the perception of the material can be modulated by providing a decaying sinusoidal vibration at the moment of contact. To achieve this haptic material augmentation on a touchscreen, a system that can measure the approach velocity and provide vibration with low latency is required. To this end, we developed a touchscreen system that is capable of measuring the approach velocity and providing vibration feedback via a rod device with a latency of 0.1 ms. Using this system, we experimentally measured the human detection threshold of the vibration feedback latency using a psychophysical approach. We further investigated the effect of latency on the perception of the material using a subjective questionnaire. Results show that the threshold was around 5.5 ms and that added latency made users perceive the surface as softer. In addition, users reported bouncing and denting sensations induced by the latency.
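Contact vibrations of the kind described here are commonly modeled in the haptics literature as an exponentially decaying sinusoid whose amplitude scales with approach velocity. A small illustrative Python sketch of such a waveform generator; the amplitude gain, frequency, and decay constants below are made-up values for illustration, not parameters from the study:

```python
import math

def contact_vibration(velocity, freq_hz=150.0, decay=60.0, gain=0.01,
                      sample_rate=8000, duration=0.05):
    # Decaying sinusoid: A(v) * exp(-B*t) * sin(2*pi*f*t)
    # freq_hz, decay, and gain are illustrative, not from the study
    n = int(sample_rate * duration)
    amp = gain * velocity  # amplitude proportional to approach velocity
    return [amp * math.exp(-decay * t) * math.sin(2 * math.pi * freq_hz * t)
            for t in (i / sample_rate for i in range(n))]

wave = contact_vibration(velocity=0.3)  # a 0.3 m/s tap
peak_early = max(abs(s) for s in wave[:len(wave) // 2])
peak_late = max(abs(s) for s in wave[len(wave) // 2:])
```

The study's 5.5 ms detection threshold implies that the first samples of such a waveform must reach the user within a few milliseconds of contact, which is presumably why the authors built a system with 0.1 ms feedback latency.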
Yun, Hyo-Soon; Kim, Eunhwi; Suh, Soon-Rim; Kim, Mi-Han; Kim, Hong
This study compared cognitive decline between diabetic and non-diabetic patients and identified the associations between diabetes and cognitive function, visual perception (VP), and visual-motor integration (VMI). Sixty elderly men (67.10 ± 1.65 yr) with and without diabetes (n = 30 in each group), surveyed by interview and questionnaire in South Korea, were enrolled in this study. The Mini-Mental State Examination, Korean version (MMSE-KC), the Motor-Free Visual Perception Test-Vertical Format (MVPT-V), and the Visual-Motor Integration, 3rd Revision (VMI-3R) were administered to all participants to evaluate cognitive function, VP, and VMI, respectively. The MMSE-KC score in the diabetes group was significantly lower than that of the non-diabetes group (P < 0.01). Participants in the diabetes group also had lower MVPT-V and VMI-3R scores than those in the non-diabetes group (P < 0.01 for each). In particular, the scores for figure-ground and visual memory among the subcategories of the MVPT-V were significantly lower in the diabetes group than in the non-diabetes group (P < 0.01). These findings indicate that the decline in cognitive function in individuals with diabetes may be greater than that in non-diabetics. In addition, the cognitive decline in older adults with diabetes might be associated with decreases in VP and VMI. In conclusion, we propose that VP and VMI will be helpful for monitoring changes in cognitive function in older adults with diabetes as part of the routine management of diabetes-induced cognitive decline.
Aspell, Jane Elizabeth; Heydrich, Lukas; Marillier, Guillaume; Lavanchy, Tom; Herbelin, Bruno; Blanke, Olaf
Prominent theories highlight the importance of bodily perception for self-consciousness, but it is currently not known whether bodily perception is based on interoceptive or exteroceptive signals or on integrated signals from these anatomically distinct systems. In the research reported here, we combined both types of signals by surreptitiously providing participants with visual exteroceptive information about their heartbeat: A real-time video image of a periodically illuminated silhouette outlined participants' (projected, "virtual") bodies and flashed in synchrony with their heartbeats. We investigated whether these "cardio-visual" signals could modulate bodily self-consciousness and tactile perception. We report two main findings. First, synchronous cardio-visual signals increased self-identification with and self-location toward the virtual body, and second, they altered the perception of tactile stimuli applied to participants' backs so that touch was mislocalized toward the virtual body. We argue that the integration of signals from the inside and the outside of the human body is a fundamental neurobiological process underlying self-consciousness.
Lindemann, Oliver; Bekkering, Harold
In 3 experiments, the authors investigated the bidirectional coupling of perception and action in the context of object manipulations and motion perception. Participants prepared to grasp an X-shaped object along one of its 2 diagonals and to rotate it in a clockwise- or a counterclockwise direction. Action execution had to be delayed until the appearance of a visual go signal, which induced an apparent rotational motion in either a clockwise- or a counterclockwise direction. Stimulus detection was faster when the direction of the induced apparent motion was consistent with the direction of the concurrently intended manual object rotation. Responses to action-consistent motions were also faster when the participants prepared the manipulation actions but signaled their stimulus detections with another motor effector (i.e., with a foot response). Taken together, the present study demonstrates a motor-visual priming effect of prepared object manipulations on visual motion perception, indicating a bidirectional functional link between action and perception beyond object-related visuomotor associations.
Contents: Physiological Research of the Visual System; Chemical Neuroresearch; Bioelectric… …supported by physiological research in which points on the surface of the primary visual cortex are stimulated and the resulting response is observed. Such… Introduction: Study of the mammalian visual perception process requires knowledge of the anatomical, physiological, and psychological aspects of the…
Sikl, Radovan; Simeček, Michal
People confined to a closed space live in a visual environment that differs from a natural open-space environment in several respects. The view is restricted to no more than a few meters, and nearby objects cannot be perceived relative to the position of a horizon. Thus, one might expect to find changes in visual space perception as a consequence of the prolonged experience of confinement. The subjects in our experimental study were participants of the Mars-500 project and spent nearly a year and a half isolated from the outside world during a simulated mission to Mars. The participants were presented with a battery of computer-based psychophysical tests examining their performance on various 3-D perception tasks, and we monitored changes in their perceptual performance throughout their confinement. Contrary to our expectations, no serious effect of the confinement on the crewmembers' 3-D perception was observed in any experiment. Several interpretations of these findings are discussed, including the possibilities that (1) the crewmembers' 3-D perception really did not change significantly, (2) changes in 3-D perception were manifested in the precision rather than the accuracy of perceptual judgments, and/or (3) the experimental conditions and the group sample were problematic.
This paper continues my February 2014 IS&T/SPIE convention exploration of the relationship between stereoscopic vision and consciousness (90141F-1). It was proposed then that by using stereoscopic imaging people may consciously experience, or see, what they are viewing and thereby become more aware of the way their brains manage and interpret visual information. Environmental imaging was suggested as a way to accomplish this. This paper is the result of further investigation, research, and follow-up imaging. A show of images resulting from this research allows viewers to experience for themselves the effects of stereoscopy on consciousness. Creating dye-infused aluminum prints while employing ChromaDepth® 3D glasses, I hope not only to raise awareness of visual processing but also to explore the differences and similarities between the artist and the scientist: art increases right-brain spatial consciousness, not only empirical thinking, while furthering the viewer's cognizance of the process of seeing. The artist must abandon preconceptions and expectations, despite what the evidence and experience may indicate, in order to see what is happening in the work and to allow it to develop in ways he or she could never anticipate. This process is then revealed to the viewer in a show of work. It is in the experiencing, not just the thinking, where insight is achieved. Directing the viewer's awareness during the experience using stereoscopic imaging allows for further understanding of the brain's function in the visual process. A cognitive transformation occurs, the proverbial "left/right brain shift," in order for viewers to "see" the space. Using what we know from recent brain research, these images will draw from certain parts of the brain when viewed in two dimensions and different ones when viewed stereoscopically, a shift, if one is looking for it, which is quite noticeable. People who have experienced these images in the context of examining their own…
Gratton, Caterina; Yousef, Sahar; Aarts, Esther; Wallace, Deanna L; D'Esposito, Mark; Silver, Michael A
The neuromodulator acetylcholine (ACh) modulates spatial integration in visual cortex by altering the balance of inputs that generate neuronal receptive fields. These cholinergic effects may provide a neurobiological mechanism underlying the modulation of visual representations by visual spatial attention. However, the consequences of cholinergic enhancement on visuospatial perception in humans are unknown. We conducted two experiments to test whether enhancing cholinergic signaling selectively alters perceptual measures of visuospatial interactions in human subjects. In Experiment 1, a double-blind placebo-controlled pharmacology study, we measured how flanking distractors influenced detection of a small contrast decrement of a peripheral target, as a function of target/flanker distance. We found that cholinergic enhancement with the cholinesterase inhibitor donepezil improved target detection, and modeling suggested that this was mainly due to a narrowing of the extent of facilitatory perceptual spatial interactions. In Experiment 2, we tested whether these effects were selective to the cholinergic system or would also be observed following enhancement of the related neuromodulators dopamine (DA) and norepinephrine (NE). Unlike cholinergic enhancement, DA (bromocriptine) and NE (guanfacine) manipulations did not improve performance or systematically alter the spatial profile of perceptual interactions between targets and distractors. These findings reveal mechanisms by which cholinergic signaling influences visual spatial interactions in perception and improves processing of a visual target among distractors, effects that are notably similar to those of spatial selective attention. Significance Statement: Acetylcholine influences how visual cortical neurons integrate signals across space, perhaps providing a neurobiological mechanism for the effects of visual selective attention. However, the influence of cholinergic enhancement on visuospatial perception remains
Takeuchi, Tatsuto; Yoshimoto, Sanae; Shimada, Yasuhiro; Kochiyama, Takanori; Kondo, Hirohito M
Recent studies have shown that interindividual variability can be a rich source of information regarding the mechanisms of human visual perception. In this study, we examined the mechanisms underlying interindividual variability in the perception of visual motion, one of the fundamental components of visual scene analysis, by measuring neurotransmitter concentrations using magnetic resonance spectroscopy. First, by psychophysically examining two types of motion phenomena, motion assimilation and motion contrast, we found that, following presentation of the same stimulus, some participants perceived motion assimilation, while others perceived motion contrast. Furthermore, we found that the concentration of the excitatory neurotransmitter glutamate-glutamine (Glx) in the dorsolateral prefrontal cortex (Brodmann area 46) was positively correlated with a participant's tendency toward motion assimilation over motion contrast; this effect was not observed in the visual areas. The concentration of the inhibitory neurotransmitter γ-aminobutyric acid had only a weak effect compared with that of Glx. We conclude that excitatory processes in a suprasensory area are important in determining an individual's tendency toward one of two antagonistically perceived visual motion phenomena. This article is part of the themed issue 'Auditory and visual scene analysis'.
Cole, Shana; Balcetis, Emily; Zhang, Sam
Regulatory conflict can emerge when people experience a strong motivation to act on goals but a conflicting inclination to withhold action because physical resources available, or "physiological potentials", are low. This study demonstrated that distance perception is biased in ways that theory suggests assists in managing this conflict.…
Hanslmayr, Simon; Volberg, Gregor; Wimber, Maria; Dalal, Sarang S; Greenlee, Mark W
Although we have the impression that visual information flows continuously from our sensory channels, recent studies indicate that this is likely not the case. Rather, we sample visual stimuli rhythmically, oscillating at 5-10 Hz. Electroencephalography (EEG) studies have demonstrated that this rhythmicity is reflected by the phase of ongoing brain oscillations in the same frequency. Theoretically, brain oscillations could underlie the rhythmic nature of perception by providing transient time windows for information exchange, but this question has not yet been systematically addressed. We recorded simultaneous EEG-fMRI while human participants performed a contour integration task and show that ongoing brain oscillations prior to stimulus onset predict functional connectivity between higher and lower level visual processing regions. Specifically, our results demonstrate that the phase of a 7 Hz oscillation prior to stimulus onset predicts perceptual performance and the bidirectional information flow between the left lateral occipital cortex and right intraparietal sulcus, as indicated by psychophysiological interaction and dynamic causal modeling. These findings suggest that human brain oscillations periodically gate visual perception at around 7 Hz by providing transient time windows for long-distance cortical information transfer. Such gating might be a general mechanism underlying the rhythmic nature of human perception.
Wang, Jian; Kreiser, Matthias; Wang, Lejing; Navab, Nassir; Fallavollita, Pascal
2D/3D image fusion applications are widely used in endovascular interventions. Complaints from interventionists about existing state-of-the-art visualization software usually relate to the strong compromise between 2D and 3D visibility or the lack of depth perception. In this paper, we investigate several concepts enabling improvement of current image fusion visualization found in the operating room. First, a contour-enhanced visualization is used to circumvent hidden information in the X-ray image. Second, an occlusion and depth color-coding scheme is considered to improve depth perception. To validate our visualization technique, both phantom and clinical data are considered. An evaluation is performed in the form of a questionnaire completed by 24 participants: ten clinicians and fourteen non-clinicians. Results indicate that the occlusion correction method provides 100% correctness when determining the true position of an aneurysm in X-ray. Further, integrating an RGB or RB color-depth encoding into the image fusion improves both depth perception and intuitiveness.
Baroncelli, L; Braschi, C; Maffei, L
A proper maturation of stereoscopic functions requires binocular visual experience and early disruption of sensory-driven activity can result in long-term or even permanent visual function impairment. Amblyopia is one paradigmatic case of visual system disorder, with early conditions of functional imbalance between the two eyes leading to severe deficits of visual acuity and depth-perception abilities. In parallel to the reduction of neural plasticity levels, the brain potential for functional recovery declines with age. Recent evidence has challenged this traditional view and experimental paradigms enhancing experience-dependent plasticity in the adult brain have been described. Here, we show that environmental enrichment (EE), a condition of increased cognitive and sensory-motor stimulation, restores experience-dependent plasticity of stereoscopic perception in response to sensory deprivation well after the end of the critical period and reinstates depth-perception abilities of adult amblyopic animals in the range of normal values. Our results encourage efforts in the clinical application of paradigms based on EE as an intervention strategy for treating amblyopia in adulthood.
Selinger, Lenka; Domínguez-Borràs, Judith; Escera, Carles
Emotionally negative stimuli boost perceptual processes. There is little known, however, about the timing of this modulation. The present study aims at elucidating the phasic effects of emotional processing on auditory processing within subsequent time-windows of visual emotional processing in humans. We recorded the electroencephalogram (EEG) while participants responded to a discrimination task of faces with neutral or fearful expressions. A brief complex tone, which subjects were instructed to ignore, was displayed concomitantly, but with different asynchronies respective to the image onset. Analyses of the N1 auditory event-related potential (ERP) revealed enhanced brain responses in the presence of fearful faces. Importantly, this effect occurred at picture-tone asynchronies of 100 and 150 ms, but not when these were displayed simultaneously, or at 50 ms or 200 ms asynchrony. These results confirm the existence of a fast-operating crossmodal effect of visual emotion on auditory processing, suggesting a phasic variation according to the time-course of emotional processing.
Krisch, I; Hosticka, B J
Microsystem technologies offer significant advantages in the development of neural prostheses. In the last two decades, it has become feasible to develop intelligent prostheses that are fully implantable into the human body with respect to functionality, complexity, size, weight, and compactness. Design and development require the collaboration of various disciplines, including physicians, engineers, and scientists. The retina implant system can be taken as one sophisticated example of a prosthesis that bypasses neural defects and enables direct electrical stimulation of nerve cells. This implantable visual microprosthesis helps blind patients return to a more normal course of life. The retina implant is intended for patients suffering from retinitis pigmentosa or macular degeneration. In this contribution, we focus on the epiretinal prosthesis and discuss topics such as system design, data and power transfer, fabrication, packaging, and testing. In detail, the system is based upon an implantable microelectrostimulator which is powered and controlled via a wireless inductive link. Microelectronic circuits for data encoding and stimulation are assembled on flexible substrates with an integrated electrode array. The implant system is encapsulated using parylene C and silicone rubber. Results from in vivo experiments demonstrate retinotopic activation of the visual cortex.
The pilot's perception and performance in flight simulators is examined. The areas investigated include: vestibular stimulation, flight management and man cockpit information interfacing, and visual perception in flight simulation. The effects of higher levels of rotary acceleration on response time to constant acceleration, tracking performance, and thresholds for angular acceleration are examined. Areas of flight management examined are cockpit display of traffic information, work load, synthetic speech call outs during the landing phase of flight, perceptual factors in the use of a microwave landing system, automatic speech recognition, automation of aircraft operation, and total simulation of flight training.
Remmel-Gehm, Mary T.
This report discusses the outcomes of a study that investigated how visual media would affect the communication skills of a 13-year-old nonverbal girl with cerebral palsy and whether the use of visual media would provide documentation of higher cognitive functioning. For the study, the subject used three different tools to add visual information…
Guo, Yulin; Liu, Fengfeng; Lu, Yuanan; Mao, Zongfu; Lu, Hanson; Wu, Yanyan; Chu, Yuanyuan; Yu, Lichen; Liu, Yisi; Ren, Meng; Li, Na; Chen, Xi; Xiang, Hao
The perception of air quality significantly affects public acceptance of the government's environmental policies. The aim of this research is to explore the relationship between parents' perception of air quality and scientific monitoring data, and to analyze the factors that affect parents' perceptions. Scientific data on air quality were obtained from Wuhan's environmental condition reports. One thousand parents were surveyed about their knowledge and perception of air quality. The scientific data show that the air quality of Wuhan follows a generally improving trend, while most participants believed that the air quality of Wuhan had deteriorated, indicating a significant gap between public perception and reality. On the individual level, respondents aged 40 or above (OR = 3.252; 95% CI: 1.170-9.040), with a higher educational level (college and above: OR = 7.598; 95% CI: 2.244-25.732), or with children in poor health (OR = 6.864; 95% CI: 2.212-21.302) had a much more negative perception of air quality. On the community level, industrial facilities, vehicles, and city construction had major effects on parents' perception of air quality. Our investigation provides baseline information for environmental policy researchers and policymakers regarding the public's perception and expectations of air quality, and can inform the drafting and enforcement of environmental policy.
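The odds ratios and 95% confidence intervals quoted above are standard logistic-regression outputs. As a minimal sketch (with illustrative numbers, not the study's fitted model), an OR and its Wald 95% CI can be recovered from a coefficient and its standard error:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression
    coefficient `beta` and its standard error `se`."""
    return (math.exp(beta),            # point estimate
            math.exp(beta - z * se),   # lower bound
            math.exp(beta + z * se))   # upper bound
```

For example, a coefficient of 1.18 with a standard error of 0.52 yields OR ≈ 3.25 with a CI of roughly 1.17 to 9.02, the shape of the individual-level estimates reported above.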
Swallow, Khena M.; Zacks, Jeffrey M.; Abrams, Richard A.
Memory for naturalistic events over short delays is important for visual scene processing, reading comprehension, and social interaction. The research presented here examined relations between how an ongoing activity is perceptually segmented into events and how those events are remembered a few seconds later. In several studies, participants…
Alshaer, Abdulaziz; Regenbrecht, Holger; O'Hare, David
Virtual Reality based driving simulators are increasingly used to train and assess users' abilities to operate vehicles in a controlled and safe way. For the development of those simulators it is important to identify and evaluate design factors affecting perception, behaviour, and driving performance. In an exemplary power wheelchair simulator setting we identified three immersion factors as potentially affecting perception and behaviour: display type (head-mounted display vs. monitor), the ability to freely change the field of view (FOV), and the visualisation of the user's avatar. In a study with 72 participants we found all three factors affected the participants' sense of presence in the virtual environment. In particular, the display type significantly affected both perceptual and behavioural measures, whereas FOV only affected behavioural measures. Our findings could guide future Virtual Reality simulator designers in evoking targeted user behaviours and perceptions.
stationary patterns need to be roughly proximate with respect to the visual field, they do not have to be alike. Grindley and Wilkinson (1953) asked...some long and constant duration. Riopelle and Bevan (1953, cited in Haines, 1975) examined absolute sensitivity at many points throughout the visual...the frequency at which the flicker appears to fuse. Ginsburg (1970) compiled a bibliography on CFF covering the period 1953 to 1968 and numbering 1293
Soap operas and dramas attract huge audiences and seek to reflect real life (Caughie 2000), yet little has been written about whether depictions of health problems in these productions colour public perceptions of illness. This article examines two portrayals of ageing and memory loss, one from BBC Radio 4's The Archers and one from a BBC television adaptation of Alan Bennett's dramatic monologue Talking Heads. It uses healthcare and media literature to compare their use of realism and assess their likely effect on public awareness. The implications of dramatic representations of memory loss for nurses who provide information and support to patients newly diagnosed with memory problems and their families are discussed.
Chiao, Chuan-Chin; Chubb, Charles; Hanlon, Roger T
We review recent research on the visual mechanisms of rapid adaptive camouflage in cuttlefish. These neurophysiologically complex marine invertebrates can camouflage themselves against almost any background, yet their ability to quickly (0.5-2 s) alter their body patterns on different visual backgrounds poses a vexing challenge: how to pick the correct body pattern amongst their repertoire. The ability of cuttlefish to change appropriately requires a visual system that can rapidly assess complex visual scenes and produce the motor responses-the neurally controlled body patterns-that achieve camouflage. Using specifically designed visual backgrounds and assessing the corresponding body patterns quantitatively, we and others have uncovered several aspects of scene variation that are important in regulating cuttlefish patterning responses. These include spatial scale of background pattern, background intensity, background contrast, object edge properties, object contrast polarity, object depth, and the presence of 3D objects. Moreover, arm postures and skin papillae are also regulated visually for additional aspects of concealment. By integrating these visual cues, cuttlefish are able to rapidly select appropriate body patterns for concealment throughout diverse natural environments. This sensorimotor approach of studying cuttlefish camouflage thus provides unique insights into the mechanisms of visual perception in an invertebrate image-forming eye.
Gibb, Randall William
Research has attempted to identify which visual cues are most salient for glide path (GP) performance during an approach to landing by a pilot flying in both rich and impoverished visual conditions. Numerous aviation accidents have occurred when a shallow GP was induced by a black hole illusion (BHI) or featureless-terrain environment during night visual approaches to landing. Identifying the landing surface's orientation, as well as size, distance, and depth cues, is critical for a safe approach to landing. Twenty pilots accomplished simulated approaches while exposed to manipulated visual cues of horizon, runway length/width (ratio), random terrain objects, and approach lighting system (ALS) configurations. Participants were assessed on their performance relative to a 3-degree GP in terms of precision, bias, and stability, in both degrees and altitude deviation, over a distance of 5 nm (9.3 km) assessed at equal intervals to landing. Runway ratio and distance from the runway were the most dominant aspects of the visual scene that differentiated pilot performance and mediated other visual cues. The horizon was most influential for the first two-thirds of the approach, and random terrain objects influenced the final portion. An ALS commonly used at airports today, mediated by a high runway ratio, induced shallow GPs; however, the worst GP performance, regardless of ratio, came from a combination ALS consisting of both side and approach lights. Pilot performance suggested a three-phase perceptual process, Assess-Act-React, used by pilots as they accumulated visual cues to guide their behavior. Perceptual learning demonstrated that despite recognizing the BHI approach, pilots confidently flew dangerously low but improved with practice, implying that visual spatial disorientation education and training would be effective if accomplished in flight simulators.
Kumar, Anita B.; Morrison, Steven J.
Ensemble conductors are often described as embodying the music. Researchers have determined that expressive gestures affect viewers' perceptions of conducted ensemble performances. This effect may be due, in part, to conductor gesture delineating and amplifying specific expressive aspects of music performances. The purpose of the present study was to determine if conductor gesture affected observers' focus of attention to contrasting aspects of ensemble performances. Audio recordings of two different music excerpts featuring two-part counterpoint (an ostinato paired with a lyric melody, and long chord tones paired with rhythmic interjections) were paired with video of two conductors. Each conductor used gesture appropriate to one or the other musical element (e.g., connected and flowing or detached and crisp) for a total of sixteen videos. Musician participants evaluated 8 of the excerpts for Articulation, Rhythm, Style, and Phrasing using four 10-point differential scales anchored by descriptive terms (e.g., disconnected to connected, and angular to flowing). Results indicated a relationship between gesture and listeners' evaluations of musical content. Listeners appear to be sensitive to the manner in which a conductor's gesture delineates musical lines, particularly as an indication of overall articulation and style. This effect was observed for the lyric melody and ostinato excerpt, but not for the chords and interjections excerpt. Therefore, this effect appears to be mitigated by the congruence of gesture to preconceptions of the importance of melodic over rhythmic material, of certain instrument timbres over others, and of length between onsets of active material. These results add to a body of literature that supports the importance of the visual component in the multimodal experience of music performance. PMID:27458425
Creel, Sarah C
Prior knowledge shapes our experiences, but which prior knowledge shapes which experiences? This question is addressed in the domain of music perception. Three experiments were used to determine whether listeners activate specific musical memories during music listening. Each experiment provided listeners with one of two musical contexts that was presented simultaneously with a melody. After a listener was familiarized with melodies embedded in contexts, the listener heard melodies in isolation and judged the fit of a final harmonic or metrical probe event. The probe event matched either the familiar (but absent) context or an unfamiliar context. For both harmonic (Experiments 1 and 3) and metrical (Experiment 2) information, exposure to context shifted listeners' preferences toward a probe matching the context that they had been familiarized with. This suggests that listeners rapidly form specific musical memories without explicit instruction, which are then activated during music listening. These data pose an interesting challenge for models of music perception which implicitly assume that the listener's knowledge base is predominantly schematic or abstract.
Fink, B; Matts, P J; Röder, S; Johnson, R; Burquest, M
Perception of age and health is critical in the judgement of attractiveness. The few studies conducted on the significance of apparent skin condition for human physical appearance have studied faces alone or isolated fields of facial skin in images. Little is known about whether perception of the face matches that of other body parts, or whether body skin affects overall age and attractiveness perception when presented in combination with facial skin. We hypothesized that independent presentation of female faces, chests and arms (including hands), cropped from a full face and upper body image, would result in significant differences in perception of age and attractiveness compared to the corresponding composite. Furthermore, we sought to investigate whether relatively young and attractive looking skin on selected, individual parts of the body affects overall perception. Digital photographs of 52 women aged 45-65 years were collected and processed to yield four derivative sets of images: one set showed the composite of all features, i.e. the face, the chest and the arms, whereas the other three were cropped carefully to show each part of the upper body independently. A total of 240 participants judged these images for perceived age and attractiveness. Our results showed significant differences in perception, with the chest and the arms being judged significantly younger than the face or composite image of the same women. Moreover, arm and chest images were perceived as more attractive than face and composite images. Finally, regression analysis indicated that differences between the perceived and chronological values of overall age perception could be predicted by age perception of the face and arms. These results continue to support the significance of facial age perception in assessments of a woman's age, but highlight that body skin also plays a role in the overall age impression.
Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min
The difficulty and limitations of small-target detection methods for high-resolution remote sensing data have been a recent research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper endeavors to construct a characterized model of information perception that exploits the fly's fast and accurate small-target detection under complex, varying natural environments. The proposed model forms a theoretical basis of small-target detection for high-resolution remote sensing data. After comparing the prevailing simulation mechanisms behind fly visual systems, we propose a fly-imitated visual-system method of information processing for high-resolution remote sensing data. A small-target detector and corresponding detection algorithm are designed by simulating the mechanisms of information acquisition, compression, and fusion of the fly visual system, the function of pool cells, and the character of nonlinear self-adaptation. Experiments verify the feasibility and rationality of the proposed small-target detection model and fly-imitated visual perception method.
Di Luca, M; Knörlein, B; Ernst, M O; Harders, M
Spring compliance is perceived by combining the sensed force exerted by the spring with the displacement caused by the action (sensed through vision and proprioception). We investigated the effect of delaying visual and force information with respect to proprioception to understand how visual-haptic perception of compliance is achieved. First, we confirm an earlier result that force delay increases perceived compliance. Furthermore, we find that perceived compliance decreases with a delay in the visual information. These effects of delay on perceived compliance would not be present if the perceptual system utilized all force-displacement information available during the interaction. Both delays generate a bias in compliance that is opposite in the loading and unloading phases of the interaction. To explain these findings, we propose that information obtained during the loading phase of the spring displacement is weighted more than information obtained during unloading. We confirm this hypothesis by showing that sensitivity to compliance during loading movements is much higher than during unloading movements. Moreover, we show that visual and proprioceptive information about the hand position is used for compliance perception depending on the sensitivity to compliance. Finally, by analyzing participants' movements we show that these two factors (loading/unloading and reliability) account for the change in perceived compliance due to visual and force delays.
Désage, Simon-Frédéric; Pitard, Gilles; Pillet, Maurice; Favrelière, Hugues; Maire, Jean-Luc; Frelin, Fabrice; Samper, Serge; Le Goïc, Gaëtan
The purpose of this research is to improve the detection and evaluation of aesthetic anomalies based on what is perceived by the human eye and on the 2006 CIE report.1 It is therefore important to define parameters able to discriminate surfaces in accordance with the perception of the human eye. Our starting point in assessing aesthetic anomalies is the geometric description defined by the ISO standard,2 i.e. translating the description of anomalies into perceptual terms about the impact of texture divergence. However, human controllers observe (detect) an aesthetic anomaly by its visual effect and interpret it through its geometric description. The research question is how to define generic parameters for discriminating aesthetic anomalies from enhanced information on visual texture, such as recent surface visual rendering approaches provide. We propose to use an approach from visual texture processing that quantifies spatial variations of pixels, translating changes in color, material, and relief. From a set of images taken under different angles of light, which gives us access to the surface appearance, we propose an approach going from visual effect to geometrical specifications, as the current standards have identified for aesthetic anomalies.
Geldof, C. J. A.; van Wassenaer, A. G.; de Kieviet, J. F.; Kok, J. H.; Oosterlaan, J.
A range of neurobehavioral impairments, including impaired visual perception and visual-motor integration, are found in very preterm born children, but reported findings show great variability. We aimed to aggregate the existing literature using meta-analysis, in order to provide robust estimates of the effect of very preterm birth on visual…
Poon, K W; Li-Tsang, C W P; Weiss, T P L; Rosenblum, S
This study aimed to investigate the effect of a computerized visual perception and visual-motor integration training program to enhance Chinese handwriting performance among children with learning difficulties, particularly those with handwriting problems. Participants were 26 primary-one children who were assessed by educational psychologists and occupational therapists to have handwriting difficulties. They were matched according to their age and then randomly assigned to either the control group or the experimental group. Children in the experimental group (n=13) received eight sessions of computerized visual perception and visual-motor integration training together with a home training program, while those in the control group (n=13) received only conventional handwriting training from teachers, which focused mainly on remedial handwriting exercises. Results from repeated-measures ANOVA revealed that children in the experimental group showed improvements in their visual perception skills as well as in their handwriting time. Both the "On Paper" time and "In Air" time of this group improved compared to the control group. However, no significant differences were found in visual-motor integration skill or handwriting legibility between the two groups after the intervention. This computerized training program focusing on visual perception and visual-motor integration training appeared to be effective in enhancing handwriting time among children with handwriting difficulties. However, the training program did not seem to improve the children's handwriting legibility.
Generating visual place cells (VPCs) is an important topic in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and the existing methods for generating VPCs, a model of generating visual place cells based on environment perception and a similarity measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similarity measurement, and recruitment of a new place cell. According to this process, a specific method for generating VPCs is presented. External reference landmarks are obtained based on local invariant characteristics of the image, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulations validate that the proposed method works. The firing characteristics of the generated VPCs are similar to those of biological place cells, and VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of the firing field (AFFF) and the firing-rate threshold (FRT). PMID:27597859
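The three-phase process described above (perceive, measure similarity, recruit) can be sketched as follows. The Gaussian-over-Euclidean-distance similarity and the FRT-based recruitment rule follow the abstract, but the function names, the feature representation, and the parameter values are hypothetical:

```python
import numpy as np

def similarity(view, stored, sigma=1.0):
    """Gaussian similarity over the Euclidean distance between a
    perceived landmark feature vector and a stored one."""
    d = np.linalg.norm(np.asarray(view, float) - np.asarray(stored, float))
    return float(np.exp(-d**2 / (2 * sigma**2)))

def fire_or_recruit(view, place_cells, sigma=1.0, frt=0.6):
    """Return the index of the firing place cell, or recruit a new
    cell when no stored cell exceeds the firing-rate threshold (FRT)."""
    rates = [similarity(view, c, sigma) for c in place_cells]
    if rates and max(rates) >= frt:
        return rates.index(max(rates))
    place_cells.append(list(view))      # recruit a new place cell
    return len(place_cells) - 1
```

Here sigma plays the role of the firing-field adjustment factor (AFFF): widening it makes more views fire the same cell, while raising the FRT makes recruitment more frequent.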
Jessen, Sarah; Kotz, Sonja A
Emotion perception naturally entails multisensory integration. It is also assumed that multisensory emotion perception is characterized by enhanced activation of brain areas implied in multisensory integration, such as the superior temporal gyrus and sulcus (STG/STS). However, most previous studies have employed designs and stimuli that preclude other forms of multisensory interaction, such as crossmodal prediction, leaving open the question whether classical integration is the only relevant process in multisensory emotion perception. Here, we used video clips containing emotional and neutral body and vocal expressions to investigate the role of crossmodal prediction in multisensory emotion perception. While emotional multisensory expressions increased activation in the bilateral fusiform gyrus (FFG), neutral expressions compared to emotional ones enhanced activation in the bilateral middle temporal gyrus (MTG) and posterior STS. Hence, while neutral stimuli activate classical multisensory areas, emotional stimuli invoke areas linked to unisensory visual processing. Emotional stimuli may therefore trigger a prediction of upcoming auditory information based on prior visual information. Such prediction may be stronger for highly salient emotional compared to less salient neutral information. Therefore, we suggest that multisensory emotion perception involves at least two distinct mechanisms; classical multisensory integration, as shown for neutral expressions, and crossmodal prediction, as evident for emotional expressions.
Norcia, Anthony M
Linking propositions have played an important role in refining our understanding of the relationship between neural activity and perception. Over the last 40 years, visual evoked potentials (VEPs) have been used in many different ways to address questions about the relationship between neural activity and perception. This review organizes and discusses this research within the linking-proposition framework developed by Davida Teller and her colleagues. A series of examples from the VEP literature illustrates each of the five classes of linking propositions originally proposed by Teller. The related concept of the bridge locus, the site at which neural activity can first be said to form the immediate substrate of perception, is discussed, and a suggestion is made that the concept be expanded to include an evolution over time and cortical area.
Niimi, Ryosuke; Watanabe, Katsumi
We investigated the effect of background scene on the human visual perception of depth orientation (i.e., azimuth angle) of three-dimensional common objects. Participants evaluated the depth orientation of objects. The objects were surrounded by scenes with an apparent axis of the global reference frame, such as a sidewalk scene. When a scene axis was slightly misaligned with the gaze line, object orientation perception was biased, as if the gaze line had been assimilated into the scene axis (Experiment 1). When the scene axis was slightly misaligned with the object, evaluated object orientation was biased, as if it had been assimilated into the scene axis (Experiment 2). This assimilation may be due to confusion between the orientation of the scene and object axes (Experiment 3). Thus, the global reference frame may influence object orientation perception when its orientation is similar to that of the gaze-line or object.
In this article, multichannel electroencephalogram (EEG) signals of artists and nonartists were analyzed during the performances of visual perception and mental imagery of paintings using cepstrum coefficients. Each of the calculated cepstrum coefficients and their parameters such as energy, average, standard deviation and entropy were separately used for distinguishing the two groups. It was found that a distinguishing coefficient might exist among the cepstrum coefficients, which could separate the two groups despite electrode placement. It was also observed that the two groups were distinguishable during the three states using the cepstrum coefficient parameters. However, separating the two groups was dependent on channel selection in this regard. The cepstrum coefficient parameters were found significantly lower for artists as compared to nonartists during the visual perception and the mental imagery, indicating a decreased average energy of EEG for artists. In addition, a similar significant decreasing trend in the cepstrum coefficient parameters was observed from occipital to frontal brain regions during the performances of the two cognitive tasks for the two groups, suggesting that visual perception and its mental imagery overlap in neuronal resources. The two groups were also classified using a neural gas classifier and a support vector machine classifier. The obtained average classification accuracies during the visual perception, the mental imagery, and at rest in the case of using the best selected distinguishable cepstrum coefficients were 76.87%, 77.5%, and 97.5%, respectively; however, a decrease in average recognition accuracy was found for classifying the two groups using the cepstrum coefficient parameters. PMID:28028496
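The cepstrum analysis described above can be illustrated with the standard real cepstrum, the inverse FFT of the log magnitude spectrum, together with the four summary parameters the article names. The article's exact cepstrum variant, coefficient count, and per-channel handling are not specified here, so those choices are assumptions.

```python
import numpy as np

def real_cepstrum(signal, n_coeffs=13):
    """Real cepstrum: inverse FFT of the log magnitude spectrum (a standard
    definition; the article's exact variant is an assumption)."""
    spectrum = np.fft.fft(signal)
    log_mag = np.log(np.abs(spectrum) + 1e-12)  # small floor avoids log(0)
    return np.fft.ifft(log_mag).real[:n_coeffs]

def cepstral_features(coeffs):
    """Summary parameters analogous to those used to separate the groups:
    energy, average, standard deviation, and entropy of the coefficients."""
    p = np.abs(coeffs) / (np.abs(coeffs).sum() + 1e-12)  # normalized for entropy
    return {
        "energy": float(np.sum(coeffs**2)),
        "average": float(np.mean(coeffs)),
        "std": float(np.std(coeffs)),
        "entropy": float(-np.sum(p * np.log(p + 1e-12))),
    }
```

In the article's setup, each EEG channel would yield one such feature vector per task state (perception, imagery, rest) before group comparison or classification.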
Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E
The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
Vassiou, Aikaterini; Mouratidis, Athanasios; Andreou, Eleni; Kafetsios, Konstantinos
Performance at school is affected not only by students' achievement goals but also by emotional exchanges among classmates and their teacher. In this study, we investigated relationships between students' achievement goals and emotion perception ability and class affect and performance. Participants were 949 Greek adolescent students in 49 classes…
Vandervaart, J. C.; Hosman, R. J. A. W.
A large number of roll-rate stimuli, covering rates from zero to plus or minus 25 deg/sec, were presented to subjects in random order at 2-sec intervals. Subjects made magnitude estimates of perceived roll-rate stimuli presented on a central display, on displays in the peripheral field of vision, or on all displays simultaneously. Responses were entered on a digital keyboard device, and stimulus exposure times were varied. The present experiment differs from the authors' earlier perception tasks in that the mean rate-perception error (and its standard deviation) was obtained as a function of stimulus magnitude, whereas the earlier experiments yielded only the mean absolute error magnitude. Moreover, in the present experiment all stimulus rates had an equal probability of occurrence, whereas the earlier tests featured a Gaussian stimulus probability density function. The results give a good illustration of the nonlinear functions relating the rate presented to the rate perceived by human observers or operators.
Llamas, Joseph M.
Two hundred sixty-two K-12 teachers, ranging from pre-service to experienced teachers and from elementary to high school, were surveyed regarding their perceptions of students based on gender, ethnicity, socioeconomic status, and behavior. Utilizing a five-point scale that surveyed teachers' responses to narratives of student stereotypes…
Panagiotaropoulos, Theofanis I; Kapoor, Vishal; Logothetis, Nikos K
The combination of electrophysiological recordings with ambiguous visual stimulation made possible the detection of neurons that represent the content of subjective visual perception and perceptual suppression in multiple cortical and subcortical brain regions. These neuronal populations, commonly referred to as the neural correlates of consciousness, are more likely to be found in the temporal and prefrontal cortices as well as the pulvinar, indicating that the content of perceptual awareness is represented with higher fidelity in higher-order association areas of the cortical and thalamic hierarchy, reflecting the outcome of competitive interactions between conflicting sensory information resolved in earlier stages. However, despite the significant insights into conscious perception gained through monitoring the activities of single neurons and small, local populations, the immense functional complexity of the brain arising from correlations in the activity of its constituent parts suggests that local, microscopic activity could only partially reveal the mechanisms involved in perceptual awareness. Rather, the dynamics of functional connectivity patterns on a mesoscopic and macroscopic level could be critical for conscious perception. Understanding these emergent spatio-temporal patterns could be informative not only for the stability of subjective perception but also for spontaneous perceptual transitions suggested to depend either on the dynamics of antagonistic ensembles or on global intrinsic activity fluctuations that may act upon explicit neural representations of sensory stimuli and induce perceptual reorganization. Here, we review the most recent results from local activity recordings and discuss the potential role of effective, correlated interactions during perceptual awareness.
Berman, Marc G.; Hout, Michael C.; Kardan, Omid; Hunter, MaryCarol R.; Yourganov, Grigori; Henderson, John M.; Hanayik, Taylor; Karimi, Hossein; Jonides, John
Previous research has shown that interacting with natural environments vs. more urban or built environments can have salubrious psychological effects, such as improvements in attention and memory. Even viewing pictures of nature vs. pictures of built environments can produce similar effects. A major question is: What is it about natural environments that produces these benefits? Problematically, there are many differing qualities between natural and urban environments, making it difficult to narrow down the dimensions of nature that may lead to these benefits. In this study, we set out to uncover visual features that related to individuals' perceptions of naturalness in images. We quantified naturalness in two ways: first, implicitly using a multidimensional scaling analysis and second, explicitly with direct naturalness ratings. The features most strongly associated with perceived naturalness were the density of contrast changes, the density of straight lines, the average color saturation, and the average hue diversity in the scene. We then trained a machine-learning algorithm to predict whether a scene was perceived as being natural or not based on these low-level visual features, and we could do so with 81% accuracy. We were thus able to reliably predict subjective perceptions of naturalness from objective low-level visual features. Our results can be used in future studies to determine if these features, which are related to naturalness, may also lead to the benefits attained from interacting with nature. PMID:25531411
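A minimal stand-in for the naturalness classifier is plain-numpy logistic regression over the four feature types named above. The study's actual algorithm and feature-extraction pipeline are not reproduced here; this sketch only shows the shape of the prediction problem.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Logistic regression by gradient ascent on the log-likelihood;
    a stand-in for whichever machine-learning algorithm the study used."""
    X = np.c_[np.ones(len(X)), X]          # prepend bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted P(natural)
        w += lr * X.T @ (y - p) / len(y)   # gradient step
    return w

def predict_natural(X, w):
    """1 = perceived natural, 0 = perceived built."""
    X = np.c_[np.ones(len(X)), X]
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
```

With feature columns such as (contrast-change density, straight-line density, mean saturation, hue diversity), a held-out accuracy comparable to the reported 81% would depend entirely on the real feature extraction, which is omitted here.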
Liu, Sheng; Angelaki, Dora E.
Visual and vestibular signals converge onto the dorsal medial superior temporal area (MSTd) of the macaque extrastriate visual cortex, which is thought to be involved in multisensory heading perception for spatial navigation. Peripheral otolith information, however, is ambiguous and cannot distinguish linear accelerations experienced during self-motion from those due to changes in spatial orientation relative to gravity. Here we show that, unlike peripheral vestibular sensors but similar to lobules 9 and 10 of the cerebellar vermis (nodulus and uvula), MSTd neurons respond selectively to heading and not to changes in orientation relative to gravity. In support of a role in heading perception, MSTd vestibular responses are also dominated by velocity-like temporal dynamics, which might optimize sensory integration with visual motion information. Unlike the cerebellar vermis, however, MSTd neurons also carry a spatial orientation-independent rotation signal from the semicircular canals, which could be useful in compensating for the effects of head rotation on the processing of optic flow. These findings show that vestibular signals in MSTd are appropriately processed to support a functional role in multisensory heading perception. PMID:19605631
Frank, David W.; Sabatinelli, Dean
Research has consistently revealed enhanced neural activation corresponding to attended cues coupled with suppression to unattended cues. This attention effect depends both on the spatial features of stimuli and internal task goals. However, a large majority of research supporting this effect involves circumscribed tasks that possess few ecologically relevant characteristics. By comparison, natural scenes have the potential to engage an evolved attention system, which may be characterized by supplemental neural processing and integration compared to mechanisms engaged during reduced experimental paradigms. Here, we describe recent animal and human studies of naturalistic scene viewing to highlight the specific impact of social and affective processes on the neural mechanisms of attention modulation. PMID:28265250
Otten, Marte; Seth, Anil K; Pinto, Yair
A growing body of research suggests that social contextual factors such as desires and goals, affective states and stereotypes can shape early perceptual processes. We suggest that a generative Bayesian approach towards perception provides a powerful theoretical framework to accommodate how such high-level social factors can influence low-level perceptual processes in their earliest stages. We review experimental findings that show how social factors shape the perception and evaluation of people, behaviour, and socially relevant objects or information. Subsequently, we summarize the generative view of perception within the 'Bayesian brain', and show how such a framework can account for the pervasive effects of top-down social knowledge on social cognition. Finally, we sketch the theoretical and experimental implications of social predictive perception, indicating new directions for research on the effects and neurocognitive underpinnings of social cognition.
Maloney, Erin A.; Risko, Evan F.; Ansari, Daniel; Fugelsang, Jonathan
Individuals with mathematics anxiety have been found to differ from their non-anxious peers on measures of higher-level mathematical processes, but not simple arithmetic. The current paper examines differences between mathematics anxious and non-mathematics anxious individuals in more basic numerical processing using a visual enumeration task.…
Gil-da-Costa, Ricardo; Braun, Allen; Lopes, Marco; Hauser, Marc D; Carson, Richard E; Herscovitch, Peter; Martin, Alex
Non-human primates produce a diverse repertoire of species-specific calls and have rich conceptual systems. Some of their calls are designed to convey information about concepts such as predators, food, and social relationships, as well as the affective state of the caller. Little is known about the neural architecture of these calls, and much of what we do know is based on single-cell physiology from anesthetized subjects. By using positron emission tomography in awake rhesus macaques, we found that conspecific vocalizations elicited activity in higher-order visual areas, including regions in the temporal lobe associated with the visual perception of object form (TE/TEO) and motion (superior temporal sulcus) and storing visual object information into long-term memory (TE), as well as in limbic (the amygdala and hippocampus) and paralimbic regions (ventromedial prefrontal cortex) associated with the interpretation and memory-encoding of highly salient and affective material. This neural circuitry strongly corresponds to the network shown to support representation of conspecifics and affective information in humans. These findings shed light on the evolutionary precursors of conceptual representation in humans, suggesting that monkeys and humans have a common neural substrate for representing object concepts.
Beets, I A M; Rösler, F; Fiehler, K
Few studies have reported direct effects of motor learning on visual perception, especially when using novel movements for the motor system. Atypical motor behaviors that violate movement constraints provide an excellent opportunity to study action-to-perception transfer. In our study, we passively trained blindfolded participants on movements violating the 2/3 power law. Before and after motor training, participants performed a visual discrimination task in which they decided whether two consecutive movements were same or different. For motor training, we randomly assigned the participants to two motor training groups or a control group. The motor training group experienced either a weak or a strong elliptic velocity profile on a circular trajectory that matched one of the visual test stimuli. The control group was presented with linear trajectories unrelated to the viewed movements. After each training session, participants actively reproduced the movement to assess motor learning. The group trained on the strong elliptic velocity profile reproduced movements with increasing elliptic velocity profiles while circular geometry remained constant. Furthermore, both training groups improved in visual discrimination ability for the learned movement as well as for highly similar movements. Participants in the control group, however, did not show any improvements in the visual discrimination task nor did participants who did not acquire the trained movement. The present results provide evidence for a transfer from action to perception which generalizes to highly related movements and depends on the success of motor learning. Moreover, under specific conditions, it seems to be possible to acquire movements deviating from the 2/3 power law.
Nawrot, Mark; Stroyan, Keith
One of vision's most important functions is specification of the layout of objects in the 3D world. While the static optical geometry of retinal disparity explains the perception of depth from binocular stereopsis, we propose a new formula to link the pertinent dynamic geometry to the computation of depth from motion parallax. Mathematically, the ratio of retinal image motion (motion) and smooth pursuit of the eye (pursuit) provides the necessary information for the computation of relative depth from motion parallax. We show that this could have been obtained with the approaches of Nakayama and Loomis [Nakayama, K., & Loomis, J. M. (1974). Optical velocity patterns, velocity-sensitive neurons, and space perception: A hypothesis. Perception, 3, 63-80] or Longuet-Higgins and Prazdny [Longuet-Higgins, H. C., & Prazdny, K. (1980). The interpretation of a moving retinal image. Proceedings of the Royal Society of London Series B, 208, 385-397] by adding pursuit to their treatments. Results of a psychophysical experiment show that changes in the motion/pursuit ratio have a much better relationship to changes in the perception of depth from motion parallax than do changes in motion or pursuit alone. The theoretical framework provided by the motion/pursuit law provides the quantitative foundation necessary to study this fundamental visual depth perception ability.
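To first order, the motion/pursuit law reads relative depth from the ratio of retinal image motion to pursuit eye-movement rate. The toy computation below uses only this linearized reading; the published formula contains higher-order terms that are omitted here, and the fixation distance is an illustrative value.

```python
def motion_pursuit_depth(d_theta, d_alpha, f):
    """First-order motion/pursuit law: relative depth is approximated by
    fixation distance times the ratio of retinal motion (d_theta) to
    pursuit rate (d_alpha). The linearization is a simplifying assumption."""
    return f * d_theta / d_alpha

# Doubling retinal motion at a fixed pursuit rate doubles the recovered depth:
f = 0.5                                    # fixation distance in metres (illustrative)
near = motion_pursuit_depth(0.01, 0.10, f)  # small motion/pursuit ratio
far = motion_pursuit_depth(0.02, 0.10, f)   # ratio doubled
```

This monotone dependence on the ratio, rather than on motion or pursuit alone, is what the psychophysical result above supports.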
Costall, A P
Representational theories of perception postulate an isolated and autonomous "subject" set apart from its real environment, and then go on to invoke processes of mental representation, construction, or hypothesizing to explain how perception can nevertheless take place. Although James Gibson's most conspicuous contribution has been to challenge representational theory, his ultimate concern was the cognitivism which now prevails in psychology. He was convinced that the so-called cognitive revolution merely perpetuates, and even promotes, many of psychology's oldest mistakes. This review article considers Gibson's final statement of his "ecological" alternative to cognitivism (Gibson, 1979). It is intended not as a complete account of Gibson's alternative, however, but primarily as an appreciation of his critical contribution. Gibson's sustained attempt to counter representational theory served not only to reveal the variety of arguments used in support of this theory, but also to expose the questionable metaphysical assumptions upon which they rest. In concentrating upon Gibson's criticisms of representational theory, therefore, this paper aims to emphasize the point of his alternative scheme and to explain some of the important concerns shared by Gibson's ecological approach and operant psychology. PMID:6699538
Nummenmaa, Lauri; Hyönä, Jukka; Calvo, Manuel G
We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by a predefined target scene. The affective task involved saccading toward an unpleasant or pleasant scene, and the semantic task involved saccading toward a scene containing an animal. Both affective and semantic target scenes could be reliably categorized in less than 220 ms, but semantic categorization was always faster than affective categorization. This finding was replicated with singly, foveally presented scenes and manual responses. In comparison with foveal presentation, extrafoveal presentation slowed down the categorization of affective targets more than that of semantic targets. Exposure threshold for accurate categorization was lower for semantic information than for affective information. Superordinate-, basic-, and subordinate-level semantic categorizations were faster than affective evaluation. We conclude that affective analysis of scenes cannot bypass object recognition. Rather, semantic categorization precedes and is required for affective evaluation.
Damnjanovic, Vesna; Jednak, Sandra; Mijatovic, Ivana
The purpose of this research paper is to identify the factors affecting the effectiveness of Moodle from the students' perspective. The research hypotheses derived from the suggested extended Seddon model have been empirically validated using the responses to a survey on e-learning usage among 255 users. We tested the model across higher education…
Keita, Akilah Dulin; Casazza, Krista; Thomas, Olivia; Fernandez, Jose R.
Objective: The primary purpose of this study was to determine if perceived neighborhood disorder affected dietary quality within a multiethnic sample of children. Design: Children were recruited through the use of fliers, wide-distribution mailers, parent magazines, and school presentations from June 2005 to December 2008. Setting:…
Ungerleider, Leslie G.; Bell, Andrew H.
The ability to rapidly and accurately recognize visual stimuli represents a significant computational challenge. Yet, despite such complexity, the primate brain manages this task effortlessly. How it does so remains largely a mystery. The study of visual perception and object recognition was once limited to investigations of brain-damaged individuals or lesion experiments in animals. However, in the last 25 years, new methodologies, such as functional neuroimaging and advances in electrophysiological approaches, have provided scientists with the opportunity to examine this problem from new perspectives. This review highlights how some of these recent technological advances have contributed to the study of visual processing and where we now stand with respect to our understanding of neural mechanisms underlying object recognition. PMID:20971130
Neri, Peter; Levi, Dennis M
Electrophysiological recordings have established that motion and disparity signals are jointly encoded by subpopulations of neurons in visual cortex. However, the question of whether these neurons play a perceptual role has proven challenging and remains open. To answer this question we combined two powerful psychophysical techniques: perceptual adaptation and reverse correlation. Our results provide a detailed picture of how visual information about motion and disparity is processed by human observers, and how this processing is modified by prolonged sensory stimulation. We were able to isolate two perceptual components: a separable component, supported by separate motion and disparity signals, and an inseparable joint component, supported by motion and disparity signals that are concurrently represented at the level of the same neural mechanism. Both components are involved in the perception of stimuli containing motion and disparity information in line with the known existence of corresponding neuronal subpopulations in visual cortex.
Singhal, Anthony; Monaco, Simona; Kaufman, Liam D; Culham, Jody C
Behavioral and neuropsychological research suggests that delayed actions rely on different neural substrates than immediate actions; however, the specific brain areas implicated in the two types of actions remain unknown. We used functional magnetic resonance imaging (fMRI) to measure human brain activation during delayed grasping and reaching. Specifically, we examined activation during visual stimulation and action execution separated by an 18-s delay interval in which subjects had to remember an intended action toward the remembered object. The long delay interval enabled us to unambiguously distinguish visual, memory-related, and action responses. Most strikingly, we observed reactivation of the lateral occipital complex (LOC), a ventral-stream area implicated in visual object recognition, and early visual cortex (EVC) at the time of action. Importantly this reactivation was observed even though participants remained in complete darkness with no visual stimulation at the time of the action. Moreover, within EVC, higher activation was observed for grasping than reaching during both vision and action execution. Areas in the dorsal visual stream were activated during action execution as expected and, for some, also during vision. Several areas, including the anterior intraparietal sulcus (aIPS), dorsal premotor cortex (PMd), primary motor cortex (M1) and the supplementary motor area (SMA), showed sustained activation during the delay phase. We propose that during delayed actions, dorsal-stream areas plan and maintain coarse action goals; however, at the time of execution, motor programming requires re-recruitment of detailed visual information about the object through reactivation of (1) ventral-stream areas involved in object perception and (2) early visual areas that contain richly detailed visual representations, particularly for grasping.
Romero-Hall, E.; Watson, G. S.; Adcock, A.; Bliss, J.; Adams Tufts, K.
This research assessed how emotive animated agents in a simulation-based training affect the performance outcomes and perceptions of the individuals interacting in real time with the training application. A total of 56 participants consented to complete the study. The material for this investigation included a nursing simulation in which…
Dewitt, Barry; Fischhoff, Baruch; Davis, Alexander; Broomell, Stephen B.
Lay judgments of environmental risks are central to both immediate decisions (e.g., taking shelter from a storm) and long-term ones (e.g., building in locations subject to storm surges). Using methods from quantitative psychology, we provide a general approach to studying lay perceptions of environmental risks. As a first application of these methods, we investigate a setting where lay decisions have not taken full advantage of advances in natural science understanding: tornado forecasts in the US and Canada. Because official forecasts are imperfect, members of the public must often evaluate the risks on their own, by checking environmental cues (such as cloud formations) before deciding whether to take protective action. We study lay perceptions of cloud formations, demonstrating an approach that could be applied to other environmental judgments. We use signal detection theory to analyse how well people can distinguish tornadic from non-tornadic clouds, and multidimensional scaling to determine how people make these judgments. We find that participants (N = 400 recruited from Amazon Mechanical Turk) have heuristics that generally serve them well, helping participants to separate tornadic from non-tornadic clouds, but which also lead them to misjudge the tornado risk of certain cloud types. The signal detection task revealed confusion regarding shelf clouds, mammatus clouds, and clouds with upper- and mid-level tornadic features, which the multidimensional scaling task suggested was the result of participants focusing on the darkness of the weather scene and the ease of discerning its features. We recommend procedures for training (e.g., for storm spotters) and communications (e.g., tornado warnings) that will reduce systematic misclassifications of tornadicity arising from observers’ reliance on otherwise useful heuristics.
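The sensitivity analysis above rests on the standard signal detection index d' = z(hit rate) - z(false-alarm rate). A minimal implementation follows; the log-linear correction for extreme rates is one common convention and is an assumption, not necessarily the one the authors used.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(H) - z(F), with a log-linear correction
    (add 0.5 to each count, 1 to each total) so rates of 0 or 1 stay finite."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(h) - z(f)
```

In the tornadic-cloud task, "hits" would be tornadic clouds judged tornadic and "false alarms" non-tornadic clouds judged tornadic; d' near zero marks the confusable categories (e.g., shelf and mammatus clouds).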
Chen, Zhongzhou; Gladding, Gary
Visual representations play a critical role in teaching physics. However, since we do not have a satisfactory understanding of how visual perception impacts the construction of abstract knowledge, most visual representations used in instructions are either created based on existing conventions or designed according to the instructor's intuition, which leads to a significant variance in their effectiveness. In this paper we propose a cognitive mechanism based on grounded cognition, suggesting that visual perception affects understanding by activating "perceptual symbols": the basic cognitive unit used by the brain to construct a concept. A good visual representation activates perceptual symbols that are essential for the construction of the represented concept, whereas a bad representation does the opposite. As a proof of concept, we conducted a clinical experiment in which participants received three different versions of a multimedia tutorial teaching the integral expression of electric potential. The three versions were only different by the details of the visual representation design, only one of which contained perceptual features that activate perceptual symbols essential for constructing the idea of "accumulation." On a following post-test, participants receiving this version of tutorial significantly outperformed those who received the other two versions of tutorials designed to mimic conventional visual representations used in classrooms.
Leeds, Daniel D; Tarr, Michael J
The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional magnetic resonance imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new fMRI protocol in which visual stimuli are selected in real time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm3 brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) real-time estimation of cortical responses to stimuli is reasonably consistent; and 3) search behavior was acceptably robust to delays in stimulus display and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for the continuing study of localized neural selectivity, both for visual object representation and beyond.
Su, Yi-Huang; Jonikaitis, Donatas
The coupling between sensory and motor processes has been established in various scenarios: for example, the perception of auditory rhythm entails an audiomotor representation of the sounds. Similarly, visual action patterns can also be represented via a visuomotor transformation. In this study, we tested the hypothesis that the visual motor information, such as embedded in a coherent motion flow, can interact with the perception of a motor-related aspect in auditory rhythm: the tempo. In the first two experiments, we employed an auditory tempo judgment task where participants listened to a standard auditory sequence while concurrently watching visual stimuli of different motion information, after which they judged the tempo of a comparison sequence related to the standard. In Experiment 1, we found that the same auditory tempo was perceived as faster when it was accompanied by accelerating visual motion than by non-motion luminance change. In Experiment 2, we compared the perceived auditory tempo among three visual motion conditions, increase in speed, decrease in speed, and no speed change, and found the corresponding bias in judgment of auditory tempo: faster than it was, slower than it was, and no bias. In Experiment 3, the perceptual bias induced by the change in motion speed was consistently reflected in the tempo reproduction task. Taken together, these results indicate that between a visual spatiotemporal and an auditory temporal stimulation, the embedded motor representations from each can interact across modalities, leading to a spatial-to-temporal bias. This suggests that the perceptual process in one modality can incorporate concurrent motor information from cross-modal sensory inputs to form a coherent experience.
Swallow, Khena M; Zacks, Jeffrey M; Abrams, Richard A
Memory for naturalistic events over short delays is important for visual scene processing, reading comprehension, and social interaction. The research presented here examined relations between how an ongoing activity is perceptually segmented into events and how those events are remembered a few seconds later. In several studies, participants watched movie clips that presented objects in the context of goal-directed activities. Five seconds after an object was presented, the clip paused for a recognition test. Performance on the recognition test depended on the occurrence of perceptual event boundaries. Objects that were present when an event boundary occurred were better recognized than other objects, suggesting that event boundaries structure the contents of memory. This effect was strongest when an object's type was tested but was also observed for objects' perceptual features. Memory also depended on whether an event boundary occurred between presentation and test; this variable produced complex interactive effects that suggested that the contents of memory are updated at event boundaries. These data indicate that perceptual event boundaries have immediate consequences for what, when, and how easily information can be remembered.
O'Sullivan, Brita M.; Harm, Deborah L.; Reschke, Millard F.; Wood, Scott J.
The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Previous studies suggest that multisensory integration is critical for discriminating linear accelerations arising from tilt and translation head motion. Visual input is especially important at low frequencies, where canal input is declining. The NASA Tilt Translation Device (TTD) was designed to recreate postflight orientation disturbances by exposing subjects to matching tilt self-motion with conflicting visual surround translation. Previous studies have demonstrated that brief exposures to pitch tilt with fore-aft visual surround translation produced changes in compensatory vertical eye movement responses, postural equilibrium, and motion sickness symptoms. Adaptation appeared greatest with visual scene motion leading (versus lagging) the tilt motion, and the adaptation time constant appeared to be approximately 30 min. The purpose of this study was to compare motion perception when the visual surround translation was in-phase versus out-of-phase with pitch tilt. The in-phase stimulus presented the visual surround motion one would experience if the linear acceleration was due to fore-aft self-translation within a stationary surround, while the out-of-phase stimulus had the visual scene motion leading the tilt by 90 deg, as previously used. The tilt stimuli in these conditions were asymmetrical, ranging from an upright orientation to 10 deg pitch back. Another objective of the study was to compare motion perception with the in-phase stimulus when the tilts were asymmetrical relative to upright (0 to 10 deg back) versus symmetrical (10 deg forward to 10 deg back). Twelve subjects (6M, 6F, 22-55 yrs) were tested during 3 sessions separated by at least one week. During each of the three sessions (out-of-phase asymmetrical, in-phase asymmetrical, in-phase symmetrical), subjects were exposed to visual surround translation
Rosenblatt, Steven David; Crane, Benjamin Thomas
A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). For the visual motion stimuli, inertial motion perception was shifted in the
Evers, Kris; Noens, Ilse; Steyaert, Jean; Wagemans, Johan
Background: Children with an autism spectrum disorder (ASD) are known to have an atypical visual perception, with deficits in automatic Gestalt formation and an enhanced processing of visual details. In addition, they are sometimes found to have difficulties in emotion processing. Methods: In three experiments, we investigated whether 7-to-11-year…
Roberson, Debi; Pak, Hyensou; Hanley, J. Richard
In this study we demonstrate that Korean (but not English) speakers show Categorical perception (CP) on a visual search task for a boundary between two Korean colour categories that is not marked in English. These effects were observed regardless of whether target items were presented to the left or right visual field. Because this boundary is…
Rüsseler, Jascha; Ye, Zheng; Gerth, Ivonne; Szycik, Gregor R; Münte, Thomas F
Developmental dyslexia is a specific deficit in reading and spelling that often persists into adulthood. In the present study, we used slow event-related fMRI and independent component analysis to identify brain networks involved in the perception of audio-visual speech in a group of adult readers with dyslexia (RD) and a group of fluent readers (FR). Participants saw a video of a female speaker saying a disyllabic word. In the congruent condition, audio and video input were identical, whereas in the incongruent condition the two inputs differed. Participants had to respond to occasionally occurring animal names. The independent component analysis (ICA) identified several components that were differentially modulated in FR and RD. Two of these components, including fusiform gyrus and occipital gyrus, showed less activation in RD compared to FR, possibly indicating a deficit in extracting the face information needed to integrate auditory and visual information in natural speech perception. A further component centered on the superior temporal sulcus (STS) also exhibited less activation in RD compared to FR. This finding is corroborated by the univariate analysis, which shows less activation in STS for RD compared to FR. These findings suggest a general impairment in the recruitment of audiovisual processing areas in dyslexia during the perception of natural speech.
Humans move to music spontaneously, and this sensorimotor coupling underlies musical rhythm perception. The present research proposed that, based on common action representation, different metrical levels as in auditory rhythms could emerge visually when observing structured dance movements. Participants watched a point-light figure performing basic steps of Swing dance cyclically in different tempi, whereby the trunk bounced vertically at every beat and the limbs moved laterally at every second beat, yielding two possible metrical periodicities. In Experiment 1, participants freely identified a tempo of the movement and tapped along. While some observers only tuned to the bounce and some only to the limbs, the majority tuned to one level or the other depending on the movement tempo, which was also associated with individuals' preferred tempo. In Experiment 2, participants reproduced the tempo of leg movements by four regular taps, and showed a slower perceived leg tempo with than without the trunk bouncing simultaneously in the stimuli. This mirrors previous findings of an auditory 'subdivision effect', suggesting the leg movements were perceived as beat while the bounce as subdivisions. Together these results support visual metrical perception of dance movements, which may employ similar action-based mechanisms to those underpinning auditory rhythm perception.
Moss, Heather E; Samelson, Monica; Mohan, Girish; Jiang, Qin Li
The afferent visual system may be affected by neuro-degeneration in amyotrophic lateral sclerosis (ALS) based on observations of visual function impairment and retinal inclusions on histopathology in ALS patients. To test the hypothesis that visual acuity is impaired in ALS, we compared three measures of visual acuity in ALS patients (n = 25) attending a multidisciplinary ALS clinic and age matched control subjects (n = 25). Bilateral monocular and binocular visual acuities were assessed using high contrast (black letters on white background) and low contrast (2.5%, 1.25% grey letters on white background) visual acuity charts under controlled lighting conditions following refraction. Binocular summation was calculated as the difference between binocular and best monocular acuity scores. There were no associations between binocular or monocular high contrast visual acuity or low contrast visual acuity and amyotrophic lateral sclerosis diagnosis (generalized estimating equation models accounting for age). Binocular summation was similar in both amyotrophic lateral sclerosis and control subjects. There was a small magnitude association between increased duration of ALS symptoms and reduced 1.25% low contrast visual acuity. This study does not confirm prior observations of impaired visual acuity in patients with amyotrophic lateral sclerosis and does not support this particular measure of visual function for use in broad scale assessment of visual pathway involvement in ALS patients.
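The abstract defines binocular summation as the difference between the binocular acuity score and the best monocular score. A minimal sketch of that arithmetic (the chart scores below are hypothetical, not the study's data):

```python
def binocular_summation(binocular_score, left_score, right_score):
    """Binocular summation as defined above: binocular acuity score
    minus the better (best) monocular score. Positive values mean the
    two eyes together outperform the better eye alone."""
    return binocular_score - max(left_score, right_score)

# Hypothetical letter-chart scores (letters read correctly)
print(binocular_summation(62, 58, 55))  # → 4 (4-letter summation)
```

Under this definition, a value near zero, as reported for both ALS and control groups, indicates no measurable binocular advantage over the better eye.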
Kowalski, Ireneusz M.; Domagalska, Małgorzata; Szopa, Andrzej; Dwornik, Michał; Kujawa, Jolanta; Stępień, Agnieszka; Śliwiński, Zbigniew
Introduction Central nervous system damage in early life results in both quantitative and qualitative abnormalities of psychomotor development. Late sequelae of these disturbances may include visual perception disorders which not only affect the ability to read and write but also generally influence the child's intellectual development. This study sought to determine whether a central coordination disorder (CCD) in early life treated according to Vojta's method with elements of the sensory integration (S-I) and neuro-developmental treatment (NDT)/Bobath approaches affects development of visual perception later in life. Material and methods The study involved 44 participants aged 15-16 years, including 19 diagnosed with moderate or severe CCD in the neonatal period, i.e. during the first 2-3 months of life, with diagnosed mild degree neonatal encephalopathy due to perinatal anoxia, and 25 healthy people without a history of developmental psychomotor disturbances in the neonatal period. The study tool was a visual perception IQ test comprising 96 graphic tasks. Results The study revealed equal proportions of participants (p < 0.05) defined as very skilled (94-96), skilled (91-94), average (71-91), poor (67-71), and very poor (0-67) in both groups. These results mean that adolescents with a history of CCD in the neonatal period did not differ with regard to the level of visual perception from their peers who had not demonstrated psychomotor development disorders in the neonatal period. Conclusions Early treatment of children with CCD affords a possibility of normalising their psychomotor development early enough to prevent consequences in the form of cognitive impairments in later life. PMID:23185199
McCane, Sara Jean
The Motor-Free Visual Perception Test: Third edition (MVPT-3; Colarusso & Hammill, 2003) purports to measure overall visual perceptual ability. Task responses require no motor ability, eliminating the effect of motor performance on the overall visual perception score. The test authors suggested that this MVPT-3 characteristic allows for its…
Villamor, Remedios R; Ross, Carolyn F
Wine is a complex alcoholic beverage. The wine matrix or the components that are present in the wine play an important role in the perceived aroma and flavor of the wine. The wine matrix is composed of two fractions, the nonvolatile fraction, which includes ethanol (in liquid phase), polyphenolic compounds, proteins, and carbohydrates, and the volatile fraction, which incorporates flavor and aroma compounds. Interactions among these compounds may arise through various mechanisms, thus affecting the sensory and chemical properties of the wine. The main focus of this review is to highlight recent research on wine component interactions and their effects on perceived aroma in the wine. An overview of the wine impact odorants and their determination using sensory and chemical methods is also provided in this paper.
Discusses the prevalence of people with visual impairment and trends affecting prevalence, including increased overall populations and a growth in the older population, greater ability to preserve lives of high-risk populations, improved fitness, medical advances in prevention, expanding role of computers among other increasing visual demands, and…
Berger, Christopher C.; Ehrsson, H. Henrik
Can what we imagine hearing change what we see? Whether imagined sensory stimuli are integrated with external sensory stimuli to shape our perception of the world has only recently begun to come under scrutiny. Here, we made use of the cross-bounce illusion in which an auditory stimulus presented at the moment two passing objects meet promotes the perception that the objects bounce off rather than cross by one another to examine whether the content of imagined sound changes visual motion perception in a manner that is consistent with multisensory integration. The results from this study revealed that auditory imagery of a sound with acoustic properties typical of a collision (i.e., damped sound) promoted the bounce-percept, but auditory imagery of the same sound played backwards (i.e., ramped sound) did not. Moreover, the vividness of the participants’ auditory imagery predicted the strength of this imagery-induced illusion. In a separate experiment, we ruled out the possibility that changes in attention (i.e., sensitivity index d′) or response bias (response bias index c) were sufficient to explain this effect. Together, these findings suggest that this imagery-induced multisensory illusion reflects the successful integration of real and imagined cross-modal sensory stimuli, and more generally, that what we imagine hearing can change what we see. PMID:28071707
This project consists of two parts. In Part 1, well logs, other well data, drilling, and production data for the Pioneer Field in the southern San Joaquin Valley of California were obtained, assembled, and input to a commercial relational database manager. These data are being used in PC-based geologic mapping, evaluation, and visualization software programs to produce 2-D and 3-D representations of the reservoir geometry, facies and subfacies, stratigraphy, porosity, oil saturation, and other measured and model parameters. Petrographic and petrophysical measurements made on samples from Pioneer Field, including core, cuttings and liquids, are being used to calibrate the log suite. In Part 2, these data sets are being used to develop algorithms to correlate log response to geologic and engineering measurements. Rock alteration due to interactions with hot fluids is being quantitatively modeled and used to predict the reservoir response if the rock were subjected to thermally enhanced oil recovery (TEOR).
Maloney, Erin A; Risko, Evan F; Ansari, Daniel; Fugelsang, Jonathan
Individuals with mathematics anxiety have been found to differ from their non-anxious peers on measures of higher-level mathematical processes, but not simple arithmetic. The current paper examines differences between mathematics anxious and non-mathematics anxious individuals in more basic numerical processing using a visual enumeration task. This task allows for the assessment of two systems of basic number processing: subitizing and counting. Mathematics anxious individuals, relative to non-mathematics anxious individuals, showed a deficit in the counting but not in the subitizing range. Furthermore, working memory was found to mediate this group difference. These findings demonstrate that the problems associated with mathematics anxiety exist at a level more basic than would be predicted from the extant literature.
Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W. Tecumseh
Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups. PMID:22688636
Reichelt, Stephan; Häussler, Ralf; Fütterer, Gerald; Leister, Norbert
Over the last decade, various technologies for visualizing three-dimensional (3D) scenes on displays have been technologically demonstrated and refined, among them stereoscopic, multi-view, integral imaging, volumetric, and holographic types. Most of the current approaches utilize the conventional stereoscopic principle. But they all suffer from an inherent conflict between vergence and accommodation, since scene depth cannot be physically realized but only feigned by displaying two views of different perspective on a flat screen and delivering them to the corresponding left and right eye. This mismatch requires the viewer to override the physiologically coupled oculomotor processes of vergence and eye focus, which may cause visual discomfort and fatigue. This paper discusses the depth cues in human visual perception relevant to both image quality and visual comfort of direct-view 3D displays. We concentrate our analysis especially on near-range depth cues, compare visual performance and depth-range capabilities of stereoscopic and holographic displays, and evaluate potential depth limitations of 3D displays from a physiological point of view.
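The vergence-accommodation conflict in stereoscopic displays can be quantified geometrically: accommodation stays at the screen distance while vergence follows the simulated depth. A rough sketch of that geometry (the 63 mm interpupillary distance and the viewing distances are assumed example values, not figures from the paper):

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Vergence angle (degrees) for fixating a point at distance_m,
    assuming an interpupillary distance of ipd_m."""
    return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))

# On a stereoscopic display at 0.6 m, the eyes accommodate to the
# screen (1/0.6 ≈ 1.67 D), but a virtual object simulated at 0.3 m
# drives vergence to a much larger angle:
screen, simulated = 0.6, 0.3
print(vergence_angle_deg(screen), vergence_angle_deg(simulated))
# ≈ 6.0 and 12.0 degrees, respectively
```

The gap between these two angles at a fixed accommodation distance is one simple way to express the oculomotor mismatch the paper attributes to flat-screen stereoscopy.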
Freud, Erez; Ganel, Tzvi; Avidan, Galia; Gilaie-Dotan, Sharon
According to the two visual systems model, the cortical visual system is segregated into a ventral pathway mediating object recognition, and a dorsal pathway mediating visuomotor control. In the present study we examined whether the visual control of action could develop normally even when visual perceptual abilities are compromised from early childhood onward. Using his fingers, LG, an individual with a rare developmental visual object agnosia, manually estimated (perceptual condition) the width of blocks that varied in width and length (but not in overall size), or simply picked them up across their width (grasping condition). LG's perceptual sensitivity to target width was profoundly impaired in the manual estimation task compared to matched controls. In contrast, the sensitivity to object shape during grasping, as measured by maximum grip aperture (MGA), the time to reach the MGA, the reaction time and the total movement time were all normal in LG. Further analysis, however, revealed that LG's sensitivity to object shape during grasping emerged at a later time stage during the movement compared to controls. Taken together, these results demonstrate a dissociation between action and perception of object shape, and also point to a distinction between different stages of the grasping movement, namely planning versus online control. Moreover, the present study implies that visuomotor abilities can develop normally even when perceptual abilities developed in a profoundly impaired fashion.
Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.
We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by…
The purpose of this study is to examine the perceptions of children in preschool education with regard to the value of affection in the pictures they draw. The study involved 199 children aged 60 months old or above. The descriptive research method was used and data were collected with the draw-and-explain technique. During the collection of the…
Jelfs, Anne; Colbourn, Chris
Discusses the use of communication and information technology (C&IT) in higher education in the United Kingdom and describes research that examined student perceptions of using C&IT for a virtual seminar series in psychology. Identified student learning approaches within the group and how it affected their adoption or rejection of the…
Mapuranga, Barbra; Musingafi, Maxwell C. C.; Zebron, Shupikai
Some educators argue that entry standards are the most important determinants of successful completion of a university programme; others maintain that non-academic factors must also be considered. In this study we sought to investigate open and distance learning students' perceptions of the factors affecting academic performance and successful…
This study examined different factors affecting the perceptions of barriers in academic achievement of Latino K-12 students. The study used data from 1,508 participants who identified themselves as being of Hispanic or Latino heritage in the 2004 National Survey of Latinos: Education, compiled by the Pew Hispanic Center between August 7 and…
Cho, Hyeon; Yoo, Jeong-Ju; Johnson, Kim K. P.
Counterfeiting is a serious problem facing several industries, including the medical, agricultural, and apparel industries (Bloch, Bush, & Campbell, 1993). The authors investigated whether ethical viewpoints affect perceptions of the morality of particular shopping behaviors, attitudes toward counterfeit products, and intentions to purchase such…
Regan, D; Murray, T J; Silver, R
Seven multiple sclerosis patients were cooled and four heated, but evoked potential delay changed in only five out of 11 experiments. Control limits were set by cooling eight and heating four control subjects. One patient gave anomalous results in that although heating degraded perceptual delay and visual acuity, and depressed the sine wave grating MTF, double-flash resolution was improved. An explanation is proposed in terms of the pattern of axonal demyelination. The medium frequency flicker evoked potential test seems to be a less reliable means of monitoring the progress of demyelination in multiple sclerosis patients than is double-flash campimetry or perceptual delay campimetry, although in some situations the objectivity of the evoked potential test would be advantageous. PMID:599356
Terpstra, Teun; Lindell, Michael K; Gutteling, Jan M
People's risk perceptions are generally regarded as an important determinant of their decisions to adjust to natural hazards. However, few studies have evaluated how risk communication programs affect these risk perceptions. This study evaluates the effects of a small-scale flood risk communication program in the Netherlands, consisting of workshops and focus group discussions. The effects on the workshop participants' (n = 24) and focus group participants' (n = 16) flood risk perceptions were evaluated in a pretest-posttest control group (n = 40) design that focused on two mechanisms of attitude change: direct personal experience and attitude polarization. We expected that (H1) workshop participants would show greater shifts in their flood risk perceptions compared with control group participants and that (H2) focus groups would be more likely to produce the conditions for attitude polarization (shifts toward more extreme attitudinal positions after group discussion). However, the results provide only modest support for these hypotheses, perhaps because of a mismatch between the sessions' contents and the risk perception measures. An important contribution of this study is that it examined risk perception data by both conventional tests of the mean differences and tests for attitude polarization. Moreover, the possibility that attitude polarization could cause people to confirm their preexisting (hazard) beliefs could have important implications for risk communication.
Clarke, Aaron M.; Herzog, Michael H.; Francis, Gregory
Experimentalists tend to classify models of visual perception as being either local or global, and involving either feedforward or feedback processing. We argue that these distinctions are not as helpful as they might appear, and we illustrate these issues by analyzing models of visual crowding as an example. Recent studies have argued that crowding cannot be explained by purely local processing, but that instead, global factors such as perceptual grouping are crucial. Theories of perceptual grouping, in turn, often invoke feedback connections as a way to account for their global properties. We examined three types of crowding models that are representative of global processing models, and two of which employ feedback processing: a model based on Fourier filtering, a feedback neural network, and a specific feedback neural architecture that explicitly models perceptual grouping. Simulations demonstrate that crucial empirical findings are not accounted for by any of the models. We conclude that empirical investigations that reject a local or feedforward architecture offer almost no constraints for model construction, as there are an uncountable number of global and feedback systems. We propose that the identification of a system as being local or global and feedforward or feedback is less important than the identification of a system's computational details. Only the latter information can provide constraints on model development and promote quantitative explanations of complex phenomena. PMID:25374554
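One of the crowding models examined above is "a model based on Fourier filtering." As a rough illustration of what such a model computes, the sketch below band-pass filters a stimulus image in the frequency domain; the image, the filter band, and the frequency units are hypothetical placeholders, not the parameters of the models the authors actually simulated.

```python
import numpy as np

def bandpass_filter(image, low, high):
    """Keep only spatial frequencies whose radius lies in [low, high) cycles/image."""
    f = np.fft.fftshift(np.fft.fft2(image))          # centered 2-D spectrum
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)        # distance from DC component
    mask = (radius >= low) & (radius < high)         # annular pass band
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# Hypothetical stimulus: white noise stands in for a crowded display.
stimulus = np.random.default_rng(1).normal(size=(64, 64))
filtered = bandpass_filter(stimulus, low=2, high=8)
```

Because the annular mask discards most of the noise energy outside the pass band, the filtered image retains only a coarse, low-to-mid frequency version of the stimulus.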
Domingues, Diana; Miosso, Cristiano J.; Rodrigues, Suélia F.; Silva Rocha Aguiar, Carla; Lucena, Tiago F.; Miranda, Mateus; Rocha, Adson F.; Raskar, Ramesh
Our proposal in Bioart and Biomedical Engineering for affective esthetics focuses on the expanded sensorium and investigates problems regarding enactive systems. These systems enhance the sensorial experiences and amplify kinesthesia by adding the sensations that are formed in response to the physical world, which aesthetically constitutes the principle of synaesthesia. In this paper, we also present enactive systems inside the CAVE, configuring compelling experiences in data landscapes and human affective narratives. The interaction occurs through the acquisition, data visualization and analysis of several synchronized physiological signals, to which the landscapes respond and provide immediate feedback, according to the detected participants' actions and the intertwined responses of the environment. The signals we use to analyze the human states include the electrocardiography (ECG) signal, the respiratory flow, the galvanic skin response (GSR) signal, plantar pressures, the pulse signal and others. Each signal is collected by using a specifically designed dedicated electronic board, with reduced dimensions, so it does not interfere with normal movements, according to the principles of transparent technologies. Also, the electronic boards are implemented in a modular approach, so they are independent, and can be used in many different desired combinations, and at the same time provide synchronization between the collected data.
Ho, Yun-Xian; Landy, Michael S.; Maloney, Laurence T.
We examined visual estimation of surface roughness using random computer-generated three-dimensional (3D) surfaces rendered under a mixture of diffuse lighting and a punctate source. The angle between the tangent to the plane containing the surface texture and the direction to the punctate source was varied from 50 to 70 degrees across lighting conditions. Observers were presented with pairs of surfaces under different lighting conditions and indicated which 3D surface appeared rougher. Surfaces were viewed either in isolation or in scenes with added objects whose shading, cast shadows and specular highlights provided information about the spatial distribution of illumination. All observers perceived surfaces to be markedly rougher with decreasing illuminant angle. Performance in scenes with added objects was no closer to constant than that in scenes without added objects. We identified four novel cues that are valid cues to roughness under any single lighting condition but that are not invariant under changes in lighting condition. We modeled observers’ deviations from roughness constancy as a weighted linear combination of these “pseudo-cues” and found that they account for a substantial amount of observers’ systematic deviations from roughness constancy with changes in lighting condition. PMID:16881794
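The final modeling step above, a weighted linear combination of "pseudo-cues" fitted to observers' deviations from roughness constancy, amounts to an ordinary least-squares fit. The sketch below shows that fit in miniature; the cue values, weights, and trial count are hypothetical placeholders, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical design matrix: four pseudo-cue values measured on each trial.
cues = rng.normal(size=(n_trials, 4))

# Assumed underlying weights used only to simulate observer deviations.
true_w = np.array([0.8, -0.3, 0.5, 0.1])
deviations = cues @ true_w + rng.normal(scale=0.05, size=n_trials)

# Recover the cue weights by ordinary least squares.
weights, residuals, rank, _ = np.linalg.lstsq(cues, deviations, rcond=None)
```

With enough trials relative to the noise, the fitted `weights` closely recover the simulated ones, mirroring how the pseudo-cue weights would be estimated from observers' judgments.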
This project will provide a detailed example, based on a field trial, of how to evaluate a field for EOR operations utilizing data typically available in an older field which has undergone primary development. The approach will utilize readily available, affordable PC-based computer software and analytical services. This study will illustrate the steps involved in: (1) setting up a relational database to store geologic, well-log, engineering, and production data, (2) integration of data typically available for oil and gas fields with predictive models for reservoir alteration, and (3) linking these data and models with modern computer software to provide 2-D and 3-D visualizations of the reservoir and its attributes. The techniques are being demonstrated through a field trial at Pioneer Field, which produces from the Monterey Formation and is a candidate for thermal EOR. Technical progress is summarized for the following tasks: (1) project administration and management; (2) data collection; (3) data analysis and measurement; (4) modeling; and (5) technology transfer.
Yamada, Yuki; Yamani, Yusuke
Perceived objects automatically potentiate afforded action. Object affordances also facilitate perception of such objects, and this occurrence is known as the affordance effect. This study examined whether object affordances facilitate the initial visual processing stage, or perceptual entry processes, using the temporal order judgment task. The onset of the graspable (right-handled) coffee cup was perceived earlier than that of the less graspable (left-handled) cup for right-handed participants. The affordance effect was eliminated when the coffee cups were inverted, which presumably conveyed less affordance information. These results suggest that objects preattentively potentiate the perceptual entry processes in response to their affordances. PMID:27698991
Barrera, Alejandra; Laschi, Cecilia
Anticipation of sensory consequences of actions is critical for the predictive control of movement that explains most of our sensory-motor behaviors. Numerous neuroscientific studies in humans suggest evidence of anticipatory mechanisms based on internal models. Several robotic implementations of predictive behaviors have been inspired by those biological mechanisms in order to achieve adaptive agents. This paper provides an overview of such neuroscientific and robotic evidences; a high-level architecture of sensory-motor coordination based on anticipatory visual perception and internal models is then introduced; and finally, the paper concludes by discussing the relevance of the proposed architecture within the context of current research in humanoid robotics.
Koldewyn, Kami; Hanus, Patricia; Balas, Benjamin
One critical component of understanding another’s mind is the perception of “life” in a face. However, little is known about the cognitive and neural mechanisms underlying this perception of animacy. Here, using a visual adaptation paradigm, we ask whether face animacy is (1) a basic dimension of face perception and (2) supported by a common neural mechanism across distinct face categories defined by age and species. Observers rated the perceived animacy of adult human faces before and after adaptation to (1) adult faces, (2) child faces, and (3) dog faces. When testing the perception of animacy in human faces, we found significant adaptation to both adult and child faces, but not dog faces. We did, however, find significant adaptation when morphed dog images and dog adaptors were used. Thus, animacy perception in faces appears to be a basic dimension of face perception that is species-specific, but not constrained by age categories. PMID:24323739
Fahrenfort, Johannes J; Snijders, Tineke M; Heinen, Klaartje; van Gaal, Simon; Scholte, H Steven; Lamme, Victor A F
The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning.
Wilson, Christopher J.; Soranzo, Alessandro
Recent proliferation of available virtual reality (VR) tools has seen increased use in psychological research. This is due to a number of advantages afforded over traditional experimental apparatus such as tighter control of the environment and the possibility of creating more ecologically valid stimulus presentation and response protocols. At the same time, higher levels of immersion and visual fidelity afforded by VR do not necessarily evoke presence or elicit a “realistic” psychological response. The current paper reviews some current uses for VR environments in psychological research and discusses some ongoing questions for researchers. Finally, we focus on the area of visual perception, where both the advantages and challenges of VR are particularly salient. PMID:26339281
Gall, Carolin; Geier, Jens-Stefan; Sabel, Bernhard A; Kasten, Erich
21 subjects (mean age 28.4 +/- 10.9, M +/- SD) without any damage to the visual system were examined with computer-based campimetric tests of near-threshold stimulus detection, during which an artificial tunnel vision was induced. Campimetry was performed in four trials in randomized order using a within-subjects design: (1) classical music, (2) Techno music, (3) music for relaxation, and (4) no music. Results were slightly better in all music conditions. Performance was best when subjects were listening to Techno music. The average increase of correctly recognized stimuli and fixation controls amounted to 3%. To check the stability of the effects, 9 subjects were tested three times. A moderating influence of personality traits and habits of listening to music was tested but could not be found. We conclude that music has at least no negative influence on performance in the campimetric measurement. Reasons for the positive effects of music can be seen in a general increase of vigilance and a modulation of perceptual thresholds.
Pons, Ferran; Andreu, Llorenc; Sanz-Torrent, Monica; Buil-Legaz, Lucia; Lewkowicz, David J.
Speech perception involves the integration of auditory and visual articulatory information and, thus, requires the perception of temporal synchrony between this information. There is evidence that children with Specific Language Impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component followed the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the speech processing difficulties of children with SLI also involve difficulty integrating the auditory and visual aspects of speech perception. PMID:22874648
Berthoz, A.; Pavard, B.; Young, L. R.
The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 sec and short-term adaptation has been shown. The dynamic range of the visual analyzer as judged by frequency analysis is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential although not independent role of vision in self motion perception.
Abbasi, Irum Saeed
The influence of neuroticism on stress perception and its associated negative affect is explored in a quasi-experimental repeated measure study. The study involves manipulating the stress perception and affect of a high N group (n = 24) and a low N group (n = 28) three times: first, through exposure to neutral stimuli; second, through exposure to a laboratory stressor; third, through exposure to positive stimuli. The results reveal that after exposure to neutral stimuli, there is a significant difference in the baseline Perceived Stress Scores (PSS) (p = .005) and Negative Affect (NA) scores (p = .001) of the two groups. During the stress task, however, both groups show a non-significant difference in the PSS (p = .200) and NA scores (p = .367). After exposure to positive stimuli, there is a significant difference in the PSS scores (p = .001), but a non-significant difference in the NA scores (p = .661) of the two groups. When compared across the three conditions, the high N group reports significantly higher perceived stress (p = .002), but not significantly higher negative affect (p = .123), than the low N group. Finally, for both PSS and NA scores, there is no interaction between neuroticism and any of the three treatment conditions (p = .176; p = .338, respectively). This study shows that the high N group may be at risk for health disparities due to maintaining a chronically higher baseline stress perception and negative affect state under neutral conditions than the low N group. Implications of the study are discussed.
Tsetserukou, D.; Neviarouskaya, A.
The paper focuses on a novel concept of emotional telepresence. The iFeel_IM! system, which is in the vanguard of this technology, integrates the 3D virtual world Second Life, an intelligent component for automatic emotion recognition from text messages, and innovative affective haptic interfaces providing additional nonverbal communication channels through simulation of emotional feedback and social touch (physical co-presence). Users can not only exchange messages but also emotionally and physically feel the presence of the communication partner (e.g., family member, friend, or beloved person). The next prototype of the system will include a tablet computer. The user can realize haptic interaction with the avatar, and thus influence its mood and the emotion of the partner. A finger gesture language will be designed for communication with the avatar. This will bring a new level of immersion to on-line communication.
Leow, Li-Ann; Parrott, Taylor; Grahn, Jessica A
Slowed gait in patients with Parkinson's disease (PD) can be improved when patients synchronize footsteps to isochronous metronome cues, but limited retention of such improvements suggest that permanent cueing regimes are needed for long-term improvements. If so, music might make permanent cueing regimes more pleasant, improving adherence; however, music cueing requires patients to synchronize movements to the "beat," which might be difficult for patients with PD who tend to show weak beat perception. One solution may be to use high-groove music, which has high beat salience that may facilitate synchronization, and affective properties, which may improve motivation to move. As a first step to understanding how beat perception affects gait in complex neurological disorders, we examined how beat perception ability affected gait in neurotypical adults. Synchronization performance and gait parameters were assessed as healthy young adults with strong or weak beat perception synchronized to low-groove music, high-groove music, and metronome cues. High-groove music was predicted to elicit better synchronization than low-groove music, due to its higher beat salience. Two musical tempi, or rates, were used: (1) preferred tempo: beat rate matched to preferred step rate and (2) faster tempo: beat rate adjusted to 22.5% faster than preferred step rate. For both strong and weak beat-perceivers, synchronization performance was best with metronome cues, followed by high-groove music, and worst with low-groove music. In addition, high-groove music elicited longer and faster steps than low-groove music, both at preferred tempo and at faster tempo. Low-groove music was particularly detrimental to gait in weak beat-perceivers, who showed slower and shorter steps compared to uncued walking. The findings show that individual differences in beat perception affect gait when synchronizing footsteps to music, and have implications for using music in gait rehabilitation.
Sako, Wataru; Fujita, Koji; Vo, An; Rucker, Janet C; Rizzo, John-Ross; Niethammer, Martin; Carbon, Maren; Bressman, Susan B; Uluğ, Aziz M; Eidelberg, David
Although primary dystonia is defined by its characteristic motor manifestations, non-motor signs and symptoms have increasingly been recognized in this disorder. Recent neuroimaging studies have related the motor features of primary dystonia to connectivity changes in cerebello-thalamo-cortical pathways. It is not known, however, whether the non-motor manifestations of the disorder are associated with similar circuit abnormalities. To explore this possibility, we used functional magnetic resonance imaging to study primary dystonia and healthy volunteer subjects while they performed a motion perception task in which elliptical target trajectories were visually tracked on a computer screen. Prior functional magnetic resonance imaging studies of healthy subjects performing this task have revealed selective activation of motor regions during the perception of 'natural' versus 'unnatural' motion (defined respectively as trajectories with kinematic properties that either comply with or violate the two-thirds power law of motion). Several regions with significant connectivity changes in primary dystonia were situated in proximity to normal motion perception pathways, suggesting that abnormalities of these circuits may also be present in this disorder. To determine whether activation responses to natural versus unnatural motion in primary dystonia differ from normal, we used functional magnetic resonance imaging to study 10 DYT1 dystonia and 10 healthy control subjects at rest and during the perception of 'natural' and 'unnatural' motion. Both groups exhibited significant activation changes across perceptual conditions in the cerebellum, pons, and subthalamic nucleus. The two groups differed, however, in their responses to 'natural' versus 'unnatural' motion in these regions. In healthy subjects, regional activation was greater during the perception of natural (versus unnatural) motion (P < 0.05). By contrast, in DYT1 dystonia subjects, activation was relatively greater
Vignais, Nicolas; Kulpa, Richard; Brault, Sébastien; Presse, Damien; Bideau, Benoit
Visual information uptake is a fundamental element of sports involving interceptive tasks. Several methodologies, like video and methods based on virtual environments, are currently employed to analyze visual perception during sport situations. Both techniques have advantages and drawbacks. The goal of this study is to determine which of these technologies may be preferentially used to analyze visual information uptake during a sport situation. To this aim, we compared a handball goalkeeper's performance using two standardized methodologies: video clip and virtual environment. We examined this performance for two response tasks: an uncoupled task (goalkeepers show where the ball ends) and a coupled task (goalkeepers try to intercept the virtual ball). Variables investigated in this study were percentage of correct zones, percentage of correct responses, radial error and response time. The results showed that handball goalkeepers were more effective, more accurate and started to intercept earlier when facing a virtual handball thrower than when facing the video clip. These findings suggested that the analysis of visual information uptake for handball goalkeepers was better performed by using a 'virtual reality'-based methodology. Technical and methodological aspects of these findings are discussed further.
Sim, Eun-Jin; Helbig, Hannah B; Graf, Markus; Kiefer, Markus
Recent evidence suggests an interaction between the ventral visual-perceptual and dorsal visuo-motor brain systems during the course of object recognition. However, the precise function of the dorsal stream for perception remains to be determined. The present study specified the functional contribution of the visuo-motor system to visual object recognition using functional magnetic resonance imaging and event-related potential (ERP) during action priming. Primes were movies showing hands performing an action with an object with the object being erased, followed by a manipulable target object, which either afforded a similar or a dissimilar action (congruent vs. incongruent condition). Participants had to recognize the target object within a picture-word matching task. Priming-related reductions of brain activity were found in frontal and parietal visuo-motor areas as well as in ventral regions including inferior and anterior temporal areas. Effective connectivity analyses suggested functional influences of parietal areas on anterior temporal areas. ERPs revealed priming-related source activity in visuo-motor regions at about 120 ms and later activity in the ventral stream at about 380 ms. Hence, rapidly initiated visuo-motor processes within the dorsal stream functionally contribute to visual object recognition in interaction with ventral stream processes dedicated to visual analysis and semantic integration.
Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann
The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real-time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC135 reduced gravity aircraft.
Mäthger, Lydia M; Barbosa, Alexandra; Miner, Simon; Hanlon, Roger T
We tested color perception based upon a robust behavioral response in which cuttlefish (Sepia officinalis) respond to visual stimuli (a black and white checkerboard) with a quantifiable, neurally controlled motor response (a body pattern). In the first experiment, we created 16 checkerboard substrates in which 16 grey shades (from white to black) were paired with one green shade (matched to the maximum absorption wavelength of S. officinalis' sole visual pigment, 492 nm), assuming that one of the grey shades would give a similar achromatic signal to the tested green. In the second experiment, we created a checkerboard using one blue and one yellow shade whose intensities were matched to the cuttlefish's visual system. In both assays it was tested whether cuttlefish would show disruptive coloration on these checkerboards, indicating their ability to distinguish checkers based solely on wavelength (i.e., color). Here, we show clearly that cuttlefish must be color blind, as they showed non-disruptive coloration on the checkerboards whose color intensities were matched to the Sepia visual system, suggesting that the substrates appeared to their eyes as uniform backgrounds. Furthermore, we show that cuttlefish are able to perceive objects in their background that differ in contrast by approximately 15%. This study adds support to previous reports that S. officinalis is color blind, yet the question of how cuttlefish achieve "color-blind camouflage" in chromatically rich environments still remains.
Grant, Ken W.; van Wassenhove, Virginie
Auditory-visual speech perception has been shown repeatedly to be both more accurate and more robust than auditory speech perception. Attempts to explain these phenomena usually treat acoustic and visual speech information (i.e., accessed via speechreading) as though they were derived from independent processes. Recent electrophysiological (EEG) studies, however, suggest that visual speech processes may play a fundamental role in modulating the way we hear. For example, both the timing and amplitude of auditory-specific event-related potentials as recorded by EEG are systematically altered when speech stimuli are presented audiovisually as opposed to auditorilly. In addition, the detection of a speech signal in noise is more readily accomplished when accompanied by video images of the speaker's production, suggesting that the influence of vision on audition occurs quite early in the perception process. But the impact of visual cues on what we ultimately hear is not limited to speech. Our perceptions of loudness, timbre, and sound source location can also be influenced by visual cues. Thus, for speech and nonspeech stimuli alike, predicting a listener's response to sound based on acoustic engineering principles alone may be misleading. Examples of acoustic-visual interactions will be presented which highlight the multisensory nature of our hearing experience.
Brancazio, Lawrence; Miller, Joanne L
The McGurk effect, where an incongruent visual syllable influences identification of an auditory syllable, does not always occur, suggesting that perceivers sometimes fail to use relevant visual phonetic information. We tested whether another visual phonetic effect, which involves the influence of visual speaking rate on perceived voicing (Green & Miller, 1985), would occur in instances when the McGurk effect does not. In Experiment 1, we established this visual rate effect using auditory and visual stimuli matching in place of articulation, finding a shift in the voicing boundary along an auditory voice-onset-time continuum with fast versus slow visual speech tokens. In Experiment 2, we used auditory and visual stimuli differing in place of articulation and found a shift in the voicing boundary due to visual rate when the McGurk effect occurred and, more critically, when it did not. The latter finding indicates that phonetically relevant visual information is used in speech perception even when the McGurk effect does not occur, suggesting that the incidence of the McGurk effect underestimates the extent of audio-visual integration.
Jastrzebski, Mikolaj; Bala, Aleksandra
Psilocybin is a substance of natural origin that occurs in hallucinogenic mushrooms (most commonly in the genus Psilocybe). After its synthesis in 1958, research began on its psychoactive properties, particularly its strong effects on visual perception and spatial orientation. Because of the very broad spectrum of psilocybin's effects, research has also addressed its other ranges of action, including effects on physiological processes such as saccadic eye movements. Neuroimaging and neurophysiological research (positron emission tomography, PET, and electroencephalography, EEG) indicates a change in the rate of brain metabolism and a desynchronization of the cerebral hemispheres. Experimental studies show psilocybin-induced changes in visual perception and distortions in the handwriting of examined patients. The subjective experiences reported by subjects are widely described. There are also efforts to apply questionnaire testing to people under the influence of psilocybin, in the context of the similarity of the psilocybin-induced state to the initial stages of schizophrenia, as well as research aimed at creating an 'artificial' model of the disease.
Wallbrown, J D; Wallbrown, F H; Engin, A W
The study investigated the relative efficiency of the Bender and MPD as assessors of achievement-related errors in visual-motor perception. Clinical experience with these two tests suggests that beyond first grade the MPD is more sensitive than the Bender for purposes of measuring deficits in visual-motor perception that interfere with effective classroom learning. The sample was composed of 153 third-grade children from two upper-middle-class elementary schools in a suburban school system in central Ohio. For three of the four achievement criteria, the results were clearly congruent with the hypothesis stated above. That is, SpCD errors from the MPD not only showed significantly higher negative rs with the criteria (reading vocabulary, reading comprehension, and mathematics computation) than Koppitz errors from the Bender, but also accounted for a much higher proportion of the variance in these criteria. Thus, the findings suggest that psychologists engaged in the assessment of older children should seriously consider adding the MPD to their assessment battery.
Giabbiconi, Claire-Marie; Jurilj, Verena; Gruber, Thomas; Vocks, Silja
In cognitive neuroscience, interest in the neuronal basis underlying the processing of human bodies is steadily increasing. Based on functional magnetic resonance imaging studies, it is assumed that the processing of pictures of human bodies is anchored in a network of specialized brain areas comprising the extrastriate and the fusiform body area (EBA, FBA). An alternative way to examine the dynamics within these networks is electroencephalography, more specifically so-called steady-state visually evoked potentials (SSVEPs). In SSVEP tasks, a visual stimulus is presented repetitively at a predefined flickering rate and typically elicits a continuous oscillatory brain response at this frequency. This brain response is characterized by an excellent signal-to-noise ratio, a major advantage for source reconstructions. The main goal of the present study was to demonstrate the feasibility of this method for studying human body perception. To that end, we presented pictures of bodies and contrasted the resulting SSVEPs to two control conditions, i.e., non-objects and pictures of everyday objects (chairs). We found specific SSVEP amplitude differences between bodies and both control conditions. Source reconstructions localized the SSVEP generators to a network of temporal, occipital and parietal areas. Interestingly, only body perception resulted in activity differences in middle temporal and lateral occipitotemporal areas, most likely reflecting the EBA/FBA.
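Because the brain response in an SSVEP paradigm oscillates at the known flicker rate, its strength is commonly quantified as the Fourier amplitude of the EEG at that frequency. The following is a minimal sketch of that step on synthetic data; the sampling rate, flicker frequency, and signal amplitude are assumed values for illustration, not parameters from the study.

```python
import numpy as np

def ssvep_amplitude(signal, fs, flicker_hz):
    """Single-sided FFT amplitude of `signal` at the flicker frequency."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n * 2  # scale to amplitude units
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - flicker_hz))     # nearest frequency bin
    return spectrum[idx]

# Synthetic 4 s EEG epoch: a 12 Hz oscillation (amplitude 2) buried in noise.
fs, flicker = 500.0, 12.0          # assumed sampling rate (Hz) and flicker rate (Hz)
t = np.arange(0, 4, 1.0 / fs)      # 4 s epoch gives 0.25 Hz frequency resolution
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * flicker * t) + rng.normal(0, 1.0, t.size)

amp = ssvep_amplitude(eeg, fs, flicker)  # recovers roughly the driving amplitude
```

Real analyses average over epochs and electrodes and often normalize against neighboring frequency bins, but the core amplitude-at-flicker-frequency measure is as above.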
Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.
Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy of affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger, and disgust, as well as the emotion-less state, which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and that the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.
Asano, Kohei; Taki, Yasuyuki; Hashizume, Hiroshi; Sassa, Yuko; Thyreau, Benjamin; Asano, Michiko; Takeuchi, Hikaru; Kawashima, Ryuta
Humans perceive textual and nontextual information in visual perception, and both depend on language. In childhood education, students exhibit diverse perceptual abilities, such that some students process textual information better and some process nontextual information better. These predispositions involve many factors, including cognitive ability and learning preference. However, the relationship between verbal and nonverbal cognitive abilities and brain activation during visual perception has not yet been examined in children. We used functional magnetic resonance imaging to examine the relationship between nonverbal and verbal cognitive abilities and brain activation during nontextual visual perception in a large number of children. A significant positive correlation was found between nonverbal cognitive abilities and brain activation in the right temporoparietal junction, which is thought to be related to attention reorienting. This significant positive correlation existed only in boys. These findings suggested that male brain activation differed from female brain activation, and that this depended on individual cognitive processes, even if there was no gender difference in behavioral performance.
Rojas, David; Kapralos, Bill; Cristancho, Sayra; Collins, Karen; Hogue, Andrew; Conati, Cristina; Dubrowski, Adam
Despite the benefits associated with virtual learning environments and serious games, there are open, fundamental issues regarding simulation fidelity and multi-modal cue interaction and their effect on immersion, transfer of knowledge, and retention. Here we describe the results of a study that examined the effect of ambient (background) sound on the perception of visual fidelity (defined with respect to texture resolution). Results suggest that the perception of visual fidelity is dependent on ambient sound and, more specifically, that white noise can have detrimental effects on our perception of high quality visuals. The results of this study will guide future studies that will ultimately aid in developing an understanding of the role that fidelity and multi-modal interactions play with respect to knowledge transfer and retention for users of virtual simulations and serious games.
Berry, Meredith S; Repke, Meredith A; Nickerson, Norma P; Conway, Lucian G; Odum, Amy L; Jordan, Kerry E
Impulsivity in delay discounting is associated with maladaptive behaviors such as overeating and drug and alcohol abuse. Researchers have recently noted that delay discounting, even when measured by a brief laboratory task, may be the best predictor of human health related behaviors (e.g., exercise) currently available. Identifying techniques to decrease impulsivity in delay discounting, therefore, could help improve decision-making on a global scale. Visual exposure to natural environments is one recent approach shown to decrease impulsive decision-making in a delay discounting task, although the mechanism driving this result is currently unknown. The present experiment was thus designed to evaluate not only whether visual exposure to natural (mountains, lakes) relative to built (buildings, cities) environments resulted in less impulsivity, but also whether this exposure influenced time perception. Participants were randomly assigned to either a natural environment condition or a built environment condition. Participants viewed photographs of either natural scenes or built scenes before and during a delay discounting task in which they made choices about receiving immediate or delayed hypothetical monetary outcomes. Participants also completed an interval bisection task in which natural or built stimuli were judged as relatively longer or shorter presentation durations. Following the delay discounting and interval bisection tasks, additional measures of time perception were administered, including how many minutes participants thought had passed during the session and a scale measurement of whether time "flew" or "dragged" during the session. Participants exposed to natural as opposed to built scenes were less impulsive and also reported longer subjective session times, although no differences across groups were revealed with the interval bisection task. These results are the first to suggest that decreased impulsivity from exposure to natural as opposed to built environments may be related to changes in time perception.
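Impulsivity in delay discounting tasks like the one above is conventionally summarized by the discount rate k of Mazur's hyperbolic model, V = A / (1 + kD), fit to the indifference points a participant produces at each delay; larger k means steeper discounting, i.e., more impulsive choice. The sketch below fits k to hypothetical indifference points (the amounts, delays, and k value are illustrative, not the study's data).

```python
import numpy as np

def hyperbolic(delay, k, amount=100.0):
    """Mazur's hyperbolic model: subjective value of `amount` after `delay`."""
    return amount / (1.0 + k * delay)

def fit_k(delays, indiff, amount=100.0):
    """Least-squares fit of the discount rate k via a log-spaced grid search
    (a deliberately simple alternative to scipy.optimize.curve_fit)."""
    ks = np.logspace(-4, 1, 2000)
    sse = [np.sum((indiff - hyperbolic(delays, k, amount)) ** 2) for k in ks]
    return ks[int(np.argmin(sse))]

# Hypothetical indifference points (dollars) for a $100 delayed reward,
# generated noiselessly from k = 0.05 so the fit can be checked.
delays = np.array([1, 7, 30, 90, 365], dtype=float)  # days
indiff = hyperbolic(delays, k=0.05)

k_hat = fit_k(delays, indiff)  # should recover approximately 0.05
```

Because empirical k values are strongly skewed, group comparisons (e.g., natural vs. built conditions) are usually run on log-transformed k or on a model-free measure such as area under the discounting curve.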
Willems, Roel M; Clevis, Krien; Hagoort, Peter
We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of a visual scene that is in itself neutral intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants' brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function for emotional information across domains such as visual and linguistic information.
Zhou, Xinlin; Wei, Wei; Zhang, Yiyun; Cui, Jiaxin; Chen, Chuansheng
Studies have shown that numerosity processing (e.g., comparison of numbers of dots in two dot arrays) is significantly correlated with arithmetic performance. Researchers have attributed this association to the fact that both tasks share magnitude processing. The current investigation tested an alternative hypothesis, which states that visual perceptual ability (as measured by a figure-matching task) can account for the close relation between numerosity processing and arithmetic performance (computational fluency). Four hundred twenty-four third- to fifth-grade children (220 boys and 204 girls, 8.0–11.0 years old; 120 third graders, 146 fourth graders, and 158 fifth graders) were recruited from two schools (one urban and one suburban) in Beijing, China. Six classes were randomly selected from each school, and all students in each selected class participated in the study. All children were given a series of cognitive and mathematical tests, including numerosity comparison, figure matching, forward verbal working memory, visual tracing, non-verbal matrices reasoning, mental rotation, choice reaction time, arithmetic tests and a curriculum-based mathematical achievement test. Results showed that figure-matching ability had higher correlations with numerosity processing and computational fluency than did other cognitive factors (e.g., forward verbal working memory, visual tracing, non-verbal matrix reasoning, mental rotation, and choice reaction time). More importantly, hierarchical multiple regression showed that figure-matching ability accounted for the well-established association between numerosity processing and computational fluency. In support of the visual perception hypothesis, the results suggest that visual perceptual ability, rather than magnitude processing, may be the shared component of numerosity processing and arithmetic performance. PMID:26441740
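The hierarchical-regression logic in this abstract is that the numerosity–fluency correlation should shrink to near zero once figure matching is entered first as a predictor. A minimal simulation of that logic is sketched below; the data are synthetic, constructed so that numerosity relates to fluency only through variance shared with figure matching (the effect sizes and structure are assumptions for illustration, not the study's estimates).

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(42)
n = 424  # sample size matching the study
figure_match = rng.normal(size=n)
# Numerosity shares variance with figure matching; fluency depends only on
# figure matching, so numerosity's link to fluency is entirely indirect.
numerosity = 0.7 * figure_match + rng.normal(scale=0.7, size=n)
fluency = 0.6 * figure_match + rng.normal(scale=0.8, size=n)

r2_num = r_squared(numerosity[:, None], fluency)   # step 1: numerosity alone
r2_fig = r_squared(figure_match[:, None], fluency) # figure matching alone
r2_both = r_squared(np.column_stack([figure_match, numerosity]), fluency)
increment = r2_both - r2_fig  # unique contribution of numerosity beyond figure matching
```

Here `r2_num` is clearly positive (the zero-order association) while `increment` is near zero, which is the pattern the authors report for their real data.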
Trousselard, Marion; Cian, Corinne; Nougier, Vincent; Pla, Simon; Raphel, Christian
Without relevant visual cues, the subjective visual vertical (SVV) is biased in roll-tilted subjects toward the body axis (Aubert or A-effect). This study focused on the role of the somatosensory system with respect to the SVV and on whether somesthetic cues act through the estimated body tilt. The body cast technology was used to obtain a diffuse tactile stimulation. An increased A-effect was expected because of a greater underestimation of the body position in the body cast. Sixteen subjects placed in a tilt chair were rolled sideways from 0 degrees to 105 degrees. They were asked to verbally indicate their subjective body position and then to adjust a luminous line to the vertical under strapped and body cast conditions. Results showed a greater A-effect (p < .001) but an overestimation of the body orientation (p < .01) in the body cast condition for the higher tilt values (beyond 60 degrees). Since the otolith organs produced the same gravity response in both conditions, errors were due to a change in somesthetic cues. Visual and postural errors were not directly related (no correlation). However, the angular distance between the apparent body position and the SVV remained stable, suggesting that the change in somatosensory pattern inputs has a similar impact on the cognitive processes involved in assessing the perception of external space and the sense of self-position.
This study examined the perceptual dimensions of visual material properties. Photographs of 50 objects were presented to the participants, and they reported a suitable onomatopoeia (mimetic word) for describing the material of the object in each photograph, based on visual appearance. The participants' responses were collated into a contingency table of photographs × onomatopoeias. After removing some items from the table, correspondence analysis was applied to the contingency table, and a six-dimensional biplot was obtained. By rotating the axes to maximize sparseness of the coordinates for the items in the biplot, three meaningful perceptual dimensions were derived: wetness/stickiness, fluffiness/softness, and smoothness-roughness/gloss-dullness. Two additional possible dimensions were obtained: crumbliness and coldness. These dimensions, except gloss-dullness, have received little attention in vision science, though they have been suggested as perceptual dimensions of tactile texture. This suggests that the perceptual dimensions that are considered to be primarily related to haptics are also important in visual material perception.
Godard, Ornella; Baudouin, Jean-Yves; Bonnet, Philippe; Fiori, Nicole
We investigated the psychophysical factors underlying the identity-emotion interaction in face perception. Visual field and sex were also taken into account. Participants had to judge whether a probe face, presented in either the left or the right visual field, and a central target face belonged to the same person while emotional expression varied (Experiment 1), or to judge whether probe and target faces expressed the same emotion while identity was manipulated (Experiment 2). For accuracy we replicated the mutual facilitation effect between identity and emotion; no sex or hemispheric differences were found. Processing speed measurements, however, showed a lesser degree of interference in women than in men, especially for matching identity when faces expressed different emotions after a left-visual-field probe-face presentation. Psychophysical indices can be used to determine whether these effects are perceptual (A') or instead arise at a post-perceptual decision-making stage (B"). The influence of identity on the processing of facial emotion seems to be due to perceptual factors, whereas the influence of emotion changes on identity processing seems to be related to decisional factors. In addition, men seem to be more "conservative" after a LVF/RH probe-face presentation when processing identity. Women seem to benefit from better abilities to extract facial invariant aspects relative to identity.
Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.
Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829
Sklenicka, Petr; Molnarova, Kristina
The study presented here focuses on visual preferences expressed by respondents for five relatively natural habitat types used in land reclamation projects in the North-West Bohemian brown coal basins (Czech Republic). Respondents evaluated the perceived beauty of the habitat types using a photograph questionnaire, on the basis of a positively skewed 6-point Likert scale. The order of the habitat types, from most beautiful to least beautiful, was: managed coniferous forest, wild deciduous forest, managed deciduous forest, managed mixed forest, and managed grassland. Higher visual preferences were indicated for older forest habitats (30-40 years old) than for younger habitats (10-20 years old). In addition, respondents preferred wild deciduous forest to managed deciduous forest. Managed grasslands and non-native managed coniferous forests were preferred by older people with a lower level of education and low income living in the post-mining area. On the other hand, native, wild deciduous forest was awarded the highest perceived beauty score by younger, more educated respondents with higher income, living outside the post-mining landscapes. The study confirms differences in the perception of various forms of land reclamation by residents vs. non-residents, and its findings also confirm the need for sociological research in post-mining landscapes within the process of designing rehabilitated landscapes. From the visual standpoint, the results of our study also support the current trend toward using natural succession in the reclamation of post-mining landscapes.
Andreou, Christina; Bozikas, Vasilis P.; Luedtke, Thies; Moritz, Steffen
Delusions are defined as fixed erroneous beliefs that are based on misinterpretation of events or perception, and cannot be corrected by argumentation to the opposite. Cognitive theories of delusions regard this symptom as resulting from specific distorted thinking styles that lead to biased integration and interpretation of perceived stimuli (i.e., reasoning biases). In previous studies, we were able to show that one of these reasoning biases, overconfidence in errors, can be modulated by drugs that act on the dopamine system, a major neurotransmitter system implicated in the pathogenesis of delusions and other psychotic symptoms. Another processing domain suggested to involve the dopamine system and to be abnormal in psychotic disorders is sensory perception. The present study aimed to investigate whether (lower-order) sensory perception and (higher-order) overconfidence in errors are similarly affected by dopaminergic modulation in healthy subjects. Thirty-four healthy individuals were assessed upon administration of l-dopa, placebo, or haloperidol within a randomized, double-blind, cross-over design. Variables of interest were hits and false alarms in an illusory perception paradigm requiring speeded detection of pictures over a noisy background, and subjective confidence ratings for correct and incorrect responses. There was a significant linear increase of false alarm rates from haloperidol to placebo to l-dopa, whereas hit rates were not affected by dopaminergic manipulation. As hypothesized, confidence in error responses was significantly higher with l-dopa compared to placebo. Moreover, confidence in erroneous responses significantly correlated with false alarm rates. These findings suggest that overconfidence in errors and aberrant sensory processing might be both interdependent and related to dopaminergic transmission abnormalities in patients with psychosis. PMID:25932015
Ren, Jie; Huang, Shaochen; Zhang, Jiancheng; Zhu, Qin; Wilson, Andrew D; Snapp-Childs, Winona; Bingham, Geoffrey P
Previously, we measured perceptuo-motor learning rates across the lifespan and found a sudden drop in learning rates between ages 50 and 60, called the "50s cliff." The task was a unimanual visual rhythmic coordination task in which participants used a joystick to oscillate one dot in a display in coordination with another dot oscillated by a computer. Participants learned to produce a coordination with a 90° relative phase relation between the dots. Learning rates for participants over 60 were half those of younger participants. Given existing evidence for visual motion perception deficits in people over 60 and the role of visual motion perception in the coordination task, it remained unclear whether the 50s cliff reflected onset of this deficit or a genuine decline in perceptuo-motor learning. The current work addressed this question. Two groups of 12 participants in each of four age ranges (20s, 50s, 60s, 70s) learned to perform a bimanual coordination of 90° relative phase. One group trained with only haptic information and the other group with both haptic and visual information about relative phase. Both groups were tested in both information conditions at baseline and post-test. If the 50s cliff was caused by an age dependent deficit in visual motion perception, then older participants in the visual group should have exhibited less learning than those in the haptic group, which should not exhibit the 50s cliff, and older participants in both groups should have performed less well when tested with visual information. Neither of these expectations was confirmed by the results, so we concluded that the 50s cliff reflects a genuine decline in perceptuo-motor learning with aging, not the onset of a deficit in visual motion perception.
Pogosyan, Marianna; Engelmann, Jan B.
Cultural differences in the perception of positive affect intensity within an advertising context were investigated among American, Japanese, and Russian participants. Participants were asked to rate the intensity of facial expressions of positive emotions, which displayed either subtle, low intensity, or salient, high intensity expressions of positive affect. In agreement with previous findings from cross-cultural psychological research, current results demonstrate both cross-cultural agreement and differences in the perception of positive affect intensity across the three cultures. Specifically, American participants perceived high arousal (HA) images as significantly less calm than participants from the other two cultures, while the Japanese participants perceived low arousal (LA) images as significantly more excited than participants from the other cultures. The underlying mechanisms of these cultural differences were further investigated through difference scores that probed for cultural differences in perception and categorization of positive emotions. Findings indicate that rating differences are due to (1) perceptual differences in the extent to which HA images were discriminated from LA images, and (2) categorization differences in the extent to which facial expressions were grouped into affect intensity categories. Specifically, American participants revealed significantly higher perceptual differentiation between arousal levels of facial expressions in high and intermediate intensity categories. Japanese participants, on the other hand, did not discriminate between high and low arousal affect categories to the same extent as did the American and Russian participants. These findings indicate the presence of cultural differences in underlying decoding mechanisms of facial expressions of positive affect intensity. Implications of these results for global advertising are discussed. PMID:22084635
Sinke, Christopher; Neufeld, Janina; Wiswede, Daniel; Emrich, Hinderk M.; Bleich, Stefan; Münte, Thomas F.; Szycik, Gregor R.
Synesthesia entails a special kind of sensory perception, where stimulation in one sensory modality leads to an internally generated perceptual experience of another, not stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as here the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed changed multimodal integration, thus suggesting perceptual alterations at a global level. Here, we focused on audio–visual processing in synesthesia using a semantic classification task in combination with visually or auditory–visually presented animate and inanimate objects in an audio–visually congruent or incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes. PMID:24523689
Robinson, Michael D; Moeller, Sara K; Buchholz, Maria M; Boyd, Ryan L; Troop-Gordon, Wendy
Individuals attuned to affective signals from the environment may possess an advantage in the emotion-regulation realm. In two studies (total n = 151), individual differences in affective perception accuracy were assessed in an objective, performance-based manner. Subsequently, the same individuals completed daily diary protocols in which daily stressor levels were reported as well as problematic states shown to be stress-reactive in previous studies. In both studies, individual differences in affect perception accuracy interacted with daily stressor levels to predict the problematic outcomes. Daily stressors precipitated problematic reactions--whether depressive feelings (study 1) or somatic symptoms (study 2)--at low levels of affect perception accuracy, but did not do so at high levels of affect perception accuracy. The findings support a regulatory view of such perceptual abilities. Implications for understanding emotion regulation processes, emotional intelligence, and individual differences in reactivity are discussed.
Cyriac, Praveen; Bertalmio, Marcelo; Kane, David; Vazquez-Corral, Javier
High dynamic range imaging techniques involve capturing and storing real-world radiance values that span many orders of magnitude. However, common display devices can usually reproduce intensity ranges of only two to three orders of magnitude. Therefore, in order to display a high dynamic range image on a low dynamic range screen, the dynamic range of the image needs to be compressed without losing details or introducing artefacts, a process called tone mapping. A good tone mapping operator must produce a low dynamic range image that matches the perception of the real-world scene as closely as possible. We propose a two-stage tone mapping approach, in which the first stage is a global method for range compression based on the gamma curve that best equalizes the lightness histogram, and the second stage performs local contrast enhancement and color induction using neural activity models for the visual cortex.
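The global stage described above can be illustrated with a minimal sketch: select the gamma exponent whose compressed image yields the most uniform (maximum-entropy) lightness histogram, then apply it. This is only an assumed reading of the abstract; the function name `global_tone_map`, the candidate gamma grid, and the entropy criterion for "histogram equalization" are illustrative choices, not the authors' implementation (which also includes a second, cortical-model stage not sketched here).

```python
import numpy as np

def global_tone_map(hdr_lum, gammas=np.linspace(0.1, 1.0, 50), bins=256):
    """Hypothetical sketch of a global gamma-based range compression:
    pick the gamma whose output has the most uniform (max-entropy)
    lightness histogram, then return the compressed image."""
    # Normalize radiance to (0, 1]; HDR values span many orders of magnitude.
    lum = np.asarray(hdr_lum, dtype=float)
    lum = lum / lum.max()
    best_gamma, best_entropy = gammas[0], -np.inf
    for g in gammas:
        mapped = lum ** g                      # gamma range compression
        hist, _ = np.histogram(mapped, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        entropy = -np.sum(p * np.log2(p))      # uniform histogram -> max entropy
        if entropy > best_entropy:
            best_gamma, best_entropy = g, entropy
    return lum ** best_gamma, best_gamma
```

A gamma below 1 expands dark regions and compresses highlights, which is why searching over exponents in (0, 1] tends to flatten the heavily skewed lightness histogram of HDR radiance data.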