Sample records for visual information affects

  1. Audio-visual affective expression recognition

    NASA Astrophysics Data System (ADS)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

    Automatic affective expression recognition has attracted increasing attention from researchers across disciplines; it promises to contribute to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and to advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is the process that enables humans to assess affective states robustly and flexibly. To capture the richness and subtlety of human emotional behavior, a computer should likewise be able to integrate information from multiple sensors. In this paper we introduce our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. We present several promising methods for integrating information from the audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.
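    The abstract does not specify the fusion scheme; as one hedged illustration, a simple late-fusion rule combines the per-class posteriors of separate audio and visual classifiers. The function name, weight, and example probabilities below are assumptions for illustration, not the authors' actual method.

```python
import numpy as np

def late_fusion(audio_probs, visual_probs, w_audio=0.5):
    """Weighted product rule: fuse per-class posteriors from two
    modality-specific classifiers, then renormalize to a distribution."""
    fused = (audio_probs ** w_audio) * (visual_probs ** (1.0 - w_audio))
    return fused / fused.sum()

# Hypothetical posteriors over three affective states
# (e.g. happy, neutral, sad) from each modality.
audio = np.array([0.6, 0.3, 0.1])
visual = np.array([0.5, 0.2, 0.3])
fused = late_fusion(audio, visual)
print(fused.argmax())  # class 0 wins after fusion
```

    When the modalities disagree, the weight `w_audio` controls which classifier dominates; equal weighting reduces to the geometric mean of the two posteriors.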

  2. When seeing outweighs feeling: a role for prefrontal cortex in passive control of negative affect in blindsight.

    PubMed

    Anders, Silke; Eippert, Falk; Wiens, Stefan; Birbaumer, Niels; Lotze, Martin; Wildgruber, Dirk

    2009-11-01

    Affective neuroscience has been strongly influenced by the view that a 'feeling' is the perception of somatic changes and has consequently often neglected the neural mechanisms that underlie the integration of somatic and other information in affective experience. Here, we investigate affective processing by means of functional magnetic resonance imaging in nine cortically blind patients. In these patients, unilateral postgeniculate lesions prevent primary cortical visual processing in part of the visual field which, as a result, becomes subjectively blind. Residual subcortical processing of visual information, however, is assumed to occur in the entire visual field. As we have reported earlier, these patients show significant startle reflex potentiation when a threat-related visual stimulus is shown in their blind visual field. Critically, this was associated with an increase of brain activity in somatosensory-related areas, and an increase in experienced negative affect. Here, we investigated the patients' response when the visual stimulus was shown in the sighted visual field, that is, when it was visible and cortically processed. Despite the fact that startle reflex potentiation was similar in the blind and sighted visual field, patients reported significantly less negative affect during stimulation of the sighted visual field. In other words, when the visual stimulus was visible and received full cortical processing, the patients' phenomenal experience of affect did not closely reflect somatic changes. This decoupling of phenomenal affective experience and somatic changes was associated with an increase of activity in the left ventrolateral prefrontal cortex and a decrease of affect-related somatosensory activity. Moreover, patients who showed stronger left ventrolateral prefrontal cortex activity tended to show a stronger decrease of affect-related somatosensory activity. 
Our findings show that similar affective somatic changes can be associated with different phenomenal experiences of affect, depending on the depth of cortical processing. They are in line with a model in which the left ventrolateral prefrontal cortex is a relay station that integrates information about subcortically triggered somatic responses and information resulting from in-depth cortical stimulus processing. Tentatively, we suggest that the observed decoupling of somatic responses and experienced affect, and the reduction of negative phenomenal experience, can be explained by a left ventrolateral prefrontal cortex-mediated inhibition of affect-related somatosensory activity.

  3. When seeing outweighs feeling: a role for prefrontal cortex in passive control of negative affect in blindsight

    PubMed Central

    Eippert, Falk; Wiens, Stefan; Birbaumer, Niels; Lotze, Martin; Wildgruber, Dirk

    2009-01-01

    Affective neuroscience has been strongly influenced by the view that a ‘feeling’ is the perception of somatic changes and has consequently often neglected the neural mechanisms that underlie the integration of somatic and other information in affective experience. Here, we investigate affective processing by means of functional magnetic resonance imaging in nine cortically blind patients. In these patients, unilateral postgeniculate lesions prevent primary cortical visual processing in part of the visual field which, as a result, becomes subjectively blind. Residual subcortical processing of visual information, however, is assumed to occur in the entire visual field. As we have reported earlier, these patients show significant startle reflex potentiation when a threat-related visual stimulus is shown in their blind visual field. Critically, this was associated with an increase of brain activity in somatosensory-related areas, and an increase in experienced negative affect. Here, we investigated the patients’ response when the visual stimulus was shown in the sighted visual field, that is, when it was visible and cortically processed. Despite the fact that startle reflex potentiation was similar in the blind and sighted visual field, patients reported significantly less negative affect during stimulation of the sighted visual field. In other words, when the visual stimulus was visible and received full cortical processing, the patients’ phenomenal experience of affect did not closely reflect somatic changes. This decoupling of phenomenal affective experience and somatic changes was associated with an increase of activity in the left ventrolateral prefrontal cortex and a decrease of affect-related somatosensory activity. Moreover, patients who showed stronger left ventrolateral prefrontal cortex activity tended to show a stronger decrease of affect-related somatosensory activity. 
Our findings show that similar affective somatic changes can be associated with different phenomenal experiences of affect, depending on the depth of cortical processing. They are in line with a model in which the left ventrolateral prefrontal cortex is a relay station that integrates information about subcortically triggered somatic responses and information resulting from in-depth cortical stimulus processing. Tentatively, we suggest that the observed decoupling of somatic responses and experienced affect, and the reduction of negative phenomenal experience, can be explained by a left ventrolateral prefrontal cortex-mediated inhibition of affect-related somatosensory activity. PMID:19767414

  4. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    NASA Astrophysics Data System (ADS)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we consider the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary and that their combination can significantly improve the accuracy of affective user models. Specifically, we present and discuss the development and evaluation of two corresponding affect recognition subsystems, with emphasis on the recognition of six basic emotional states, namely happiness, sadness, surprise, anger, and disgust, as well as the emotion-less state, which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information, and that the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.

  5. Feature-Based Memory-Driven Attentional Capture: Visual Working Memory Content Affects Visual Attention

    ERIC Educational Resources Information Center

    Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan

    2006-01-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…

  6. The shaping of information by visual metaphors.

    PubMed

    Ziemkiewicz, Caroline; Kosara, Robert

    2008-01-01

    The nature of an information visualization can be considered to lie in the visual metaphors it uses to structure information. The process of understanding a visualization therefore involves an interaction between these external visual metaphors and the user's internal knowledge representations. To investigate this claim, we conducted an experiment to test the effects of visual metaphor and verbal metaphor on the understanding of tree visualizations. Participants answered simple data comprehension questions while viewing either a treemap or a node-link diagram. Questions were worded to reflect a verbal metaphor that was either compatible or incompatible with the visualization a participant was using. The results (based on correctness and response time) suggest that the visual metaphor indeed affects how a user derives information from a visualization. Additionally, we found that the degree to which a user is affected by the metaphor is strongly correlated with the user's ability to answer task questions correctly. These findings are a first step towards illuminating how visual metaphors shape user understanding, and have significant implications for the evaluation, application, and theory of visualization.

  7. Visual Predictions in the Orbitofrontal Cortex Rely on Associative Content

    PubMed Central

    Chaumon, Maximilien; Kveraga, Kestutis; Barrett, Lisa Feldman; Bar, Moshe

    2014-01-01

    Predicting upcoming events from incomplete information is an essential brain function. The orbitofrontal cortex (OFC) plays a critical role in this process by facilitating recognition of sensory inputs via predictive feedback to sensory cortices. In the visual domain, the OFC is engaged by low spatial frequency (LSF) and magnocellular-biased inputs, but beyond this, we know little about the information content required to activate it. Is the OFC automatically engaged to analyze any LSF information for meaning? Or is it engaged only when LSF information matches preexisting memory associations? We tested these hypotheses and show that only LSF information that could be linked to memory associations engages the OFC. Specifically, LSF stimuli activated the OFC in 2 distinct medial and lateral regions only if they resembled known visual objects. More identifiable objects increased activity in the medial OFC, known for its function in affective responses. Furthermore, these objects also increased the connectivity of the lateral OFC with the ventral visual cortex, a crucial region for object identification. At the interface between sensory, memory, and affective processing, the OFC thus appears to be attuned to the associative content of visual information and to play a central role in visuo-affective prediction. PMID:23771980

  8. Improving Scores on Computerized Reading Assessments: The Effects of Colored Overlay Use

    ERIC Educational Resources Information Center

    Adams, Tracy A.

    2012-01-01

    Visual stress is a perceptual dysfunction that appears to affect how information is processed as it passes from the eyes to the brain. Photophobia, visual resolution, restricted focus, sustaining focus, and depth perception are all components of visual stress. Because visual stress affects what is perceived by the eye, students with this disorder…

  9. Perception of Elementary Students of Visuals on the Web.

    ERIC Educational Resources Information Center

    El-Tigi, Manal A.; And Others

    The way information is visually designed and synthesized greatly affects how people understand and use that information. Increased use of the World Wide Web as a teaching tool makes it imperative to question how visual/verbal information presented via the Web can increase or restrict understanding. The purpose of this study was to examine…

  10. Access to Awareness for Faces during Continuous Flash Suppression Is Not Modulated by Affective Knowledge

    PubMed Central

    Rabovsky, Milena; Stein, Timo; Abdel Rahman, Rasha

    2016-01-01

    Whether stimuli can be analyzed up to the semantic level while they are suppressed from visual awareness during continuous flash suppression (CFS) is a controversially debated topic. Here, we investigated whether affective knowledge, i.e., affective biographical information about faces, influences the time it takes for initially invisible faces with neutral expressions to overcome suppression and break into consciousness. To test this, we used negative, positive, and neutral famous faces, as well as initially unfamiliar faces that were associated with negative, positive, or neutral biographical information. Affective knowledge influenced ratings of facial expressions, corroborating recent evidence and indicating the success of our affective learning paradigm. Furthermore, we replicated shorter suppression durations for upright than for inverted faces, demonstrating the suitability of our CFS paradigm. However, affective biographical information did not modulate suppression durations for newly learned faces, and even though suppression durations for famous faces were influenced by affective knowledge, these effects did not differ between upright and inverted faces, indicating that they might have been due to low-level visual differences. Thus, we did not obtain unequivocal evidence for genuine influences of affective biographical information on access to visual awareness for faces during CFS. PMID:27119743

  11. Does Seeing Ice Really Feel Cold? Visual-Thermal Interaction under an Illusory Body-Ownership

    PubMed Central

    Kanaya, Shoko; Matsushima, Yuka; Yokosawa, Kazuhiko

    2012-01-01

    Although visual information seems to affect thermal perception (e.g. red color is associated with heat), previous studies have failed to demonstrate an interaction between the visual and thermal senses. However, it has been reported that humans feel an illusory thermal sensation in conjunction with an apparently-thermal visual stimulus placed on a prosthetic hand in the rubber hand illusion (RHI), wherein an individual feels that a prosthetic (rubber) hand belongs to him/her. This study tests the possibility that ownership of the body surface on which a visual stimulus is placed enhances the likelihood of a visual-thermal interaction. We orthogonally manipulated three variables: induced hand-ownership, visually-presented thermal information, and tactually-presented physical thermal information. Results indicated that the sight of an apparently-thermal object on a rubber hand that is illusorily perceived as one's own hand affects thermal judgments about the object physically touching this hand. This effect was not observed without the RHI. The importance to the visual-thermal interaction of ownership of the body part touched by the visual object is discussed. PMID:23144814

  12. Does seeing ice really feel cold? Visual-thermal interaction under an illusory body-ownership.

    PubMed

    Kanaya, Shoko; Matsushima, Yuka; Yokosawa, Kazuhiko

    2012-01-01

    Although visual information seems to affect thermal perception (e.g. red color is associated with heat), previous studies have failed to demonstrate an interaction between the visual and thermal senses. However, it has been reported that humans feel an illusory thermal sensation in conjunction with an apparently-thermal visual stimulus placed on a prosthetic hand in the rubber hand illusion (RHI), wherein an individual feels that a prosthetic (rubber) hand belongs to him/her. This study tests the possibility that ownership of the body surface on which a visual stimulus is placed enhances the likelihood of a visual-thermal interaction. We orthogonally manipulated three variables: induced hand-ownership, visually-presented thermal information, and tactually-presented physical thermal information. Results indicated that the sight of an apparently-thermal object on a rubber hand that is illusorily perceived as one's own hand affects thermal judgments about the object physically touching this hand. This effect was not observed without the RHI. The importance to the visual-thermal interaction of ownership of the body part touched by the visual object is discussed.

  13. Training haptic stiffness discrimination: time course of learning with or without visual information and knowledge of results.

    PubMed

    Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria

    2013-08-01

    In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors: the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks the visual field is occluded, forcing stiffness perception to depend exclusively on haptic information. No studies to date have addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy improved slightly, but decision time deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination thus appears to evolve through at least two distinct phases: a single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Training skills in VE in spaced sessions can be beneficial for tasks in which haptic perception is critical and the visual field is occluded, such as surgical procedures. However, training protocols for such tasks should account for the low impact of multisensory information and KR.
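    The two-alternative forced-choice task above reduces each block to an accuracy and a mean decision time; a minimal scoring sketch follows. The trial layout and numbers are hypothetical, not the study's data.

```python
import statistics

def score_2afc(trials):
    """Score a two-alternative forced-choice block.
    Each trial is (response, correct_answer, decision_time_s);
    accuracy is the proportion of responses matching the answer."""
    accuracy = sum(r == c for r, c, _ in trials) / len(trials)
    mean_rt = statistics.mean(t for _, _, t in trials)
    return accuracy, mean_rt

# Hypothetical four-trial block: 3 of 4 responses correct.
block = [(1, 1, 0.9), (0, 0, 1.2), (1, 0, 1.1), (0, 0, 0.8)]
accuracy, mean_rt = score_2afc(block)
print(accuracy)  # 0.75
```

    Comparing these two numbers across test blocks is how the time courses of accuracy and decision time described in the abstract can diverge.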

  14. Emotion and Perception: The Role of Affective Information

    PubMed Central

    Zadra, Jonathan R.; Clore, Gerald L.

    2011-01-01

    Visual perception and emotion are traditionally considered separate domains of study. In this article, however, we review research showing them to be less separable than usually assumed. In fact, emotions routinely affect how and what we see. Fear, for example, can affect low-level visual processes, sad moods can alter susceptibility to visual illusions, and goal-directed desires can change the apparent size of goal-relevant objects. In addition, the layout of the physical environment, including the apparent steepness of a hill and the distance to the ground from a balcony, can be affected by emotional states. We propose that emotions provide embodied information about the costs and benefits of anticipated action, information that can be used automatically and immediately, circumventing the need for cogitating on the possible consequences of potential actions. Emotions thus provide a strong motivating influence on how the environment is perceived. PMID:22039565

  15. Evolutionary relevance facilitates visual information processing.

    PubMed

    Jackson, Russell E; Calvillo, Dusti P

    2013-11-03

    Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and that evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component of the evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  16. Contribution of amygdalar and lateral hypothalamic neurons to visual information processing of food and nonfood in monkey.

    PubMed

    Ono, T; Tamura, R; Nishijo, H; Nakamura, K; Tabuchi, E

    1989-02-01

    Visual information processing was investigated in the inferotemporal cortical (ITCx)-amygdalar (AM)-lateral hypothalamic (LHA) axis, which contributes to food-nonfood discrimination. Neuronal activity was recorded from monkey AM and LHA during discrimination of sensory stimuli including the sight of food or nonfood. The task had four phases: control, visual, bar press, and ingestion. Of 710 AM neurons tested, 220 (31.0%) responded during the visual phase: 48 (6.8%) to visual stimulation only, 13 (1.9%) to visual plus oral sensory stimulation, 142 (20.0%) to multimodal stimulation, and 17 (2.4%) to one affectively significant item. Of 669 LHA neurons tested, 106 (15.8%) responded in the visual phase. Of 80 visual-related neurons tested systematically, 33 (41.2%) responded selectively to the sight of any object predicting the availability of reward, and 47 (58.8%) responded nondifferentially to both food and nonfood. Many AM neuron responses were graded according to the degree of affective significance of the sensory stimuli (sensory-affective association), but responses of LHA food-responsive neurons did not depend on the kind of reward indicated by the sensory stimuli (stimulus-reinforcement association). Some AM and LHA food responses were modulated by extinction or reversal. Dynamic information processing in the ITCx-AM-LHA axis was investigated via reversible deficits produced by cooling bilateral ITCx or AM. ITCx cooling suppressed discrimination by vision-responding AM neurons (8/17). AM cooling suppressed LHA responses to food (9/22). We suggest deep AM-LHA involvement in food-nonfood discrimination, based on AM sensory-affective association and LHA stimulus-reinforcement association.

  17. Reduced sensitivity for visual textures affects judgments of shape-from-shading and step-climbing behaviour in older adults.

    PubMed

    Schofield, Andrew J; Curzon-Jones, Benjamin; Hollands, Mark A

    2017-02-01

    Falls on stairs are a major hazard for older adults. Visual decline in normal ageing can affect step-climbing ability, altering gait and reducing toe clearance. Here we show that the age-related loss of fine-grained visual information can affect the perception of surface undulations in patterned surfaces. We go on to show that such cues affect the limb trajectories of young adults but, owing to their reduced sensitivity, not those of older adults. Interestingly, neither the perceived height of a step nor conscious awareness is altered by our visual manipulation, but stepping behaviour is, suggesting that the influence of shape perception on stepping behaviour operates via the unconscious, action-centred, dorsal visual pathway.

  18. The role of vision in auditory distance perception.

    PubMed

    Calcagno, Esteban R; Abregú, Ezequiel L; Eguía, Manuel C; Vergara, Ramiro

    2012-01-01

    In humans, multisensory interaction is an important strategy for improving the detection of stimuli of different kinds and reducing response variability. It is known that the presence of visual information affects auditory perception in the horizontal plane (azimuth), but few studies have examined the influence of vision on auditory distance perception. In general, the data from these studies are contradictory and do not completely define the way in which visual cues affect the apparent distance of a sound source. Here, psychophysical experiments on auditory distance perception in humans are performed, including and excluding visual cues. The results show that the apparent distance of the source is affected by the presence of visual information and that subjects can store in memory a representation of the environment that later improves the perception of distance.

  19. Combined visual illusion effects on the perceived index of difficulty and movement outcomes in discrete and continuous Fitts' tapping.

    PubMed

    Alphonsa, Sushma; Dai, Boyi; Benham-Deal, Tami; Zhu, Qin

    2016-01-01

    The speed-accuracy trade-off is a fundamental movement problem that has been extensively investigated. It has been established that the speed at which one can move to tap targets depends on how large the targets are and how far apart they are. These spatial properties of the targets can be quantified by the index of difficulty (ID). Two visual illusions are known to affect the perception of target size and movement amplitude: the Ebbinghaus illusion and the Müller-Lyer illusion. We created visual images that combined these two illusions to manipulate the perceived ID, and then examined people's visual perception of the targets in an illusory context as well as their performance in tapping those targets in both discrete and continuous manners. The findings revealed that the combined visual illusions affected the perceived ID similarly in both the discrete and continuous judgment conditions. However, the movement outcomes were affected by the combined visual illusions according to the tapping mode. In discrete tapping, the combined visual illusions affected both movement accuracy and movement amplitude, such that the effective ID resembled the perceived ID. In continuous tapping, none of the movement outcomes was affected by the combined visual illusions; participants tapped the targets with higher speed and accuracy in all visual conditions. Based on these findings, we conclude that distinct visual-motor control mechanisms are responsible for the execution of discrete and continuous Fitts' tapping: whereas discrete tapping relies on allocentric (object-centered) information to plan the action, continuous tapping relies on egocentric (self-centered) information to control it. The planning-control model for rapid aiming movements is supported.
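    The index of difficulty (ID) invoked above is standardly computed from movement amplitude A and target width W; in Fitts' original formulation, ID = log2(2A/W) bits. A minimal sketch (the amplitude and width values are illustrative, not the study's layout):

```python
import math

def index_of_difficulty(amplitude, width):
    """Fitts' index of difficulty in bits: ID = log2(2A / W)."""
    return math.log2(2.0 * amplitude / width)

# Illusions that inflate the perceived target width (Ebbinghaus)
# or alter the perceived amplitude (Muller-Lyer) change the
# perceived ID without changing the physical layout.
print(index_of_difficulty(amplitude=200, width=25))  # 4.0
```

    Doubling the target width at the same distance subtracts exactly one bit from the ID, which is why size illusions can shift the perceived ID even when the physical targets are unchanged.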

  20. The influence of selective attention to auditory and visual speech on the integration of audiovisual speech information.

    PubMed

    Buchan, Julie N; Munhall, Kevin G

    2011-01-01

    Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.

  21. The influence of visual speech information on the intelligibility of English consonants produced by non-native speakers.

    PubMed

    Kawase, Saya; Hannah, Beverly; Wang, Yue

    2014-09-01

    This study examines how visual speech information affects native judgments of the intelligibility of speech sounds produced by non-native (L2) speakers. Native Canadian English perceivers judged three English phonemic contrasts (/b-v, θ-s, l-ɹ/) produced by native Japanese speakers, with productions by native Canadian English speakers as controls. These stimuli were presented under audio-visual (AV, with speaker voice and face), audio-only (AO), and visual-only (VO) conditions. The results showed that, across conditions, the overall intelligibility of Japanese productions of the native (Japanese)-like phonemes (/b, s, l/) was significantly higher than that of the non-Japanese phonemes (/v, θ, ɹ/). In terms of visual effects, the more visually salient non-Japanese phonemes /v, θ/ were perceived as significantly more intelligible in the AV than in the AO condition, indicating enhanced intelligibility when visual speech information is available. However, the non-Japanese phoneme /ɹ/ was perceived as less intelligible in the AV than in the AO condition. Further analysis revealed that, unlike the native English productions, the Japanese speakers produced /ɹ/ without visible lip-rounding, indicating that non-native speakers' incorrect articulatory configurations may decrease intelligibility. These results suggest that visual speech information may either positively or negatively affect L2 speech intelligibility.

  22. Social Media Interruption Affects the Acquisition of Visually, Not Aurally, Acquired Information during a Pathophysiology Lecture

    ERIC Educational Resources Information Center

    Marone, Jane R.; Thakkar, Shivam C.; Suliman, Neveen; O'Neill, Shannon I.; Doubleday, Alison F.

    2018-01-01

    Poor academic performance from extensive social media usage appears to be due to students' inability to multitask between distractions and academic work. However, the degree to which visually distracted students can acquire lecture information presented aurally is unknown. This study examined the ability of students visually distracted by social…

  23. Social media interruption affects the acquisition of visually, not aurally, acquired information during a pathophysiology lecture.

    PubMed

    Marone, Jane R; Thakkar, Shivam C; Suliman, Neveen; O'Neill, Shannon I; Doubleday, Alison F

    2018-06-01

    Poor academic performance from extensive social media usage appears to be due to students' inability to multitask between distractions and academic work. However, the degree to which visually distracted students can acquire lecture information presented aurally is unknown. This study examined the ability of students visually distracted by social media to acquire information presented during a voice-over PowerPoint lecture, and to compare performance on examination questions derived from information presented aurally vs. that presented visually. Students (n = 20) listened to a 42-min cardiovascular pathophysiology lecture containing embedded cartoons while taking notes. The experimental group (n = 10) was visually, but not aurally, distracted by social media during times when cartoon information was presented, ~40% of total lecture time. Overall performance among distracted students on a follow-up, open-note quiz was 30% poorer than that for controls (P < 0.001). When the modality of presentation (visual vs. aural) was compared, performance decreased on examination questions from information presented visually. However, performance on questions from information presented aurally was similar to that of controls. Our findings suggest the ability to acquire information during lecture may vary, depending on the degree of competition between the modalities of the distraction and the lecture presentation. Within the context of current literature, our findings also suggest that timing of the distraction relative to delivery of the material examined affects performance more than total distraction time. Therefore, when delivering lectures, instructors should incorporate organizational cues and active learning strategies that assist students in maintaining focus and acquiring relevant information.

  4. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    PubMed Central

    Wilson, Amanda H.; Paré, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method We presented vowel–consonant–vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent conditions (Experiment 1; N = 66). In Experiment 2 (N = 20), participants performed a visual-only speech perception task and in Experiment 3 (N = 20) an audiovisual task while having their gaze behavior monitored using eye-tracking equipment. Results In the visual-only condition, increasing image resolution led to monotonic increases in performance, and proficient speechreaders were more affected by the removal of high spatial frequency information than were poor speechreaders. The McGurk effect also increased with increasing visual resolution, although it was less affected by the removal of high-frequency information. Observers tended to fixate on the mouth more in visual-only perception, but gaze toward the mouth did not correlate with accuracy of silent speechreading or the magnitude of the McGurk effect. Conclusions The results suggest that individual differences in silent speechreading and the McGurk effect are not related. This conclusion is supported by differential influences of high-resolution visual information on the 2 tasks and differences in the pattern of gaze. PMID:27537379

  5. Strongly-motivated positive affects induce faster responses to local than global information of visual stimuli: an approach using large-size Navon letters.

    PubMed

    Noguchi, Yasuki; Tomoike, Kouta

    2016-01-12

    Recent studies argue that strongly-motivated positive emotions (e.g. desire) narrow the scope of attention. This argument is mainly based on an observation that, while humans normally respond faster to global than local information of a visual stimulus (global advantage), positive affects eliminated the global advantage by selectively speeding responses to local (but not global) information. In other words, narrowing of attentional scope was indirectly evidenced by the elimination of global advantage (the same speed of processing between global and local information). No study has directly shown that strongly-motivated positive affects induce faster responses to local than global information while excluding a bias for global information (global advantage) in a baseline (emotionally-neutral) condition. In the present study, we addressed this issue by eliminating the global advantage in a baseline (neutral) state. Induction of positive affects under this state resulted in faster responses to local than global information. Our results provided direct evidence that positive affects in high motivational intensity narrow the scope of attention.

  6. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    PubMed

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. 
Visual working memory allows for maintaining such visual information in the mind's eye after termination of its retinal input. It is hypothesized that information maintained in visual working memory relies on the same neural populations that process visual input. Accordingly, the content of visual working memory is known to affect our conscious perception of concurrent visual input. Here, we demonstrate for the first time that visual input elicits an enhanced neural response when it matches the content of visual working memory, both in terms of signal strength and information content.

  7. Sensitivity to timing and order in human visual cortex

    PubMed Central

    Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.

    2014-01-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116

  8. How Information Visualization Systems Change Users' Understandings of Complex Data

    ERIC Educational Resources Information Center

    Allendoerfer, Kenneth Robert

    2009-01-01

    User-centered evaluations of information systems often focus on the usability of the system rather than its usefulness. This study examined how using an interactive knowledge-domain visualization (KDV) system affected users' understanding of a domain. Interactive KDVs allow users to create graphical representations of domains that depict important…

  9. How Visual Displays Affect Cognitive Processing

    ERIC Educational Resources Information Center

    McCrudden, Matthew T.; Rapp, David N.

    2017-01-01

    We regularly consult and construct visual displays that are intended to communicate important information. The power of these displays and the instructional messages we attempt to comprehend when using them emerge from the information included in the display and from its spatial arrangement. In this article, we identify common types of visual…

  10. Learning Building Layouts with Non-geometric Visual Information: The Effects of Visual Impairment and Age

    PubMed Central

    Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.

    2009-01-01

    Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants: younger (< 50 years) normally sighted, older (50–70 years) normally sighted, and low vision (people with heterogeneous forms of visual impairment, ranging in age from 18 to 67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally-sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732

  11. Cogito ergo video: Task-relevant information is involuntarily boosted into awareness.

    PubMed

    Gayet, Surya; Brascamp, Jan W; Van der Stigchel, Stefan; Paffen, Chris L E

    2015-01-01

    Only part of the visual information that impinges on our retinae reaches visual awareness. In a series of three experiments, we investigated how the task relevance of incoming visual information affects its access to visual awareness. On each trial, participants were instructed to memorize one of two presented hues, drawn from different color categories (e.g., red and green), for later recall. During the retention interval, participants were presented with a differently colored grating in each eye such as to elicit binocular rivalry. A grating matched either the task-relevant (memorized) color category or the task-irrelevant (nonmemorized) color category. We found that the rivalrous stimulus that matched the task-relevant color category tended to dominate awareness over the rivalrous stimulus that matched the task-irrelevant color category. This effect of task relevance persisted when participants reported the orientation of the rivalrous stimuli, even though in this case color information was completely irrelevant for the task of reporting perceptual dominance during rivalry. When participants memorized the shape of a colored stimulus, however, its color category did not affect predominance of rivalrous stimuli during retention. Taken together, these results indicate that the selection of task-relevant information is under volitional control but that visual input that matches this information is boosted into awareness irrespective of whether this is useful for the observer.

  12. Sensory Mode and "Information Load": Examining the Effects of Timing on Multisensory Processing.

    ERIC Educational Resources Information Center

    Tiene, Drew

    2000-01-01

    Discussion of the development of instructional multimedia materials focuses on a study of undergraduates that examined how the use of visual icons affected learning, differences in the instructional effectiveness of visual versus auditory processing of the same information, and timing (whether simultaneous or sequential presentation is more…

  13. Visualization rhetoric: framing effects in narrative visualization.

    PubMed

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

    Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels (the data, visual representation, textual annotations, and interactivity) and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation.

  14. Influence of time restriction, 20 minutes and 94.6 months, of visual information on angular displacement during the sit-to-stand (STS) task in three planes.

    PubMed

    Aylar, Mozhgan Faraji; Firouzi, Faramarz; Araghi, Mandana Rahnama

    2016-12-01

    [Purpose] The purpose of this investigation was to assess whether or not restriction of visual information influences the kinematics of sit-to-stand (STS) performance in children. [Subjects and Methods] Five girls with congenital blindness (CB) and ten healthy girls with no visual impairments were randomly selected. The girls with congenital blindness were placed in one group and the ten girls with no visual impairments were divided into two groups of five, control and treatment groups. The participants in the treatment group were asked to close their eyes (EC) for 20 minutes before the STS test, whereas those in the control group kept their eyes open (EO). The performance of the participants in all three groups was measured using a motion capture system and two force plates. [Results] The results show that the duration of the constraint on visual sensory information affected the range of motion (ROM), the excursion of the dominant-side ankle, and the ROM of the dominant-side knee in the EC group. However, only ankle excursion on the non-dominant side was affected in the CB group, and this was only observed in the sagittal plane. [Conclusion] These results indicate that visual memory does not affect the joint angles in the frontal and transverse planes. Moreover, all of the participants could perform the STS transition without falling, indicating that the participants performed the STS maneuver correctly in all planes except the sagittal one.

  15. Attention affects visual perceptual processing near the hand.

    PubMed

    Cosman, Joshua D; Vecera, Shaun P

    2010-09-01

    Specialized, bimodal neural systems integrate visual and tactile information in the space near the hand. Here, we show that visuo-tactile representations allow attention to influence early perceptual processing, namely, figure-ground assignment. Regions that were reached toward were more likely than other regions to be assigned as foreground figures, and hand position competed with image-based information to bias figure-ground assignment. Our findings suggest that hand position allows attention to influence visual perceptual processing and that visual processes typically viewed as unimodal can be influenced by bimodal visuo-tactile representations.

  16. A Graph Based Interface for Representing Volume Visualization Results

    NASA Technical Reports Server (NTRS)

    Patten, James M.; Ma, Kwan-Liu

    1998-01-01

    This paper discusses a graph based user interface for representing the results of the volume visualization process. As images are rendered, they are connected to other images in a graph based on their rendering parameters. The user can take advantage of the information in this graph to understand how certain rendering parameter changes affect a dataset, making the visualization process more efficient. Because the graph contains more information than is contained in an unstructured history of images, the image graph is also helpful for collaborative visualization and animation.

  17. The effect of four user interface concepts on visual scan pattern similarity and information foraging in a complex decision making task.

    PubMed

    Starke, Sandra D; Baber, Chris

    2018-07-01

    User interface (UI) design can affect the quality of decision making, where decisions based on digitally presented content are commonly informed by visually sampling information through eye movements. Analysis of the resulting scan patterns - the order in which people visually attend to different regions of interest (ROIs) - gives an insight into information foraging strategies. In this study, we quantified scan pattern characteristics for participants engaging with conceptually different user interface designs. Four interfaces were modified along two dimensions relating to effort in accessing information: data presentation (either alpha-numerical data or colour blocks), and information access time (all information sources readily available or sequential revealing of information required). The aim of the study was to investigate whether a) people develop repeatable scan patterns and b) different UI concepts affect information foraging and task performance. Thirty-two participants (eight for each UI concept) were given the task to correctly classify 100 credit card transactions as normal or fraudulent based on nine transaction attributes. Attributes varied in their usefulness of predicting the correct outcome. Conventional and more recent (network analysis- and bioinformatics-based) eye tracking metrics were used to quantify visual search. Empirical findings were evaluated in context of random data and possible accuracy for theoretical decision making strategies. Results showed short repeating sequence fragments within longer scan patterns across participants and conditions, comprising a systematic and a random search component. The UI design concept showing alpha-numerical data in full view resulted in most complete data foraging, while the design concept showing colour blocks in full view resulted in the fastest task completion time. Decision accuracy was not significantly affected by UI design. 
Theoretical calculations showed that the difference in achievable accuracy between very complex and simple decision making strategies was small. We conclude that goal-directed search of familiar information results in repeatable scan pattern fragments (often corresponding to information sources considered particularly important), but no repeatable complete scan pattern. The underlying concept of the UI affects how visual search is performed and how a decision-making strategy develops. This should be taken into consideration when designing for applied domains.
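
    The "bioinformatics-based eye tracking metrics" this record mentions typically treat a scan pattern as a string of ROI labels and compare strings by sequence alignment. As an illustration only (this is not the authors' actual pipeline, and the function names are mine), a minimal Python sketch of one common metric, the Levenshtein edit distance between two ROI-label sequences, normalised to a similarity score:

```python
def levenshtein(a, b):
    """Dynamic-programming edit distance between two ROI-label sequences."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))  # distances for the empty prefix of a
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution / match
        prev = curr
    return prev[n]

def scan_similarity(a, b):
    """Normalised similarity in [0, 1]; 1 means identical scan patterns."""
    return 1 - levenshtein(a, b) / max(len(a), len(b), 1)
```

    Two identical ROI sequences score 1, while sequences sharing no aligned ROI visits score 0; repeated sequence fragments of the kind the study reports show up as low-cost alignments between participants.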

  18. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  19. The informativity of sound modulates crossmodal facilitation of visual discrimination: an fMRI study.

    PubMed

    Li, Qi; Yu, Hongtao; Li, Xiujun; Sun, Hongzan; Yang, Jingjing; Li, Chunlin

    2017-01-18

    Many studies have investigated behavioral crossmodal facilitation when a visual stimulus is accompanied by a concurrent task-irrelevant sound. Lippert and colleagues reported that a concurrent task-irrelevant sound reduced the uncertainty of the timing of the visual display and improved perceptional responses (informative sound). However, the neural mechanism by which the informativity of sound affected crossmodal facilitation of visual discrimination remained unclear. In this study, we used event-related functional MRI to investigate the neural mechanisms underlying the role of informativity of sound in crossmodal facilitation of visual discrimination. Significantly faster reaction times were observed when there was an informative relationship between auditory and visual stimuli. The functional MRI results showed sound informativity-induced activation enhancement in the left fusiform gyrus and the right lateral occipital complex. Further correlation analysis showed that the right lateral occipital complex was significantly correlated with the behavioral benefit in reaction times. This suggests that this region was modulated by the informative relationship within audiovisual stimuli that was learnt during the experiment, resulting in late-stage multisensory integration and enhanced behavioral responses.

  20. Preprocessing of emotional visual information in the human piriform cortex.

    PubMed

    Schulze, Patrick; Bestgen, Anne-Kathrin; Lech, Robert K; Kuchinke, Lars; Suchan, Boris

    2017-08-23

    This study examines the processing of visual information by the olfactory system in humans. Recent data point to the processing of visual stimuli by the piriform cortex, a region mainly known as part of the primary olfactory cortex. Moreover, the piriform cortex generates predictive templates of olfactory stimuli to facilitate olfactory processing. This study addresses the question of whether this region is also capable of preprocessing emotional visual information. To gain insight into the preprocessing and transfer of emotional visual information into olfactory processing, we recorded hemodynamic responses during affective priming using functional magnetic resonance imaging (fMRI). Odors of different valence (pleasant, neutral and unpleasant) were primed by images of emotional facial expressions (happy, neutral and disgust). Our findings are the first to demonstrate that the piriform cortex preprocesses emotional visual information prior to any olfactory stimulation and that the emotional connotation of this preprocessing is subsequently transferred and integrated into an extended olfactory network for olfactory processing.

  1. Sensitivity to timing and order in human visual cortex.

    PubMed

    Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2015-03-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds.

  2. Audiovisual integration of emotional signals from music improvisation does not depend on temporal correspondence.

    PubMed

    Petrini, Karin; McAleer, Phil; Pollick, Frank

    2010-04-06

    In the present study we applied a paradigm often used in face-voice affect perception to solo music improvisation to examine how the emotional valence of sound and gesture are integrated when perceiving an emotion. Three brief excerpts expressing emotion produced by a drummer and three by a saxophonist were selected. From these bimodal congruent displays the audio-only, visual-only, and audiovisually incongruent conditions (obtained by combining the two signals both within and between instruments) were derived. In Experiment 1 twenty musical novices judged the perceived emotion and rated the strength of each emotion. The results indicate that sound dominated the visual signal in the perception of affective expression, though this was more evident for the saxophone. In Experiment 2 a further sixteen musical novices were asked to either pay attention to the musicians' movements or to the sound when judging the perceived emotions. The results showed no effect of visual information when judging the sound. On the contrary, when judging the emotional content of the visual information, a worsening in performance was obtained for the incongruent condition that combined different emotional auditory and visual information for the same instrument. The effect of emotionally discordant information thus became evident only when the auditory and visual signals belonged to the same categorical event despite their temporal mismatch. This suggests that the integration of emotional information may be reinforced by its semantic attributes but might be independent from temporal features.

  3. Visually guided locomotion and computation of time-to-collision in the mongolian gerbil (Meriones unguiculatus): the effects of frontal and visual cortical lesions.

    PubMed

    Shankar, S; Ellard, C

    2000-02-01

    Past research has indicated that many species use the time-to-collision variable but little is known about its neural underpinnings in rodents. In a set of three experiments we set out to replicate and extend the findings of Sun et al. (Sun H-J, Carey DP, Goodale MA. Exp Brain Res 1992;91:171-175) in a visually guided task in Mongolian gerbils, and then investigated the effects of lesions to different cortical areas. We trained Mongolian gerbils to run in the dark toward a target on a computer screen. In some trials the target changed in size as the animal ran toward it in such a way as to produce 'virtual targets' if the animals were using time-to-collision or contact information. In experiment 1 we confirmed that gerbils use time-to-contact information to modulate their speed of running toward a target. In experiment 2 we established that visual cortex lesions attenuate the ability of lesioned animals to use information from the visual target to guide their run, while frontal cortex lesioned animals are not as severely affected. In experiment 3 we found that small radio-frequency lesions of either area V1 or the lateral extrastriate regions of the visual cortex also affected the use of information from the target to modulate locomotion.

  4. How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation.

    PubMed

    Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul

    2016-02-01

    The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
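
    The "simple view matching strategies for orientation" this record simulates are commonly implemented as a rotational image difference function: the current panorama is compared with a stored snapshot at every azimuthal rotation, and the rotation with the smallest mismatch gives the heading estimate. A minimal sketch under that assumption (array shapes and function names are illustrative, not taken from the paper):

```python
import numpy as np

def rotational_image_difference(stored, current):
    """RMS pixel difference between the stored snapshot and the current
    panorama, evaluated at every azimuthal (column) shift."""
    h, w = stored.shape
    diffs = np.empty(w)
    for shift in range(w):
        rotated = np.roll(current, shift, axis=1)
        diffs[shift] = np.sqrt(np.mean((rotated - stored) ** 2))
    return diffs

def best_heading(stored, current):
    """Column shift (i.e. azimuthal rotation) that minimises the mismatch."""
    return int(np.argmin(rotational_image_difference(stored, current)))
```

    Downsampling both panoramas before comparison smooths the difference landscape, which is one way to read the paper's finding that low resolution combined with a wide field of view supports robust orientation; splitting the columns into several independent sensors, each with its own argmin, corresponds to the multi-sensor variant the authors describe.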

  5. Object representations in visual memory: evidence from visual illusions.

    PubMed

    Ben-Shalom, Asaf; Ganel, Tzvi

    2012-07-26

    Human visual memory is considered to contain different levels of object representations. Representations in visual working memory (VWM) are thought to contain relatively elaborated information about object structure. Conversely, representations in iconic memory are thought to be more perceptual in nature. In four experiments, we tested the effects of two different categories of visual illusions on representations in VWM and in iconic memory. Unlike VWM that was affected by both types of illusions, iconic memory was immune to the effects of within-object contextual illusions and was affected only by illusions driven by between-objects contextual properties. These results show that iconic and visual working memory contain dissociable representations of object shape. These findings suggest that the global properties of the visual scene are processed prior to the processing of specific elements.

  6. Simulated visual field loss does not alter turning coordination in healthy young adults.

    PubMed

    Murray, Nicholas G; Ponce de Leon, Marlina; Ambati, V N Pradeep; Saucedo, Fabricio; Kennedy, Evan; Reed-Jones, Rebecca J

    2014-01-01

    Turning, while walking, is an important component of adaptive locomotion. Current hypotheses regarding the motor control of body segment coordination during turning suggest heavy influence of visual information. The authors aimed to examine whether visual field impairment (central loss or peripheral loss) affects body segment coordination during walking turns in healthy young adults. No significant differences in the onset time of segments or intersegment coordination were observed because of visual field occlusion. These results suggest that healthy young adults can use visual information obtained from central and peripheral visual fields interchangeably, pointing to flexibility of visuomotor control in healthy young adults. Further studies in populations with chronic visual impairment and in those with turning difficulties are warranted.

  7. Attention Increases Spike Count Correlations between Visual Cortical Areas.

    PubMed

    Ruff, Douglas A; Cohen, Marlene R

    2016-07-13

    Visual attention, which improves perception of attended locations or objects, has long been known to affect many aspects of the responses of neuronal populations in visual cortex. There are two nonmutually exclusive hypotheses concerning the neuronal mechanisms that underlie these perceptual improvements. The first hypothesis, that attention improves the information encoded by a population of neurons in a particular cortical area, has considerable physiological support. The second hypothesis is that attention improves perception by selectively communicating relevant visual information. This idea has been tested primarily by measuring interactions between neurons on very short timescales, which are mathematically nearly independent of neuronal interactions on longer timescales. We tested the hypothesis that attention changes the way visual information is communicated between cortical areas on longer timescales by recording simultaneously from neurons in primary visual cortex (V1) and the middle temporal area (MT) in rhesus monkeys. We used two independent and complementary approaches. Our correlative experiment showed that attention increases the trial-to-trial response variability that is shared between the two areas. In our causal experiment, we electrically microstimulated V1 and found that attention increased the effect of stimulation on MT responses. Together, our results suggest that attention affects both the way visual stimuli are encoded within a cortical area and the extent to which visual information is communicated between areas on behaviorally relevant timescales. Visual attention dramatically improves the perception of attended stimuli. Attention has long been thought to act by selecting relevant visual information for further processing. It has been hypothesized that this selection is accomplished by increasing communication between neurons that encode attended information in different cortical areas. 
We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys performed an attention task. We found that attention increased shared variability between neurons in the two areas and that attention increased the effect of microstimulation in V1 on the firing rates of MT neurons. Our results provide support for the hypothesis that attention increases communication between neurons in different brain areas on behaviorally relevant timescales. Copyright © 2016 the authors 0270-6474/16/367523-12$15.00/0.

  8. Attention Increases Spike Count Correlations between Visual Cortical Areas

    PubMed Central

    Cohen, Marlene R.

    2016-01-01

    Visual attention, which improves perception of attended locations or objects, has long been known to affect many aspects of the responses of neuronal populations in visual cortex. There are two nonmutually exclusive hypotheses concerning the neuronal mechanisms that underlie these perceptual improvements. The first hypothesis, that attention improves the information encoded by a population of neurons in a particular cortical area, has considerable physiological support. The second hypothesis is that attention improves perception by selectively communicating relevant visual information. This idea has been tested primarily by measuring interactions between neurons on very short timescales, which are mathematically nearly independent of neuronal interactions on longer timescales. We tested the hypothesis that attention changes the way visual information is communicated between cortical areas on longer timescales by recording simultaneously from neurons in primary visual cortex (V1) and the middle temporal area (MT) in rhesus monkeys. We used two independent and complementary approaches. Our correlative experiment showed that attention increases the trial-to-trial response variability that is shared between the two areas. In our causal experiment, we electrically microstimulated V1 and found that attention increased the effect of stimulation on MT responses. Together, our results suggest that attention affects both the way visual stimuli are encoded within a cortical area and the extent to which visual information is communicated between areas on behaviorally relevant timescales. SIGNIFICANCE STATEMENT Visual attention dramatically improves the perception of attended stimuli. Attention has long been thought to act by selecting relevant visual information for further processing. It has been hypothesized that this selection is accomplished by increasing communication between neurons that encode attended information in different cortical areas. 
We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys performed an attention task. We found that attention increased shared variability between neurons in the two areas and that attention increased the effect of microstimulation in V1 on the firing rates of MT neurons. Our results provide support for the hypothesis that attention increases communication between neurons in different brain areas on behaviorally relevant timescales. PMID:27413161
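The central quantity in these two records, the trial-to-trial spike count correlation between neurons in different areas, is a Pearson correlation computed over repeated presentations of the same stimulus. A minimal sketch, using synthetic spike counts driven by a shared gain fluctuation (all data and parameter values below are illustrative assumptions, not taken from the study):

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic spike counts: a gain fluctuation shared across areas induces
# correlated trial-to-trial variability between a V1 and an MT neuron.
rng = random.Random(0)
gain = [rng.gauss(1.0, 0.2) for _ in range(200)]          # shared noise source
v1_counts = [g * 20 + rng.gauss(0, 2) for g in gain]      # V1 counts per trial
mt_counts = [g * 15 + rng.gauss(0, 2) for g in gain]      # MT counts per trial
r = pearson(v1_counts, mt_counts)                         # spike count correlation
```

Because the gain term is shared between the two simulated neurons, the resulting correlation is positive; independent noise alone would drive it toward zero.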

  9. Dynamic visual noise reduces confidence in short-term memory for visual information.

    PubMed

    Kemps, Eva; Andrade, Jackie

    2012-05-01

Previous research has shown effects of the visual interference technique dynamic visual noise (DVN) on visual imagery, but not on visual short-term memory unless retention of precise visual detail is required. This study tested the prediction that DVN also affects retention of gross visual information, specifically by reducing confidence. Participants performed a matrix pattern memory task with three retention-interval interference conditions (DVN, static visual noise and a no-interference control) that varied from trial to trial. At recall, participants indicated whether or not they were sure of their responses. As in previous research, DVN did not impair recall accuracy or latency on the task, but it did reduce recall confidence relative to static visual noise and no interference. We conclude that DVN does distort visual representations in short-term memory, but standard coarse-grained recall measures are insensitive to these distortions.
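Dynamic visual noise is typically implemented as a grid of black and white pixels in which a random subset inverts on every frame. A minimal sketch with illustrative grid size and flip rate (the study's actual display parameters are not given here):

```python
import random

def dvn_frames(width=80, height=80, n_frames=10, flip_rate=0.01, seed=1):
    """Dynamic visual noise: start from a random black/white pixel grid and
    invert a random subset of pixels on every frame."""
    rng = random.Random(seed)
    frame = [[rng.randint(0, 1) for _ in range(width)] for _ in range(height)]
    frames = []
    for _ in range(n_frames):
        for row in frame:
            for x in range(width):
                if rng.random() < flip_rate:
                    row[x] ^= 1                    # invert this pixel
        frames.append([row[:] for row in frame])   # snapshot the frame
    return frames

frames = dvn_frames()
```

Each successive frame differs from its predecessor in only a small fraction of pixels, producing the continuous "twinkling" display used as visual interference.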

  10. Visual Working Memory Capacity for Objects from Different Categories: A Face-Specific Maintenance Effect

    ERIC Educational Resources Information Center

    Wong, Jason H.; Peterson, Matthew S.; Thompson, James C.

    2008-01-01

    The capacity of visual working memory was examined when complex objects from different categories were remembered. Previous studies have not examined how visual similarity affects object memory, though it has long been known that similar-sounding phonological information interferes with rehearsal in auditory working memory. Here, experiments…

  11. Affective and physiological correlates of the perception of unimodal and bimodal emotional stimuli.

    PubMed

    Rosa, Pedro J; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Gamito, Pedro

    2017-08-01

Despite the multisensory nature of perception, previous research on emotions has focused on unimodal emotional cues with visual stimuli. To the best of our knowledge, there is no evidence on the extent to which incongruent emotional cues from visual and auditory sensory channels affect pupil size. The aims of this study were to investigate the effects of perceiving audiovisual emotional information on physiological and affective responses, and to determine the impact of mismatched emotional cues on these physiological indexes. Pupil size, electrodermal activity and affective subjective responses were recorded while 30 participants were exposed to visual and auditory stimuli with varied emotional content in three experimental conditions: pictures and sounds presented alone (unimodal), emotionally matched audio-visual stimuli (bimodal congruent) and emotionally mismatched audio-visual stimuli (bimodal incongruent). The data revealed no effect of emotional incongruence on physiological or affective responses. On the other hand, pupil size covaried with the skin conductance response (SCR), but the subjective experience was partially dissociated from autonomic responses. Emotional stimuli are able to trigger physiological responses regardless of valence, sensory modality or level of emotional congruence.

  12. Effects of emotional tone and visual complexity on processing health information in prescription drug advertising.

    PubMed

    Norris, Rebecca L; Bailey, Rachel L; Bolls, Paul D; Wise, Kevin R

    2012-01-01

    This experiment explored how the emotional tone and visual complexity of direct-to-consumer (DTC) drug advertisements affect the encoding and storage of specific risk and benefit statements about each of the drugs in question. Results are interpreted under the limited capacity model of motivated mediated message processing framework. Findings suggest that DTC drug ads should be pleasantly toned and high in visual complexity in order to maximize encoding and storage of risk and benefit information.

  13. Flight control and landing precision in the nocturnal bee Megalopta is robust to large changes in light intensity.

    PubMed

    Baird, Emily; Fernandez, Diana C; Wcislo, William T; Warrant, Eric J

    2015-01-01

Like their diurnal relatives, Megalopta genalis use visual information to control flight. Unlike their diurnal relatives, however, they do this at extremely low light intensities. Although Megalopta has developed optical specializations to increase visual sensitivity, theoretical studies suggest that this enhanced sensitivity does not enable them to capture enough light to use visual information to reliably control flight in the rainforest at night. It has been proposed that Megalopta gain extra sensitivity by summing visual information over time. While enhancing the reliability of vision, this strategy would decrease the accuracy with which they can detect image motion, a crucial cue for flight control. Here, we test this temporal summation hypothesis by investigating how Megalopta's flight control and landing precision are affected by light intensity, and by comparing our findings with the results of similar experiments performed on the diurnal bumblebee Bombus terrestris, to explore the extent to which Megalopta's adaptations to dim light affect their precision. We find that, unlike in Bombus, light intensity does not affect flight and landing precision in Megalopta. Overall, we find little evidence that Megalopta uses a temporal summation strategy in dim light, while we find strong support for the use of this strategy in Bombus.

  14. Flight control and landing precision in the nocturnal bee Megalopta is robust to large changes in light intensity

    PubMed Central

    Baird, Emily; Fernandez, Diana C.; Wcislo, William T.; Warrant, Eric J.

    2015-01-01

Like their diurnal relatives, Megalopta genalis use visual information to control flight. Unlike their diurnal relatives, however, they do this at extremely low light intensities. Although Megalopta has developed optical specializations to increase visual sensitivity, theoretical studies suggest that this enhanced sensitivity does not enable them to capture enough light to use visual information to reliably control flight in the rainforest at night. It has been proposed that Megalopta gain extra sensitivity by summing visual information over time. While enhancing the reliability of vision, this strategy would decrease the accuracy with which they can detect image motion, a crucial cue for flight control. Here, we test this temporal summation hypothesis by investigating how Megalopta's flight control and landing precision are affected by light intensity, and by comparing our findings with the results of similar experiments performed on the diurnal bumblebee Bombus terrestris, to explore the extent to which Megalopta's adaptations to dim light affect their precision. We find that, unlike in Bombus, light intensity does not affect flight and landing precision in Megalopta. Overall, we find little evidence that Megalopta uses a temporal summation strategy in dim light, while we find strong support for the use of this strategy in Bombus. PMID:26578977

  15. Photo-sharing social media for eHealth: analysing perceived message effectiveness of sexual health information on Instagram.

    PubMed

    O'Donnell, Nicole Hummel; Willoughby, Jessica Fitts

    2017-10-01

    Health professionals increasingly use social media to communicate health information, but it is unknown how visual message presentation on these platforms affects message reception. This study used an experiment to analyse how young adults (n = 839) perceive sexual health messages on Instagram. Participants were exposed to one of four conditions based on visual message presentation. Messages with embedded health content had the highest perceived message effectiveness ratings. Additionally, message sensation value, attitudes and systematic information processing were significant predictors of perceived message effectiveness. Implications for visual message design for electronic health are discussed.

  16. Visual/motion cue mismatch in a coordinated roll maneuver

    NASA Technical Reports Server (NTRS)

    Shirachi, D. K.; Shirley, R. S.

    1981-01-01

    The effects of bandwidth differences between visual and motion cueing systems on pilot performance for a coordinated roll task were investigated. Visual and motion cue configurations which were acceptable and the effects of reduced motion cue scaling on pilot performance were studied to determine the scale reduction threshold for which pilot performance was significantly different from full scale pilot performance. It is concluded that: (1) the presence or absence of high frequency error information in the visual and/or motion display systems significantly affects pilot performance; and (2) the attenuation of motion scaling while maintaining other display dynamic characteristics constant, affects pilot performance.

  17. Visual information underpinning skilled anticipation: The effect of blur on a coupled and uncoupled in situ anticipatory response.

    PubMed

    Mann, David L; Abernethy, Bruce; Farrow, Damian

    2010-07-01

    Coupled interceptive actions are understood to be the result of neural processing-and visual information-which is distinct from that used for uncoupled perceptual responses. To examine the visual information used for action and perception, skilled cricket batters anticipated the direction of balls bowled toward them using a coupled movement (an interceptive action that preserved the natural coupling between perception and action) or an uncoupled (verbal) response, in each of four different visual blur conditions (plano, +1.00, +2.00, +3.00). Coupled responses were found to be better than uncoupled ones, with the blurring of vision found to result in different effects for the coupled and uncoupled response conditions. Low levels of visual blur did not affect coupled anticipation, a finding consistent with the comparatively poorer visual information on which online interceptive actions are proposed to rely. In contrast, some evidence was found to suggest that low levels of blur may enhance the uncoupled verbal perception of movement.

  18. Speakers of different languages process the visual world differently.

    PubMed

    Chabal, Sarah; Marian, Viorica

    2015-06-01

Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct language input, showing that linguistic information is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. (c) 2015 APA, all rights reserved.

  19. Linking Cognitive and Visual Perceptual Decline in Healthy Aging: The Information Degradation Hypothesis

    PubMed Central

    Monge, Zachary A.; Madden, David J.

    2016-01-01

    Several hypotheses attempt to explain the relation between cognitive and perceptual decline in aging (e.g., common-cause, sensory deprivation, cognitive load on perception, information degradation). Unfortunately, the majority of past studies examining this association have used correlational analyses, not allowing for these hypotheses to be tested sufficiently. This correlational issue is especially relevant for the information degradation hypothesis, which states that degraded perceptual signal inputs, resulting from either age-related neurobiological processes (e.g., retinal degeneration) or experimental manipulations (e.g., reduced visual contrast), lead to errors in perceptual processing, which in turn may affect non-perceptual, higher-order cognitive processes. Even though the majority of studies examining the relation between age-related cognitive and perceptual decline have been correlational, we reviewed several studies demonstrating that visual manipulations affect both younger and older adults’ cognitive performance, supporting the information degradation hypothesis and contradicting implications of other hypotheses (e.g., common-cause, sensory deprivation, cognitive load on perception). The reviewed evidence indicates the necessity to further examine the information degradation hypothesis in order to identify mechanisms underlying age-related cognitive decline. PMID:27484869

  20. Guidance of visual attention by semantic information in real-world scenes

    PubMed Central

    Wu, Chia-Chien; Wick, Farahnaz Ahmed; Pomplun, Marc

    2014-01-01

    Recent research on attentional guidance in real-world scenes has focused on object recognition within the context of a scene. This approach has been valuable for determining some factors that drive the allocation of visual attention and determine visual selection. This article provides a review of experimental work on how different components of context, especially semantic information, affect attentional deployment. We review work from the areas of object recognition, scene perception, and visual search, highlighting recent studies examining semantic structure in real-world scenes. A better understanding on how humans parse scene representations will not only improve current models of visual attention but also advance next-generation computer vision systems and human-computer interfaces. PMID:24567724

  1. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

3D objects with depth information can provide many benefits to users in education, surgery, and interactions. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to presenting 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Involuntarily motivated selective attention driven by an affective mechanism can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as white-and-black oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it shows high information transfer rates and takes only a few minutes for users to learn to control the BCI system, while few electrodes are required to obtain brainwave signals reliable enough to capture users' intention. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.
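An SSVEP-based BCI of the kind described here typically identifies the attended stimulus by comparing EEG power at each candidate flicker frequency. A minimal sketch using a single-bin discrete Fourier transform on synthetic data (the sampling rate, candidate frequencies and signal below are illustrative assumptions, not the authors' system):

```python
import math

def dft_power(signal, freq_hz, fs):
    """Power of `signal` at `freq_hz`, via a single-bin discrete Fourier transform."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / fs) for i, s in enumerate(signal))
    return (re * re + im * im) / n

def classify_ssvep(signal, candidate_freqs, fs):
    """Pick the flicker frequency whose bin holds the most power."""
    return max(candidate_freqs, key=lambda f: dft_power(signal, f, fs))

# Synthetic 2 s EEG window at 250 Hz: the attended stimulus flickers at 12 Hz,
# with an additive 50 Hz interference component.
fs = 250
eeg = [math.sin(2 * math.pi * 12 * i / fs) + 0.3 * math.sin(2 * math.pi * 50 * i / fs)
       for i in range(2 * fs)]
picked = classify_ssvep(eeg, [8, 10, 12, 15], fs)
```

A 2-second window gives 0.5 Hz bin resolution, so each candidate frequency falls exactly on a bin and the attended frequency dominates the comparison.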

  2. Early visual ERPs are influenced by individual emotional skills

    PubMed Central

    Roux, Sylvie; Batty, Magali

    2014-01-01

    Processing information from faces is crucial to understanding others and to adapting to social life. Many studies have investigated responses to facial emotions to provide a better understanding of the processes and the neural networks involved. Moreover, several studies have revealed abnormalities of emotional face processing and their neural correlates in affective disorders. The aim of this study was to investigate whether early visual event-related potentials (ERPs) are affected by the emotional skills of healthy adults. Unfamiliar faces expressing the six basic emotions were presented to 28 young adults while recording visual ERPs. No specific task was required during the recording. Participants also completed the Social Skills Inventory (SSI) which measures social and emotional skills. The results confirmed that early visual ERPs (P1, N170) are affected by the emotions expressed by a face and also demonstrated that N170 and P2 are correlated to the emotional skills of healthy subjects. While N170 is sensitive to the subject’s emotional sensitivity and expressivity, P2 is modulated by the ability of the subjects to control their emotions. We therefore suggest that N170 and P2 could be used as individual markers to assess strengths and weaknesses in emotional areas and could provide information for further investigations of affective disorders. PMID:23720573

  3. Mate choice in the eye and ear of the beholder? Female multimodal sensory configuration influences her preferences.

    PubMed

    Ronald, Kelly L; Fernández-Juricic, Esteban; Lucas, Jeffrey R

    2018-05-16

A common assumption in sexual selection studies is that receivers decode signal information similarly. However, receivers may vary in how they rank signallers if signal perception varies with an individual's sensory configuration. Furthermore, receivers may vary in their weighting of different elements of multimodal signals based on their sensory configuration. This could lead to complex levels of selection on signalling traits. We tested whether multimodal sensory configuration could affect preferences for multimodal signals. We used brown-headed cowbird (Molothrus ater) females to examine how auditory sensitivity and auditory filters, which influence auditory spectral and temporal resolution, affect song preferences, and how visual spatial resolution and visual temporal resolution, which influence resolution of a moving visual signal, affect visual display preferences. Our results show that multimodal sensory configuration significantly affects preferences for male displays: females with better auditory temporal resolution preferred songs that were shorter, with lower Wiener entropy, and higher frequency; and females with better visual temporal resolution preferred males with less intense visual displays. Our findings provide new insights into mate-choice decisions and receiver signal processing. Furthermore, our results challenge a long-standing assumption in animal communication which can affect how we address honest signalling, assortative mating and sensory drive. © 2018 The Author(s).

  4. High visual resolution matters in audiovisual speech perception, but only for some.

    PubMed

    Alsius, Agnès; Wayne, Rachel V; Paré, Martin; Munhall, Kevin G

    2016-07-01

    The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.
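The low-pass spatial frequency filtering used to degrade the visual signal in this kind of study can be illustrated with a simple box filter on a one-dimensional luminance profile (a crude stand-in for the Gaussian filtering such studies typically use; all values below are illustrative):

```python
def low_pass(row, radius):
    """Box-filter low-pass: each pixel becomes the mean of its neighborhood,
    attenuating high spatial frequencies while preserving coarse structure."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

# A fine alternating pattern (high spatial frequency) is flattened...
fine = [0, 255] * 32
blurred_fine = low_pass(fine, 4)
# ...while a coarse step edge (low spatial frequency) largely survives.
coarse = [0] * 32 + [255] * 32
blurred_coarse = low_pass(coarse, 4)
```

The filter removes exactly the fine facial detail discussed above (the high-frequency pattern loses most of its contrast) while leaving coarse luminance structure, such as the step edge, nearly intact.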

  5. Semantic integration of differently asynchronous audio-visual information in videos of real-world events in cognitive processing: an ERP study.

    PubMed

    Liu, Baolin; Wu, Guangning; Wang, Zhongning; Ji, Xiang

    2011-07-01

In the real world, some of the auditory and visual information received by the human brain is temporally asynchronous. How is such information integrated in cognitive processing in the brain? In this paper, we aimed to study the semantic integration of differently asynchronous audio-visual information in cognitive processing using the ERP (event-related potential) method. Subjects were presented with videos of real-world events in which the auditory and visual information were temporally asynchronous. When the critical action occurred before the sound, sounds incongruous with the preceding critical actions elicited an N400 effect when compared to the congruous condition. This result demonstrates that the semantic contextual integration indexed by the N400 also applies to the cognitive processing of multisensory information. In addition, the N400 effect is early in latency when contrasted with other visually induced N400 studies, showing that cross-modal information is facilitated in time compared with visual information in isolation. When the sound occurred before the critical action, a larger late positive wave was observed under the incongruous condition compared to the congruous condition. The P600 might represent a reanalysis process in which the mismatch between the critical action and the preceding sound was evaluated, showing that environmental sound may affect the cognitive processing of a visual event. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
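The ERP comparison described here rests on averaging time-locked EEG epochs per condition and contrasting mean amplitudes in a latency window (roughly 300 to 500 ms for the N400). A minimal sketch on synthetic data (the sampling rate, window and effect size are illustrative assumptions, not the study's parameters):

```python
import random

def erp_average(trials):
    """Average time-locked EEG epochs into a single ERP waveform."""
    n = len(trials)
    return [sum(trial[i] for trial in trials) / n for i in range(len(trials[0]))]

def mean_amplitude(erp, start_ms, end_ms, fs):
    """Mean ERP amplitude within a latency window."""
    i0, i1 = int(start_ms * fs / 1000), int(end_ms * fs / 1000)
    window = erp[i0:i1]
    return sum(window) / len(window)

# Synthetic 1 s epochs at 250 Hz: incongruous trials carry an extra negative
# deflection between 300 and 500 ms (samples 75-124), mimicking an N400.
rng = random.Random(2)
fs = 250

def make_trial(n400_uv):
    return [rng.gauss(0, 1) + (n400_uv if 75 <= i < 125 else 0.0) for i in range(fs)]

congruous = [make_trial(0.0) for _ in range(40)]
incongruous = [make_trial(-4.0) for _ in range(40)]
n400_effect = (mean_amplitude(erp_average(incongruous), 300, 500, fs)
               - mean_amplitude(erp_average(congruous), 300, 500, fs))
```

Averaging across trials suppresses the single-trial noise, so the condition difference in the 300-500 ms window recovers the simulated negative deflection.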

  6. Effects of Visual Information on Wind-Evoked Escape Behavior of the Cricket, Gryllus bimaculatus.

    PubMed

    Kanou, Masamichi; Matsuyama, Akane; Takuwa, Hiroyuki

    2014-09-01

We investigated the effects of visual information on wind-evoked escape behavior in the cricket, Gryllus bimaculatus. Most agitated crickets were found to retreat within a short time into a shelter made of cardboard installed in the test arena. As this behavior was thought to be a type of escape, we examined how a visual image of a shelter affected wind-evoked escape behavior. Irrespective of the brightness of the visual background (black or white) or the absence or presence of a shelter, escape jumps were oriented almost 180° away from the source of the air puff stimulus. The direction of wind-evoked escape therefore depended solely on the direction of the stimulus air puff. In contrast, the turning direction of the crickets during the escape was affected by the position of the visual image of the shelter. During the wind-evoked escape jump, most crickets turned in the direction in which a shelter was presented. This behavioral trait is presumably necessary for crickets to retreat into a shelter within a short time after their escape jump.

  7. Effects of spatial frequency and location of fearful faces on human amygdala activity.

    PubMed

    Morawetz, Carmen; Baudewig, Juergen; Treue, Stefan; Dechent, Peter

    2011-01-31

    Facial emotion perception plays a fundamental role in interpersonal social interactions. Images of faces contain visual information at various spatial frequencies. The amygdala has previously been reported to be preferentially responsive to low-spatial frequency (LSF) rather than to high-spatial frequency (HSF) filtered images of faces presented at the center of the visual field. Furthermore, it has been proposed that the amygdala might be especially sensitive to affective stimuli in the periphery. In the present study we investigated the impact of spatial frequency and stimulus eccentricity on face processing in the human amygdala and fusiform gyrus using functional magnetic resonance imaging (fMRI). The spatial frequencies of pictures of fearful faces were filtered to produce images that retained only LSF or HSF information. Facial images were presented either in the left or right visual field at two different eccentricities. In contrast to previous findings, we found that the amygdala responds to LSF and HSF stimuli in a similar manner regardless of the location of the affective stimuli in the visual field. Furthermore, the fusiform gyrus did not show differential responses to spatial frequency filtered images of faces. Our findings argue against the view that LSF information plays a crucial role in the processing of facial expressions in the amygdala and of a higher sensitivity to affective stimuli in the periphery. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. Attachment affects social information processing: Specific electrophysiological effects of maternal stimuli.

    PubMed

    Wu, Lili; Gu, Ruolei; Zhang, Jianxin

    2016-01-01

Attachment is critical to each individual. It affects the cognitive-affective processing of social information. The present study examines how attachment affects the processing of social information, specifically maternal information. We assessed the behavioral and electrophysiological responses to maternal information (compared to non-specific others) in a Go/No-go Association Task (GNAT) with 22 participants. The results illustrated that attachment affected maternal information processing during three sequential stages of information processing. First, attachment affected visual perception, reflected by enhanced P100 and N170 components elicited by maternal information as compared to information about others. Second, compared to others, the mother obtained more attentional resources, reflected by a faster behavioral response to maternal information and larger P200 and P300 components. Finally, the mother was evaluated positively, reflected by a shorter P300 latency in the mother + good condition as compared to the mother + bad condition. These findings indicate that the processing of attachment-relevant information is neurologically differentiated from that of other types of social information, from an early stage of perceptual processing to late high-level processing.

  9. Anxiety affects the amplitudes of red and green color-elicited flash visual evoked potentials in humans.

    PubMed

    Hosono, Yuki; Kitaoka, Kazuyoshi; Urushihara, Ryo; Séi, Hiroyoshi; Kinouchi, Yohsuke

    2014-01-01

It has been reported that negative emotional changes and conditions affect the visual faculties of humans at the neural level. On the other hand, the effects of emotion on color perception in particular, as assessed by evoked potentials, are unknown. In the present study, we investigated whether different anxiety levels affect color information processing for each of 3 wavelengths by using flash visual evoked potentials (FVEPs) and the State-Trait Anxiety Inventory. In the results, significant positive correlations were observed between FVEP amplitudes and state or trait anxiety scores for the long (sensed as red) and middle (sensed as green) wavelengths. On the other hand, short-wavelength-evoked FVEPs were not correlated with anxiety level. Our results suggest that negative emotional conditions may affect color sense processing in humans.

  10. Egocentric Direction and Position Perceptions are Dissociable Based on Only Static Lane Edge Information

    PubMed Central

    Nakashima, Ryoichi; Iwai, Ritsuko; Ueda, Sayako; Kumada, Takatsune

    2015-01-01

When observers perceive several objects in a space, they should, at the same time, effectively perceive their own position as a viewpoint. However, little is known about observers' percepts of their own spatial location based on the visual scene information viewed from that position. Previous studies indicate that two distinct visual spatial processes exist in locomotion situations: egocentric position perception and egocentric direction perception. Those studies examined such perceptions in information-rich visual environments where much dynamic and static visual information was available. This study examined these two perceptions in information-impoverished environments including only static lane edge information (i.e., limited information). We investigated the visual factors associated with static lane edge information that may affect these perceptions, and in particular examined the effects of two factors on egocentric direction and position perceptions. One is the "uprightness factor": "far" visual information is seen at a higher location than "near" visual information. The other is the "central vision factor": observers usually look at "far" visual information using central vision (i.e., foveal vision) whereas they view "near" visual information using peripheral vision. Experiment 1 examined the effect of the uprightness factor using normal and inverted road images. Experiment 2 examined the effect of the central vision factor using normal and transposed road images, in which the upper half of the normal image was presented under the lower half. Experiment 3 aimed to replicate the results of Experiments 1 and 2. Results showed that egocentric direction perception is interfered with by image inversion or image transposition, whereas egocentric position perception is robust against these image transformations. That is, both the uprightness and central vision factors are important for egocentric direction perception, but not for egocentric position perception. 
Therefore, the two visual spatial perceptions about observers’ own viewpoints are fundamentally dissociable. PMID:26648895

  11. Changes in Drivers’ Visual Performance during the Collision Avoidance Process as a Function of Different Field of Views at Intersections

    PubMed Central

    Yan, Xuedong; Zhang, Xinran; Zhang, Yuting; Li, Xiaomeng; Yang, Zhuo

    2016-01-01

    The intersection field of view (IFOV) indicates the extent to which visual information can be observed by drivers. It has been found that enhancing the IFOV can significantly improve emergency collision avoidance performance at intersections, yielding faster brake reaction times, smaller deceleration rates, and lower traffic crash involvement risk. However, it is not known how the IFOV affects drivers’ eye movements and visual attention, or how visual searching relates to traffic safety. In this study, a driving simulation experiment with an eye tracking system was conducted to uncover the changes in drivers’ visual performance during the collision avoidance process as a function of different fields of view at an intersection. The experimental results showed that drivers’ ability to identify the potential hazard through visual searching was significantly affected by the IFOV condition. As the IFOV increased, drivers had longer gaze durations (GD) and a greater number of gazes (NG) in the areas surrounding the intersection and paid more visual attention to capturing critical visual information about the emerging conflict vehicle, leading to better collision avoidance performance and lower crash risk. It was also found that female drivers had better visual performance and a lower crash rate than male drivers. From the perspective of drivers’ visual performance, the results strengthen the evidence that further increasing intersection sight distance standards should be encouraged to enhance traffic safety. PMID:27716824

  12. Semantic congruency and the (reversed) Colavita effect in children and adults.

    PubMed

    Wille, Claudia; Ebersbach, Mirjam

    2016-01-01

    When presented with auditory, visual, or bimodal audiovisual stimuli in a discrimination task, adults tend to ignore the auditory component in bimodal stimuli and respond to the visual component only (i.e., Colavita visual dominance effect). The same is true for older children, whereas young children are dominated by the auditory component of bimodal audiovisual stimuli. This suggests a change of sensory dominance during childhood. The aim of the current study was to investigate, in three experimental conditions, whether children and adults show sensory dominance when presented with complex semantic stimuli and whether this dominance can be modulated by stimulus characteristics such as semantic (in)congruency, frequency of bimodal trials, and color information. Semantic (in)congruency did not affect the magnitude of the auditory dominance effect in 6-year-olds or the visual dominance effect in adults, but it was a modulating factor of the visual dominance in 9-year-olds (Conditions 1 and 2). Furthermore, the absence of color information (Condition 3) did not affect auditory dominance in 6-year-olds and hardly affected visual dominance in adults, whereas the visual dominance in 9-year-olds disappeared. Our results suggest that (a) sensory dominance in children and adults is not restricted to simple lights and sounds, as used in previous research, but can be extended to semantically meaningful stimuli and that (b) sensory dominance is more robust in 6-year-olds and adults than in 9-year-olds, implying a transitional stage around this age. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Task-relevant perceptual features can define categories in visual memory too.

    PubMed

    Antonelli, Karla B; Williams, Carrick C

    2017-11-01

    Although Konkle, Brady, Alvarez, and Oliva (2010, Journal of Experimental Psychology: General, 139(3), 558) claim that visual long-term memory (VLTM) is organized on underlying conceptual, not perceptual, information, visual memory results from visual search tasks are not well explained by this theory. We hypothesized that when viewing an object, any task-relevant visual information is critical to the organizational structure of VLTM. In two experiments, we examined the organization of VLTM by measuring the amount of retroactive interference created by objects possessing different combinations of task-relevant features. Based on task instructions, only the conceptual category was task relevant or both the conceptual category and a perceptual object feature were task relevant. Findings indicated that when made task relevant, perceptual object feature information, along with conceptual category information, could affect memory organization for objects in VLTM. However, when perceptual object feature information was task irrelevant, it did not contribute to memory organization; instead, memory defaulted to being organized around conceptual category information. These findings support the theory that a task-defined organizational structure is created in VLTM based on the relevance of particular object features and information.

  14. Feature-based memory-driven attentional capture: visual working memory content affects visual attention.

    PubMed

    Olivers, Christian N L; Meijer, Frank; Theeuwes, Jan

    2006-10-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by an additional memory task. Singleton distractors interfered even more when they were identical or related to the object held in memory, but only when it was difficult to verbalize the memory content. Furthermore, this content-specific interaction occurred for features that were relevant to the memory task but not for irrelevant features of the same object or for once-remembered objects that could be forgotten. Finally, memory-related distractors attracted more eye movements but did not result in longer fixations. The results demonstrate memory-driven attentional capture on the basis of content-specific representations. Copyright 2006 APA.

  15. Early visual ERPs are influenced by individual emotional skills.

    PubMed

    Meaux, Emilie; Roux, Sylvie; Batty, Magali

    2014-08-01

    Processing information from faces is crucial to understanding others and to adapting to social life. Many studies have investigated responses to facial emotions to provide a better understanding of the processes and the neural networks involved. Moreover, several studies have revealed abnormalities of emotional face processing and their neural correlates in affective disorders. The aim of this study was to investigate whether early visual event-related potentials (ERPs) are affected by the emotional skills of healthy adults. Unfamiliar faces expressing the six basic emotions were presented to 28 young adults while recording visual ERPs. No specific task was required during the recording. Participants also completed the Social Skills Inventory (SSI) which measures social and emotional skills. The results confirmed that early visual ERPs (P1, N170) are affected by the emotions expressed by a face and also demonstrated that N170 and P2 are correlated to the emotional skills of healthy subjects. While N170 is sensitive to the subject's emotional sensitivity and expressivity, P2 is modulated by the ability of the subjects to control their emotions. We therefore suggest that N170 and P2 could be used as individual markers to assess strengths and weaknesses in emotional areas and could provide information for further investigations of affective disorders. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  16. Prefrontal contributions to visual selective attention.

    PubMed

    Squire, Ryan F; Noudoost, Behrad; Schafer, Robert J; Moore, Tirin

    2013-07-08

    The faculty of attention endows us with the capacity to process important sensory information selectively while disregarding information that is potentially distracting. Much of our understanding of the neural circuitry underlying this fundamental cognitive function comes from neurophysiological studies within the visual modality. Past evidence suggests that a principal function of the prefrontal cortex (PFC) is selective attention and that this function involves the modulation of sensory signals within posterior cortices. In this review, we discuss recent progress in identifying the specific prefrontal circuits controlling visual attention and its neural correlates within the primate visual system. In addition, we examine the persisting challenge of precisely defining how behavior should be affected when attentional function is lost.

  17. Speakers of Different Languages Process the Visual World Differently

    PubMed Central

    Chabal, Sarah; Marian, Viorica

    2015-01-01

    Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct linguistic input, showing that language is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. PMID:26030171

  18. Association of impaired facial affect recognition with basic facial and visual processing deficits in schizophrenia.

    PubMed

    Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue

    2009-06-15

    Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.

  19. Influence of audio triggered emotional attention on video perception

    NASA Astrophysics Data System (ADS)

    Torres, Freddy; Kalva, Hari

    2014-02-01

    Perceptual video coding methods attempt to improve compression efficiency by discarding visual information not perceived by end users. Most current approaches to perceptual video coding use only visual features, ignoring the auditory component. Many psychophysical studies have demonstrated that auditory stimuli affect our visual perception. In this paper we present our study of audio-triggered emotional attention and its applicability to perceptual video coding. Experiments with movie clips show that the reaction time to detect video compression artifacts was longer when the video was presented with the audio information. The results reported are statistically significant, with p=0.024.
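    The reported significance rests on a two-sample comparison of reaction times between the audio and video-only conditions. As a minimal sketch of that kind of comparison, the snippet below computes a Welch two-sample t statistic on invented reaction-time data; the study's actual samples and test procedure are not reproduced here.

```python
"""Welch's t statistic for comparing two groups with possibly
unequal variances. All data values below are hypothetical."""

def mean_var(xs):
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / (n - 1)  # sample variance
    return m, v, n

def welch_t(a, b):
    """Welch's t statistic; larger |t| means a bigger standardized
    difference between the two group means."""
    m1, v1, n1 = mean_var(a)
    m2, v2, n2 = mean_var(b)
    return (m1 - m2) / (v1 / n1 + v2 / n2) ** 0.5

# Hypothetical reaction times (ms) to spot compression artifacts
with_audio = [612, 590, 655, 630, 601, 644]
video_only = [548, 571, 530, 559, 542, 566]
t = welch_t(with_audio, video_only)  # positive t: audio slows detection
```

    A p-value would then be obtained from the t distribution with the Welch-Satterthwaite degrees of freedom; in practice a library routine such as a two-sample t-test with unequal variances does both steps.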

  20. Encoding Modality Can Affect Memory Accuracy via Retrieval Orientation

    ERIC Educational Resources Information Center

    Pierce, Benton H.; Gallo, David A.

    2011-01-01

    Research indicates that false memory is lower following visual than auditory study, potentially because visual information is more distinctive. In the present study we tested the extent to which retrieval orientation can cause a modality effect on memory accuracy. Participants studied unrelated words in different modalities, followed by criterial…

  1. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. 
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390

  2. Incidental learning of probability information is differentially affected by the type of visual working memory representation.

    PubMed

    van Lamsweerde, Amanda E; Beck, Melissa R

    2015-12-01

    In this study, we investigated whether the ability to learn probability information is affected by the type of representation held in visual working memory. Across 4 experiments, participants detected changes to displays of coloured shapes. While participants detected changes in 1 dimension (e.g., colour), a feature from a second, nonchanging dimension (e.g., shape) predicted which object was most likely to change. In Experiments 1 and 3, items could be grouped by similarity in the changing dimension across items (e.g., colours and shapes were repeated in the display), while in Experiments 2 and 4 items could not be grouped by similarity (all features were unique). Probability information from the predictive dimension was learned and used to increase performance, but only when all of the features within a display were unique (Experiments 2 and 4). When it was possible to group by feature similarity in the changing dimension (e.g., 2 blue objects appeared within an array), participants were unable to learn probability information and use it to improve performance (Experiments 1 and 3). The results suggest that probability information can be learned in a dimension that is not explicitly task-relevant, but only when the probability information is represented with the changing dimension in visual working memory. (c) 2015 APA, all rights reserved.

  3. How Therapists Use Visualizations of Upper Limb Movement Information From Stroke Patients: A Qualitative Study With Simulated Information

    PubMed Central

    Fong, Justin; Klaic, Marlena; Nair, Siddharth; Vetere, Frank; Cofré Lizama, L. Eduardo; Galea, Mary Pauline

    2016-01-01

    Background Stroke is a leading cause of disability worldwide, with upper limb deficits affecting an estimated 30% to 60% of survivors. The effectiveness of upper limb rehabilitation relies on numerous factors, particularly patient compliance to home programs and exercises set by therapists. However, therapists lack objective information about their patients’ adherence to rehabilitation exercises as well as other uses of the affected arm and hand in everyday life outside the clinic. We developed a system that consists of wearable sensor technology to monitor a patient’s arm movement and a Web-based dashboard to visualize this information for therapists. Objective The aim of our study was to evaluate how therapists use upper limb movement information visualized on a dashboard to support the rehabilitation process. Methods An interactive dashboard prototype with simulated movement information was created and evaluated through a user-centered design process with therapists (N=8) at a rehabilitation clinic. Data were collected through observations of therapists interacting with an interactive dashboard prototype, think-aloud data, and interviews. Data were analyzed qualitatively through thematic analysis. Results Therapists use visualizations of upper limb information in the following ways: (1) to obtain objective data of patients’ activity levels, exercise, and neglect outside the clinic, (2) to engage patients in the rehabilitation process through education, motivation, and discussion of experiences with activities of daily living, and (3) to engage with other clinicians and researchers based on objective data. A major limitation is the lack of contextual data, which is needed by therapists to discern how movement data visualized on the dashboard relate to activities of daily living. Conclusions Upper limb information captured through wearable devices provides novel insights for therapists and helps to engage patients and other clinicians in therapy. 
Consideration needs to be given to the collection and visualization of contextual information to provide meaningful insights into patient engagement in activities of daily living. These findings open the door for further work to develop a fully functioning system and to trial it with patients and clinicians during therapy. PMID:28582257

  4. Language-guided visual processing affects reasoning: the role of referential and spatial anchoring.

    PubMed

    Dumitru, Magda L; Joergensen, Gitte H; Cruickshank, Alice G; Altmann, Gerry T M

    2013-06-01

    Language is more than a source of information for accessing higher-order conceptual knowledge. Indeed, language may determine how people perceive and interpret visual stimuli. Visual processing in linguistic contexts, for instance, mirrors language processing and happens incrementally, rather than through variously-oriented fixations over a particular scene. The consequences of this atypical visual processing are yet to be determined. Here, we investigated the integration of visual and linguistic input during a reasoning task. Participants listened to sentences containing conjunctions or disjunctions (Nancy examined an ant and/or a cloud) and looked at visual scenes containing two pictures that either matched or mismatched the nouns. Degree of match between nouns and pictures (referential anchoring) and between their expected and actual spatial positions (spatial anchoring) affected fixations as well as judgments. We conclude that language induces incremental processing of visual scenes, which in turn becomes susceptible to reasoning errors during the language-meaning verification process. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Space-based bias of covert visual attention in complex regional pain syndrome.

    PubMed

    Bultitude, Janet H; Walker, Ian; Spence, Charles

    2017-09-01

    See Legrain (doi:10.1093/awx188) for a scientific commentary on this article. Some patients with complex regional pain syndrome report that movements of the affected limb are slow, more effortful, and lack automaticity. These symptoms have been likened to the syndrome that sometimes follows brain injury called hemispatial neglect, in which patients exhibit attentional impairments and problems with movements affecting the contralesional side of the body and space. Psychophysical testing of patients with complex regional pain syndrome has found evidence for spatial biases when judging visual targets distanced at 2 m, but not in directions that indicate reduced attention to the affected side. In contrast, when judging visual or tactile stimuli presented on their own body surface, or pictures of hands and feet within arm's reach, patients with complex regional pain syndrome exhibited a bias away from the affected side. What is not yet known is whether patients with complex regional pain syndrome only have biased attention for bodily-specific information in the space within arm's reach, or whether they also show a bias for information that is not associated with the body, suggesting a more generalized attention deficit. Using a temporal order judgement task, we found that patients with complex regional pain syndrome processed visual stimuli more slowly on the affected side (relative to the unaffected side) when the lights were projected onto a blank surface (i.e. when no bodily information was visible), and when the lights were projected onto the dorsal surfaces of their uncrossed hands. However, with the arms crossed (such that the left and right lights projected onto the right and left hands, respectively), patients' responses were no different than controls. 
These results provide the first demonstration of a generalized attention bias away from the affected side of space in complex regional pain syndrome patients that is not specifically related to bodily information. They also suggest a separate and additional bias of visual attention away from the affected hand. The strength of attention bias was predicted by scores on a self-report measure of body perception distortion; but not by pain intensity, time since diagnosis, or affected body side (left or right). At an individual level, those patients whose upper limbs were most affected had a higher incidence of inattention than those whose lower limbs were most affected. However, at a group level, affected limb (upper or lower) did not predict bias magnitude; nor did three measures designed to assess possible asymmetries in the distribution of movements across space. It is concluded that inattention in near space in complex regional pain syndrome may arise in parallel with a distorted perception of the body. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain.

  6. Art expertise modulates the emotional response to modern art, especially abstract: an ERP investigation

    PubMed Central

    Else, Jane E.; Ellis, Jason; Orme, Elizabeth

    2015-01-01

    Art is one of life’s great joys, whether it is beautiful, ugly, sublime or shocking. Aesthetic responses to visual art involve sensory, cognitive and visceral processes. Neuroimaging studies have yielded a wealth of information regarding aesthetic appreciation and beauty using visual art as stimuli, but few have considered the effect of expertise on visual and visceral responses. To study the time course of visual, cognitive and emotional processes in response to visual art, we investigated the event-related potentials (ERPs) elicited whilst viewing and rating the visceral affect of three categories of visual art. Two groups, artists and non-artists, viewed representational, abstract and indeterminate 20th century art. Early components, particularly the N1, related to attention and effort, and the P2, linked to higher-order visual processing, were enhanced for artists compared to non-artists. This effect was present for all types of art, but further enhanced for abstract art (AA), which was rated as having the lowest visceral affect by the non-artists. The later, slow wave processes (500–1000 ms), associated with arousal and sustained attention, also showed clear differences between the two groups in response to both the type of art and its visceral affect. AA increased arousal and sustained attention in artists, whilst it decreased them in non-artists. These results suggest that aesthetic response to visual art is affected by both expertise and semantic content. PMID:27242497

  7. Association of Chronic Subjective Tinnitus with Neuro- Cognitive Performance.

    PubMed

    Gudwani, Sunita; Munjal, Sanjay K; Panda, Naresh K; Kohli, Adarsh

    2017-12-01

    Chronic subjective tinnitus is associated with cognitive disruptions affecting perception, thinking, language, reasoning, problem solving, memory, visual tasks (reading), and attention. The aim was to evaluate whether any association exists between tinnitus parameters and neuropsychological performance, in order to explain cognitive processing. The study design was prospective, comprising 25 patients with idiopathic chronic subjective tinnitus who gave informed consent before their treatment was planned. The neuropsychological profile included (i) performance on verbal information, comprehension, arithmetic, and digit span; (ii) non-verbal performance on visual pattern completion analogies; (iii) memory performance for long-term, recent, delayed-recall, immediate-recall, verbal-retention, visual-retention, and visual recognition; and (iv) reception, interpretation, and execution on the visual motor gestalt test. Correlations between tinnitus onset duration / loudness perception and the neuropsychological profile were assessed by calculating Spearman's coefficient. Findings suggest that tinnitus may interfere with cognitive processing, especially performance on the digit span, verbal comprehension, mental balance, attention and concentration, immediate recall, visual recognition, and visual-motor gestalt subtests. Negative correlations of neurocognitive task performance with tinnitus loudness and onset duration indicated their association. A positive correlation between tinnitus and visual-motor gestalt performance indicated brain dysfunction. The association of tinnitus with non-auditory processing of verbal, visual, and visuo-spatial information suggested neuroplastic changes that need to be targeted in cognitive rehabilitation.
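    The association analysis above relies on Spearman's rank correlation. As a minimal, self-contained sketch of that statistic, the snippet below ranks each variable (averaging tied ranks) and takes the Pearson correlation of the rank vectors; the variable names and sample values are hypothetical, not taken from the study.

```python
"""Spearman's rank correlation coefficient from scratch."""

def ranks(values):
    """Assign 1-based ranks, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Pearson correlation computed on the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical example: louder tinnitus, lower digit-span score
loudness = [2, 5, 7, 4, 9, 1]
digit_span = [8, 6, 4, 7, 3, 9]
rho = spearman_rho(loudness, digit_span)  # perfectly inverse ranking
```

    A rho near -1, as in this toy example, is the pattern the abstract describes: higher tinnitus loudness going with lower neurocognitive scores.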

  8. Perceptual factors that influence use of computer enhanced visual displays

    NASA Technical Reports Server (NTRS)

    Littman, David; Boehm-Davis, Debbie

    1993-01-01

    This document is the final report for the NASA/Langley contract entitled 'Perceptual Factors that Influence Use of Computer Enhanced Visual Displays.' The document consists of two parts. The first part contains a discussion of the problem to which the grant was addressed, a brief discussion of work performed under the grant, and several issues suggested for follow-on work. The second part, presented as Appendix I, contains the annual report produced by Dr. Ann Fulop, the Postdoctoral Research Associate who worked on-site in this project. The main focus of this project was to investigate perceptual factors that might affect a pilot's ability to use computer generated information that is projected into the same visual space that contains information about real world objects. For example, computer generated visual information can identify the type of an attacking aircraft, or its likely trajectory. Such computer generated information must not be so bright that it adversely affects a pilot's ability to perceive other potential threats in the same volume of space. Or, perceptual attributes of computer generated and real display components should not contradict each other in ways that lead to problems of accommodation and, thus, distance judgments. The purpose of the research carried out under this contract was to begin to explore the perceptual factors that contribute to effective use of these displays.

  9. Driver memory retention of in-vehicle information system messages

    DOT National Transportation Integrated Search

    1997-01-01

    Memory retention of drivers was tested for traffic- and traveler-related messages displayed on an in-vehicle information system (IVIS). Three research questions were asked: (a) How does in-vehicle visual message format affect comprehension? (b) How d...

  10. Encoding color information for visual tracking: Algorithms and benchmark.

    PubMed

    Liang, Pengpeng; Blasch, Erik; Ling, Haibin

    2015-12-01

    While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.
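    As an illustration of what "encoding color" for a grayscale tracker can mean in practice, the sketch below re-encodes pixels in a chromatic model (HSV) and summarizes a candidate region as a quantized hue histogram compared by histogram intersection. This is a generic, hypothetical feature extractor; the paper's 10 color models and 16 trackers are not reproduced here.

```python
"""Toy color feature for tracking: quantized hue histogram."""
import colorsys

def hsv_histogram(pixels, bins=8):
    """Hue histogram of an iterable of (r, g, b) pixels with channels
    in [0, 255]; returns a normalized list of length `bins`."""
    hist = [0] * bins
    n = 0
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hist[min(int(h * bins), bins - 1)] += 1
        n += 1
    return [c / n for c in hist] if n else hist

def histogram_similarity(h1, h2):
    """Histogram intersection: 1.0 for identical normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Toy candidate regions: a red patch vs. a blue patch
red_patch = [(250, 10, 10)] * 16
blue_patch = [(10, 10, 250)] * 16
target = hsv_histogram(red_patch)
```

    A tracker would score each candidate window by its similarity to the target histogram; swapping the color model (RGB, HSV, Lab, and so on) changes only the per-pixel encoding step.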

  11. Prediction and constraint in audiovisual speech perception.

    PubMed

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. 
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Learning and Prediction of Slip from Visual Information

    NASA Technical Reports Server (NTRS)

    Angelova, Anelia; Matthies, Larry; Helmick, Daniel; Perona, Pietro

    2007-01-01

    This paper presents an approach for predicting slip from a distance for wheeled ground robots, using visual information as input. Large amounts of slippage, which can occur on certain surfaces such as sandy slopes, negatively affect rover mobility. Therefore, obtaining information about slip before entering such terrain can be very useful for better planning and for avoiding these areas. To address this problem, terrain appearance and geometry information about map cells are correlated with the slip measured by the rover while traversing each cell. This relationship is learned from previous experience, so slip can be predicted remotely from visual information alone. The proposed method consists of terrain type recognition and nonlinear regression modeling. The method has been implemented and tested offline on several off-road terrains including soil, sand, gravel, and woodchips. The final slip prediction error is about 20%. The system is intended for improved navigation on steep slopes and rough terrain for Mars rovers.
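The two-stage pipeline the abstract describes (terrain type recognition followed by per-terrain nonlinear regression from geometry to slip) can be sketched as follows. The terrain types, training samples, and quadratic regression form below are illustrative assumptions, not the authors' actual features or model:

```python
import numpy as np

# Hypothetical training data per terrain type: rows of (slope_deg, measured_slip_fraction).
# In the paper, terrain type would come from appearance-based recognition,
# and slope from geometry information about each map cell.
train = {
    "sand":   np.array([[5, 0.15], [10, 0.35], [15, 0.60], [20, 0.85]]),
    "gravel": np.array([[5, 0.05], [10, 0.12], [15, 0.22], [20, 0.35]]),
}

# Stage 2: fit a nonlinear (quadratic) slip-vs-slope model for each terrain type.
models = {t: np.polyfit(d[:, 0], d[:, 1], 2) for t, d in train.items()}

def predict_slip(terrain: str, slope_deg: float) -> float:
    """Predict slip fraction for a map cell from its recognised terrain and slope."""
    return float(np.polyval(models[terrain], slope_deg))

print(predict_slip("sand", 12.0), predict_slip("gravel", 12.0))
```

The rover can then route around cells whose predicted slip exceeds a mobility threshold before ever entering them.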

  13. Visual ergonomics in the workplace.

    PubMed

    Anshel, Jeffrey R

    2007-10-01

    This article provides information about visual function and its role in workplace productivity. By understanding the connection among comfort, health, and productivity, and knowing the many options for effective ergonomic workplace lighting, the occupational health nurse can be sensitive to potential visual stress that can affect all areas of performance. Computer vision syndrome (the eye and vision problems associated with near work experienced during or related to computer use) is defined and solutions to it are discussed.

  14. Body sway reflects leadership in joint music performance.

    PubMed

    Chang, Andrew; Livingstone, Steven R; Bosnyak, Dan J; Trainor, Laurel J

    2017-05-23

    The cultural and technological achievements of the human species depend on complex social interactions. Nonverbal interpersonal coordination, or joint action, is a crucial element of social interaction, but the dynamics of nonverbal information flow among people are not well understood. We used joint music making in string quartets, a complex, naturalistic nonverbal behavior, as a model system. Using motion capture, we recorded body sway simultaneously in four musicians, which reflected real-time interpersonal information sharing. We used Granger causality to analyze predictive relationships among the motion time series of the players to determine the magnitude and direction of information flow among the players. We experimentally manipulated which musician was the leader (followers were not informed who was leading) and whether they could see each other, to investigate how these variables affect information flow. We found that assigned leaders exerted significantly greater influence on others and were less influenced by others compared with followers. This effect was present, whether or not they could see each other, but was enhanced with visual information, indicating that visual as well as auditory information is used in musical coordination. Importantly, performers' ratings of the "goodness" of their performances were positively correlated with the overall degree of body sway coupling, indicating that communication through body sway reflects perceived performance success. These results confirm that information sharing in a nonverbal joint action task occurs through both auditory and visual cues and that the dynamics of information flow are affected by changing group relationships.
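A minimal two-player version of the Granger-causality analysis described above can be sketched as follows; synthetic body-sway series stand in for the motion-capture data, and the single-lag, variance-reduction measure of influence is a simplification of the full analysis:

```python
import numpy as np

def granger_influence(x, y, lag=1):
    """Variance reduction in predicting y when lagged x is added to y's own
    history: a simple one-lag Granger-style measure of directed influence."""
    y_t, y_past, x_past = y[lag:], y[:-lag], x[:-lag]
    # Restricted model: y_t predicted from y's own past only.
    A = np.column_stack([y_past, np.ones_like(y_past)])
    res_r = y_t - A @ np.linalg.lstsq(A, y_t, rcond=None)[0]
    # Full model: y's past plus x's past.
    B = np.column_stack([y_past, x_past, np.ones_like(y_past)])
    res_f = y_t - B @ np.linalg.lstsq(B, y_t, rcond=None)[0]
    return 1.0 - res_f.var() / res_r.var()  # > 0: x helps predict y

rng = np.random.default_rng(0)
leader = rng.standard_normal(500)            # assigned leader's sway (synthetic)
follower = np.zeros(500)
for t in range(1, 500):                      # follower tracks the leader's sway
    follower[t] = 0.8 * leader[t - 1] + 0.1 * rng.standard_normal()

print(granger_influence(leader, follower), granger_influence(follower, leader))
```

With this construction the leader-to-follower influence is large while the reverse is near zero, mirroring the reported asymmetry between assigned leaders and followers.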

  15. Body sway reflects leadership in joint music performance

    PubMed Central

    Livingstone, Steven R.; Bosnyak, Dan J.; Trainor, Laurel J.

    2017-01-01

    The cultural and technological achievements of the human species depend on complex social interactions. Nonverbal interpersonal coordination, or joint action, is a crucial element of social interaction, but the dynamics of nonverbal information flow among people are not well understood. We used joint music making in string quartets, a complex, naturalistic nonverbal behavior, as a model system. Using motion capture, we recorded body sway simultaneously in four musicians, which reflected real-time interpersonal information sharing. We used Granger causality to analyze predictive relationships among the motion time series of the players to determine the magnitude and direction of information flow among the players. We experimentally manipulated which musician was the leader (followers were not informed who was leading) and whether they could see each other, to investigate how these variables affect information flow. We found that assigned leaders exerted significantly greater influence on others and were less influenced by others compared with followers. This effect was present, whether or not they could see each other, but was enhanced with visual information, indicating that visual as well as auditory information is used in musical coordination. Importantly, performers’ ratings of the “goodness” of their performances were positively correlated with the overall degree of body sway coupling, indicating that communication through body sway reflects perceived performance success. These results confirm that information sharing in a nonverbal joint action task occurs through both auditory and visual cues and that the dynamics of information flow are affected by changing group relationships. PMID:28484007

  16. Task relevance of emotional information affects anxiety-linked attention bias in visual search.

    PubMed

    Dodd, Helen F; Vogt, Julia; Turkileri, Nilgun; Notebaert, Lies

    2017-01-01

    Task relevance affects emotional attention in healthy individuals. Here, we investigate whether the association between anxiety and attention bias is affected by the task relevance of emotion during an attention task. Participants completed two visual search tasks. In the emotion-irrelevant task, participants were asked to indicate whether a discrepant face in a crowd of neutral, middle-aged faces was old or young. Irrelevant to the task, target faces displayed angry, happy, or neutral expressions. In the emotion-relevant task, participants were asked to indicate whether a discrepant face in a crowd of middle-aged neutral faces was happy or angry (target faces also varied in age). Trait anxiety was not associated with attention in the emotion-relevant task. However, in the emotion-irrelevant task, trait anxiety was associated with a bias for angry over happy faces. These findings demonstrate that the task relevance of emotional information affects conclusions about the presence of an anxiety-linked attention bias. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Visual representation of medical information: the importance of considering the end-user in the design of medical illustrations.

    PubMed

    Scheltema, Emma; Reay, Stephen; Piper, Greg

    2018-01-01

    This practice-led research project explored visual representation through illustrations designed to communicate often complex medical information for different users within Auckland City Hospital, New Zealand. Media and tools were manipulated to achieve varying degrees of naturalism or abstraction from reality in the creation of illustrations for a variety of real-life clinical projects, and user feedback on illustration preference was gathered from both medical professionals and patients. While all users preferred the most realistic representations of medical information among the illustrations presented, patients often favoured illustrations that depicted a greater amount of information than professionals suggested was necessary.

  18. Bio-inspired display of polarization information using selected visual cues

    NASA Astrophysics Data System (ADS)

    Yemelyanov, Konstantin M.; Lin, Shih-Schon; Luis, William Q.; Pugh, Edward N., Jr.; Engheta, Nader

    2003-12-01

    For imaging systems, the polarization of electromagnetic waves carries much potentially useful information about features of the world such as surface shape, material content, and the local curvature of objects, as well as about the relative locations of the source, object, and imaging system. The imaging system of the human eye, however, is "polarization-blind" and cannot utilize the polarization of light without the aid of an artificial, polarization-sensitive instrument. Therefore, polarization information captured by a man-made polarimetric imaging system must be displayed to a human observer in the form of visual cues that are naturally processed by the human visual system, while essentially preserving the other important non-polarization information (such as spectral and intensity information) in an image. In other words, some form of sensory substitution is needed to represent polarization "signals" without affecting other visual information such as color and brightness. We are investigating several bio-inspired representational methodologies for mapping polarization information into visual cues readily perceived by the human visual system, and determining which mappings are most suitable for specific applications such as object detection, navigation, sensing, scene classification, and surface deformation. The visual cues and strategies we are exploring are the use of coherently moving dots superimposed on an image to represent various ranges of polarization signals, overlaying textures with spatial and/or temporal signatures to segregate regions of an image with differing polarization, modulating the luminance and/or color contrast of scenes according to certain aspects of polarization values, and fusing polarization images into intensity-only images. In this talk, we will present samples of our findings in this area.

  19. Cognitive load reducing in destination decision system

    NASA Astrophysics Data System (ADS)

    Wu, Chunhua; Wang, Cong; Jiang, Qien; Wang, Jian; Chen, Hong

    2007-12-01

    A person's limited cognitive resources bound the quantity of information that can be processed; when that limit is exceeded, the whole cognitive process, and with it the final decision, suffers. This research explores two ways of reducing cognitive load: cutting down the number of alternatives, and directing the user's limited attention resources based on selective visual attention theory. Decision-making is such a complex process that people usually have difficulty expressing their requirements completely. This paper puts forward an effective method for eliciting a user's hidden requirements; with more requirements captured, the destination decision system can filter out a larger number of inappropriate alternatives. Different pieces of information have different utility, and if high-utility information attracts attention easily, the decision can be made more easily. After analyzing current selective visual attention theory, the paper also proposes a new presentation style based on the user's visual attention, which arranges information according to the movement of the line of sight. Through visual attention, users can concentrate their limited attention resources on the important information. Eliciting hidden requirements and presenting information according to selective visual attention are effective ways of reducing cognitive load.
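The first load-reducing strategy above, pruning alternatives against elicited requirements, amounts to a simple filter over the choice set. A toy sketch (the destination attributes and requirements are invented for illustration, not taken from the paper's system):

```python
# Hypothetical destination alternatives with a few attributes each.
destinations = [
    {"name": "Beach A", "budget": 300, "climate": "warm", "family_friendly": True},
    {"name": "City B",  "budget": 900, "climate": "cold", "family_friendly": False},
    {"name": "Lake C",  "budget": 450, "climate": "warm", "family_friendly": True},
]

# Explicit plus elicited "hidden" requirements narrow the choice set,
# reducing the number of alternatives the user must attend to.
requirements = {"climate": "warm", "family_friendly": True}
max_budget = 500

shortlist = [d["name"] for d in destinations
             if d["budget"] <= max_budget
             and all(d[k] == v for k, v in requirements.items())]
print(shortlist)
```

Each additional requirement captured can only shrink (never grow) the shortlist, which is exactly why eliciting hidden requirements reduces the cognitive load of the final decision.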

  20. Deficient multisensory integration in schizophrenia: an event-related potential study.

    PubMed

    Stekelenburg, Jeroen J; Maes, Jan Pieter; Van Gool, Arthur R; Sitskoorn, Margriet; Vroomen, Jean

    2013-07-01

    In many natural audiovisual events (e.g., the sight of a face articulating the syllable /ba/), the visual signal precedes the sound and thus allows observers to predict the onset and the content of the sound. In healthy adults, the N1 component of the event-related brain potential (ERP), reflecting neural activity associated with basic sound processing, is suppressed if a sound is accompanied by a video that reliably predicts sound onset. If the sound does not match the content of the video (e.g., hearing /ba/ while lipreading /fu/), the later occurring P2 component is affected. Here, we examined whether these visual information sources affect auditory processing in patients with schizophrenia. The electroencephalography (EEG) was recorded in 18 patients with schizophrenia and compared with that of 18 healthy volunteers. As stimuli we used video recordings of natural actions in which visual information preceded and predicted the onset of the sound that was either congruent or incongruent with the video. For the healthy control group, visual information reduced the auditory-evoked N1 if compared to a sound-only condition, and stimulus-congruency affected the P2. This reduction in N1 was absent in patients with schizophrenia, and the congruency effect on the P2 was diminished. Distributed source estimations revealed deficits in the network subserving audiovisual integration in patients with schizophrenia. The results show a deficit in multisensory processing in patients with schizophrenia and suggest that multisensory integration dysfunction may be an important and, to date, under-researched aspect of schizophrenia. Copyright © 2013. Published by Elsevier B.V.

  1. Measuring Search Efficiency in Complex Visual Search Tasks: Global and Local Clutter

    ERIC Educational Resources Information Center

    Beck, Melissa R.; Lohrenz, Maura C.; Trafton, J. Gregory

    2010-01-01

    Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e.,…

  2. Thinking about the Weather: How Display Salience and Knowledge Affect Performance in a Graphic Inference Task

    ERIC Educational Resources Information Center

    Hegarty, Mary; Canham, Matt S.; Fabrikant, Sara I.

    2010-01-01

    Three experiments examined how bottom-up and top-down processes interact when people view and make inferences from complex visual displays (weather maps). Bottom-up effects of display design were investigated by manipulating the relative visual salience of task-relevant and task-irrelevant information across different maps. Top-down effects of…

  3. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing

    PubMed Central

    Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias

    2018-01-01

    Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention, with facilitation observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., a unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants' accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries, with search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition; however, participants' performance in the congruent condition was modulated by their tone localisation accuracy. 
The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637

  4. Neuropsychological findings associated with Panayiotopoulos syndrome in three children.

    PubMed

    Hodges, Samantha L; Gabriel, Marsha T; Perry, M Scott

    2016-01-01

    Panayiotopoulos syndrome is a common idiopathic benign epilepsy with a peak age of onset in early childhood. The syndrome is multifocal and shows significant electroencephalogram (EEG) variability, with occipital predominance. Although a benign syndrome often implies the absence of neurological and neuropsychological deficits, this syndrome has recently been associated with cognitive impairments. Also, despite frequent occipital EEG abnormalities, research regarding the visual functioning of these patients is sparse and often contradictory. The purpose of this study was to gain additional knowledge regarding the neurocognitive functioning of patients with Panayiotopoulos syndrome and, specifically, to address any visual processing deficits associated with the syndrome. Following diagnosis of the syndrome based on typical clinical and electrophysiological criteria, three patients, aged 5, 8, and 10 years, were referred by epileptologists for neuropsychological evaluation. Neuropsychological findings suggest that the patients had notable impairments on visual memory tasks, especially in comparison with verbal memory. Further, they demonstrated increased difficulty on picture memory, suggesting difficulty retaining information from a crowded visual field. Two of the three patients showed weakness in visual processing speed, which may account for weaker retention of complex visual stimuli. Abilities involving attention were normal for all patients, suggesting that inattention is not responsible for these visual deficits. Academically, the patients were weak in numerical operations and spelling, which both rely partially on visual memory and may affect achievement in these areas. Overall, the results suggest that patients with Panayiotopoulos syndrome may have visual processing and visual memory problems that could potentially affect their academic capabilities. 
Identifying such difficulties may be helpful in creating educational and remedial assistance programs for children with this syndrome, as well as developing appropriate presentation of information to these children in school. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Emotion and working memory: evidence for domain-specific processes for affective maintenance.

    PubMed

    Mikels, Joseph A; Reuter-Lorenz, Patricia A; Beyer, Jonathan A; Fredrickson, Barbara L

    2008-04-01

    Working memory comprises separable subsystems for visual and verbal information, but what if the information is affective? Does the maintenance of affective information rely on the same processes that maintain nonaffective information? The authors address this question using a novel delayed-response task developed to investigate the short-term maintenance of affective memoranda. Using selective interference methods, the authors find that a secondary emotion-regulation task impaired affect intensity maintenance, whereas secondary cognitive tasks disrupted brightness intensity maintenance but facilitated affect maintenance. Additionally, performance on the affect maintenance task depends on the valence of the maintained feeling, further supporting the domain-specific nature of the task. The importance of affect maintenance per se is further supported by demonstrating that the observed valence effects depend on a memory delay and are not evident with simultaneous presentation of stimuli. These findings suggest that the working memory system may include domain-specific components that are specialized for the maintenance of affective memoranda. Copyright © 2008 APA.

  6. A neural mechanism of dynamic gating of task-relevant information by top-down influence in primary visual cortex.

    PubMed

    Kamiyama, Akikazu; Fujita, Kazuhisa; Kashimori, Yoshiki

    2016-12-01

    Visual recognition involves bidirectional information flow, consisting of bottom-up information coding from the retina and top-down information coding from higher visual areas. Recent studies have demonstrated the involvement of early visual areas such as the primary visual area (V1) in recognition and memory formation. V1 neurons are not passive transformers of sensory inputs but work as adaptive processors, changing their function according to behavioral context. Top-down signals affect the tuning properties of V1 neurons and contribute to the gating of sensory information relevant to behavior. However, little is known about the neuronal mechanism underlying the gating of task-relevant information in V1. To address this issue, we focus on task-dependent tuning modulations of V1 neurons in two perceptual learning tasks. We develop a model of V1 that receives feedforward input from the lateral geniculate nucleus and top-down input from a higher visual area. We show here that a change in the balance between excitation and inhibition in V1 connectivity is necessary for gating task-relevant information in V1. The balance change accounts well for the modulations of the tuning characteristics and temporal properties of V1 neuronal responses. We also show that the balance change in V1 connectivity is shaped by top-down signals with temporal correlations reflecting the perceptual strategies of the two tasks. We propose a learning mechanism by which the synaptic balance is modulated. To conclude, top-down signals change the synaptic balance between excitation and inhibition in V1 connectivity, enabling an early visual area such as V1 to gate context-dependent information across multiple task performances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
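The core idea, that shifting the excitation/inhibition balance reshapes a V1 tuning curve, can be illustrated with a toy rate model; the Gaussian feedforward drive, the broad subtractive inhibition, and the specific weights are illustrative assumptions, not the authors' model equations:

```python
import numpy as np

# Orientation axis (deg) and Gaussian feedforward (LGN-like) drive.
theta = np.linspace(-90, 90, 181)
ff = np.exp(-theta**2 / (2 * 25.0**2))

def response(exc_w, inh_w):
    """Rectified rate: tuned excitation minus broad (untuned) inhibition.
    Top-down signals are assumed to set these weights, i.e., the E/I balance."""
    return np.maximum(exc_w * ff - inh_w * ff.mean(), 0.0)

broad = response(exc_w=1.0, inh_w=0.5)   # weak inhibition: broad tuning
sharp = response(exc_w=1.0, inh_w=2.0)   # strong inhibition: sharper tuning

def half_width(r):
    """Number of 1-deg bins where the response exceeds half its peak."""
    return int((r > r.max() / 2).sum())

print(half_width(broad), half_width(sharp))
```

Strengthening the untuned inhibition narrows the tuning curve (and lowers its peak), which is the sense in which a top-down change in the E/I balance can gate which stimulus information V1 passes on.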

  7. On the selection and evaluation of visual display symbology: Factors influencing search and identification times

    NASA Technical Reports Server (NTRS)

    Remington, Roger; Williams, Douglas

    1986-01-01

    Three single-target visual search tasks were used to evaluate a set of cathode-ray tube (CRT) symbols for a helicopter situation display. The search tasks were representative of the information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. Familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols, such as a nearby flashing dot or a surrounding square, had a greater disruptive effect on the graphic symbols than on the numeric characters. The results suggest that a symbol set is, in some respects, like a list that must be learned. Factors that affect the time to identify items in a memory task, such as familiarity and visual discriminability, also affect the time to identify symbols. This analogy has broad implications for the design of symbol sets. An attempt was made to model information access with this class of display.

  8. The effect of beat frequency on eye movements during free viewing.

    PubMed

    Maróti, Emese; Knakker, Balázs; Vidnyánszky, Zoltán; Weiss, Béla

    2017-02-01

    External periodic stimuli entrain brain oscillations and affect perception and attention. It has been shown that background music can change oculomotor behavior and facilitate detection of visual objects occurring on the musical beat. However, whether musical beats in different tempi modulate information sampling differently during natural viewing remains to be explored. Here we addressed this question by investigating how listening to naturalistic drum grooves in two different tempi affects the eye movements of participants viewing natural scenes on a computer screen. We found that the beat frequency of the drum grooves modulated the rate of eye movements: fixation durations were increased at the lower beat frequency (1.7 Hz) as compared to the higher beat frequency (2.4 Hz) and no-music conditions. Correspondingly, the estimated visual sampling frequency decreased as fixation durations increased at the lower beat frequency. These results imply that slow musical beats can retard the sampling of visual information during natural viewing by increasing fixation durations. Copyright © 2016 Elsevier Ltd. All rights reserved.
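The link between fixation duration and visual sampling frequency is essentially reciprocal (more time per fixation means fewer fixations per second), which can be illustrated with made-up fixation data:

```python
# Illustrative fixation durations (seconds) under two hypothetical conditions;
# longer fixations at the slower beat imply a lower visual sampling frequency.
fixations_slow_beat = [0.32, 0.35, 0.30, 0.33]   # e.g., during a 1.7 Hz groove
fixations_fast_beat = [0.24, 0.26, 0.25, 0.23]   # e.g., during a 2.4 Hz groove

def sampling_frequency(durations):
    """Approximate visual sampling rate as fixations per second of viewing."""
    return len(durations) / sum(durations)

print(sampling_frequency(fixations_slow_beat),
      sampling_frequency(fixations_fast_beat))
```

The numbers themselves are invented; the point is only the direction of the effect reported in the abstract.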

  9. Predictions penetrate perception: Converging insights from brain, behaviour and disorder

    PubMed Central

    O’Callaghan, Claire; Kveraga, Kestutis; Shine, James M; Adams, Reginald B.; Bar, Moshe

    2018-01-01

    It is argued that during ongoing visual perception, the brain is generating top-down predictions to facilitate, guide and constrain the processing of incoming sensory input. Here we demonstrate that these predictions are drawn from a diverse range of cognitive processes, in order to generate the richest and most informative prediction signals. This is consistent with a central role for cognitive penetrability in visual perception. We review behavioural and mechanistic evidence that indicate a wide spectrum of domains—including object recognition, contextual associations, cognitive biases and affective state—that can directly influence visual perception. We combine these insights from the healthy brain with novel observations from neuropsychiatric disorders involving visual hallucinations, which highlight the consequences of imbalance between top-down signals and incoming sensory information. Together, these lines of evidence converge to indicate that predictive penetration, be it cognitive, social or emotional, should be considered a fundamental framework that supports visual perception. PMID:27222169

  10. U.S. Geological Survey: A synopsis of Three-dimensional Modeling

    USGS Publications Warehouse

    Jacobsen, Linda J.; Glynn, Pierre D.; Phelps, Geoff A.; Orndorff, Randall C.; Bawden, Gerald W.; Grauch, V.J.S.

    2011-01-01

    The U.S. Geological Survey (USGS) is a multidisciplinary agency that provides assessments of natural resources (geological, hydrological, biological), the disturbances that affect those resources, and the disturbances that affect the built environment, natural landscapes, and human society. Until now, USGS map products have been generated and distributed primarily as 2-D maps, occasionally providing cross sections or overlays, but rarely allowing the ability to characterize and understand 3-D systems, how they change over time (4-D), and how they interact. And yet, technological advances in monitoring natural resources and the environment, the ever-increasing diversity of information needed for holistic assessments, and the intrinsic 3-D/4-D nature of the information obtained increases our need to generate, verify, analyze, interpret, confirm, store, and distribute its scientific information and products using 3-D/4-D visualization, analysis, modeling tools, and information frameworks. Today, USGS scientists use 3-D/4-D tools to (1) visualize and interpret geological information, (2) verify the data, and (3) verify their interpretations and models. 3-D/4-D visualization can be a powerful quality control tool in the analysis of large, multidimensional data sets. USGS scientists use 3-D/4-D technology for 3-D surface (i.e., 2.5-D) visualization as well as for 3-D volumetric analyses. Examples of geological mapping in 3-D include characterization of the subsurface for resource assessments, such as aquifer characterization in the central United States, and for input into process models, such as seismic hazards in the western United States.

  11. Grammatical Gender and Mental Representation of Object: The Case of Musical Instruments.

    PubMed

    Vuksanović, Jasmina; Bjekić, Jovana; Radivojević, Natalija

    2015-08-01

    A body of research shows that grammatical gender, although an arbitrary category, is treated as a system with its own meaning. However, the question remains to what extent grammatical gender shapes our notions about objects when both verbal and visual information are available. Two experiments were conducted. The results of Experiment 1 showed that grammatical gender, as a linguistic property of the pseudo-nouns used as names for musical instruments, significantly affects people's representations of these instruments. The purpose of Experiment 2 was to examine how the representation of musical instruments is shaped in the presence of both linguistic and visual information. The results indicate that when linguistic and visual information co-exist, concepts of the selected instruments are formed from all available information from both sources, suggesting that grammatical gender influences nonverbal concept formation but has no privileged status in the matter.

  12. Perceived object stability depends on multisensory estimates of gravity.

    PubMed

    Barnett-Cowan, Michael; Fleming, Roland W; Singh, Manish; Bülthoff, Heinrich H

    2011-04-27

    How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information. In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity). Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.
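Geometrically, the critical angle in such stimuli is the tilt at which the gravity-projected centre of mass passes over the support edge. For a simple rigid object pivoting on a table edge (a simplification of the rendered objects used in the study), this reduces to one arctangent:

```python
import math

def critical_angle_deg(com_height: float, com_to_pivot: float) -> float:
    """Tilt (degrees) at which the centre of mass lies vertically above the
    pivot edge; tilted further, the object falls rather than rights itself."""
    return math.degrees(math.atan2(com_to_pivot, com_height))

# A squat object (low COM, wide base) tolerates more tilt than a tall one.
print(critical_angle_deg(com_height=1.0, com_to_pivot=1.0))  # ~45 degrees
print(critical_angle_deg(com_height=2.0, com_to_pivot=0.5))
```

The study's finding is that observers' perceived critical angle is biased away from this physical value in the direction of their multisensory (body-tilt-dependent) estimate of gravity, not the purely visual vertical.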

  13. The impact of front-of-pack marketing attributes versus nutrition and health information on parents' food choices.

    PubMed

    Georgina Russell, Catherine; Burke, Paul F; Waller, David S; Wei, Edward

    2017-09-01

    Front-of-pack attributes have the potential to affect parents' food choices on behalf of their children and form one avenue through which strategies to address the obesogenic environment can be developed. Previous work has focused on the isolated effects of nutrition and health information (e.g. labeling systems, health claims), and how parents trade off this information against co-occurring marketing features (e.g. product imagery, cartoons) is unclear. A Discrete Choice Experiment was utilized to understand how front-of-pack nutrition, health and marketing attributes, as well as pricing, influenced parents' choices of cereal for their child. Packages varied with respect to the two elements of the Australian Health Star Rating system (stars and nutrient facts panel), along with written claims, product visuals, additional visuals, and price. A total of 520 parents (53% male) with a child aged between five and eleven years were recruited via an online panel company and completed the survey. Product visuals, followed by star ratings, were found to be the most significant attributes in driving choice, while written claims and other visuals were the least significant. Use of the Health Star Rating (HSR) system and other features were related to the child's fussiness level and parents' concerns about their child's weight with parents of fussy children, in particular, being less influenced by the HSR star information and price. The findings suggest that front-of-pack health labeling systems can affect choice when parents trade this information off against marketing attributes, yet some marketing attributes can be more influential, and not all parents utilize this information in the same way. Copyright © 2017. Published by Elsevier Ltd.
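Choices in a Discrete Choice Experiment like this are commonly modelled with a multinomial logit, in which each front-of-pack attribute level contributes a part-worth utility and choice probabilities follow a softmax over total utilities. A toy version (the attribute levels and part-worth values are invented for illustration, not the study's estimates):

```python
import math

# Hypothetical part-worth utilities for a few front-of-pack attribute levels.
PART_WORTHS = {
    "star_rating": {2.0: -0.4, 3.5: 0.2, 5.0: 0.7},
    "visual":      {"plain": 0.0, "product_photo": 0.6},
    "price":       {3.0: 0.3, 5.0: -0.3},
}

def utility(profile):
    """Total utility of a package profile: sum of its attribute part-worths."""
    return sum(PART_WORTHS[attr][level] for attr, level in profile.items())

def choice_probabilities(profiles):
    """Multinomial logit: softmax over the summed part-worth utilities."""
    exp_u = [math.exp(utility(p)) for p in profiles]
    total = sum(exp_u)
    return [e / total for e in exp_u]

cereals = [
    {"star_rating": 5.0, "visual": "product_photo", "price": 5.0},
    {"star_rating": 2.0, "visual": "plain", "price": 3.0},
]
probs = choice_probabilities(cereals)
print([round(p, 3) for p in probs])
```

Fitting such part-worths to observed choices is what lets a DCE rank attributes (here, product visuals and star ratings dominating written claims) by their influence on choice.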

  14. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  15. Early Sign Language Experience Goes Along with an Increased Cross-modal Gain for Affective Prosodic Recognition in Congenitally Deaf CI Users.

    PubMed

    Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte

    2018-04-01

    It is yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history in sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and rely more on the facial cues of audio-visual emotional stimuli. Two groups of young adult CD CI users-early signers (ES CI users; n = 11) and late signers (LS CI users; n = 10)-and a group of hearing, non-signing, age-matched controls (n = 12) performed an emotion recognition task with auditory, visual, and cross-modal emotionally congruent and incongruent speech stimuli. On different trials, participants categorized either the facial or the vocal expressions. The ES CI users more accurately recognized affective prosody than the LS CI users in the presence of congruent facial information. Furthermore, the ES CI users, but not the LS CI users, gained more than the controls from congruent visual stimuli when recognizing affective prosody. Both CI groups performed overall worse than the controls in recognizing affective prosody. These results suggest that early sign language experience affects multisensory emotion perception in CD CI users.

  16. Additive effects of emotional content and spatial selective attention on electrocortical facilitation.

    PubMed

    Keil, Andreas; Moratti, Stephan; Sabatinelli, Dean; Bradley, Margaret M; Lang, Peter J

    2005-08-01

    Affectively arousing visual stimuli have been suggested to automatically attract attentional resources in order to optimize sensory processing. The present study crosses the factors of spatial selective attention and affective content, and examines the relationship between instructed (spatial) and automatic attention to affective stimuli. In addition to response times and error rate, electroencephalographic data from 129 electrodes were recorded during a covert spatial attention task. This task required silent counting of random-dot targets embedded in a 10 Hz flicker of colored pictures presented to both hemifields. Steady-state visual evoked potentials (ssVEPs) were obtained to determine amplitude and phase of electrocortical responses to pictures. An increase of ssVEP amplitude was observed as an additive function of spatial attention and emotional content. Statistical parametric mapping of this effect indicated occipito-temporal and parietal cortex activation contralateral to the attended visual hemifield in ssVEP amplitude modulation. This difference was most pronounced during selection of the left visual hemifield, at right temporal electrodes. In line with this finding, phase information revealed accelerated processing of aversive arousing, compared to affectively neutral pictures. The data suggest that affective stimulus properties modulate the spatiotemporal process along the ventral stream, encompassing amplitude amplification and timing changes of posterior and temporal cortex.
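A standard way to obtain ssVEP amplitude and phase at the 10 Hz driving frequency is to read off the Fourier coefficient at that frequency; this is a generic sketch of that step, not the study's analysis pipeline:

```python
import numpy as np

def ssvep_amp_phase(signal, fs, f_drive=10.0):
    """Amplitude and phase of the steady-state response at the driving
    frequency, taken from the Fourier coefficient nearest f_drive."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(signal) / n
    k = np.argmin(np.abs(freqs - f_drive))
    coef = spec[k]
    return 2 * np.abs(coef), np.angle(coef)

fs = 500  # Hz; illustrative sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic "EEG": a pure 10 Hz response of amplitude 3 and phase 0.5 rad.
sig = 3.0 * np.cos(2 * np.pi * 10.0 * t + 0.5)
amp, phase = ssvep_amp_phase(sig, fs)
```

Amplitude differences between conditions index response magnitude, while phase differences index relative processing speed, as in the accelerated processing of aversive pictures reported above.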

  17. Postural and Spatial Orientation Driven by Virtual Reality

    PubMed Central

    Keshner, Emily A.; Kenyon, Robert V.

    2009-01-01

    Orientation in space is a perceptual variable intimately related to postural orientation that relies on visual and vestibular signals to correctly identify our position relative to vertical. We have combined a virtual environment with motion of a posture platform to produce visual-vestibular conditions that allow us to explore how motion of the visual environment may affect perception of vertical and, consequently, affect postural stabilizing responses. In order to involve a higher level perceptual process, we needed to create a visual environment that was immersive. We did this by developing visual scenes that possess contextual information using color, texture, and 3-dimensional structures. Update latency of the visual scene was close to physiological latencies of the vestibulo-ocular reflex. Using this system we found that even when healthy young adults stand and walk on a stable support surface, they are unable to ignore wide field of view visual motion and they adapt their postural orientation to the parameters of the visual motion. Balance training within our environment elicited measurable rehabilitation outcomes. Thus we believe that virtual environments can serve as a clinical tool for evaluation and training of movement in situations that closely reflect conditions found in the physical world. PMID:19592796

  18. Prestimulus neural oscillations inhibit visual perception via modulation of response gain.

    PubMed

    Chaumon, Maximilien; Busch, Niko A

    2014-11-01

    The ongoing state of the brain radically affects how it processes sensory information. How does this ongoing brain activity interact with the processing of external stimuli? Spontaneous oscillations in the alpha range are thought to inhibit sensory processing, but little is known about the psychophysical mechanisms of this inhibition. We recorded ongoing brain activity with EEG while human observers performed a visual detection task with stimuli of different contrast intensities. To move beyond qualitative description, we formally compared psychometric functions obtained under different levels of ongoing alpha power and evaluated the inhibitory effect of ongoing alpha oscillations in terms of contrast or response gain models. This procedure opens the way to understanding the actual functional mechanisms by which ongoing brain activity affects visual performance. We found that strong prestimulus occipital alpha oscillations-but not more anterior mu oscillations-reduce performance most strongly for stimuli of the highest intensities tested. This inhibitory effect is best explained by a divisive reduction of response gain. Ongoing occipital alpha oscillations thus reflect changes in the visual system's input/output transformation that are independent of the sensory input to the system. They selectively scale the system's response, rather than change its sensitivity to sensory information.
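The contrast versus response gain comparison is commonly formalized with a hyperbolic-ratio (Naka-Rushton) contrast response function; the sketch below, with illustrative parameter values, shows why a divisive reduction of response gain hits the highest contrasts hardest:

```python
def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0):
    """Hyperbolic-ratio contrast response: R(c) = Rmax * c^n / (c^n + c50^n)."""
    return r_max * c ** n / (c ** n + c50 ** n)

def response_gain_model(c, g):
    """Inhibition as divisive scaling of the response (Rmax * g)."""
    return g * naka_rushton(c)

def contrast_gain_model(c, g):
    """Inhibition as scaling of the effective input contrast (c * g)."""
    return naka_rushton(g * c)

# Response gain shrinks the curve most where it is highest; contrast gain
# shifts the curve laterally, with little effect once the response saturates.
hi = 1.0
loss_rg = naka_rushton(hi) - response_gain_model(hi, 0.7)
loss_cg = naka_rushton(hi) - contrast_gain_model(hi, 0.7)
```

The finding that alpha power reduced performance most for the highest-intensity stimuli is the signature of the response gain case.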

  19. The visual impact of gossip.

    PubMed

    Anderson, Eric; Siegel, Erika H; Bliss-Moreau, Eliza; Barrett, Lisa Feldman

    2011-06-17

    Gossip is a form of affective information about who is friend and who is foe. We show that gossip does not influence only how a face is evaluated--it affects whether a face is seen in the first place. In two experiments, neutral faces were paired with negative, positive, or neutral gossip and were then presented alone in a binocular rivalry paradigm (faces were presented to one eye, houses to the other). In both studies, faces previously paired with negative (but not positive or neutral) gossip dominated longer in visual consciousness. These findings demonstrate that gossip, as a potent form of social affective learning, can influence vision in a completely top-down manner, independent of the basic structural features of a face.

  20. Saliency affects feedforward more than feedback processing in early visual cortex.

    PubMed

    Emmanouil, Tatiana Aloi; Avigan, Philip; Persuh, Marjan; Ro, Tony

    2013-07-01

Early visual cortex activity is influenced by both bottom-up and top-down factors. To investigate the influences of bottom-up (saliency) and top-down (task) factors on different stages of visual processing, we used transcranial magnetic stimulation (TMS) of areas V1/V2 to induce visual suppression at varying temporal intervals. Subjects were asked to detect and discriminate the color or the orientation of briefly presented small lines that varied in color saliency, based on color contrast with the surround. Regardless of task, color saliency modulated the magnitude of TMS-induced visual suppression, especially at earlier temporal processing intervals that reflect the feedforward stage of visual processing in V1/V2. In a second experiment we found that our color saliency effects were also influenced by an inherent advantage of the color red relative to other hues, and that color discrimination difficulty did not affect visual suppression. These results support the notion that early visual processing is stimulus driven and that feedforward and feedback processing encode different types of information about visual scenes. They further suggest that certain hues can be prioritized over others within our visual systems by being more robustly represented during early temporal processing intervals. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Recognition of visual stimuli and memory for spatial context in schizophrenic patients and healthy volunteers.

    PubMed

    Brébion, Gildas; David, Anthony S; Pilowsky, Lyn S; Jones, Hugh

    2004-11-01

Verbal and visual recognition tasks were administered to 40 patients with schizophrenia and 40 healthy comparison subjects. The verbal recognition task consisted of discriminating between 16 target words and 16 new words. The visual recognition task consisted of discriminating between 16 target pictures (8 black-and-white and 8 color) and 16 new pictures (8 black-and-white and 8 color). Visual recognition was followed by a spatial context discrimination task in which subjects were required to remember the spatial location of the target pictures at encoding. Results showed that the recognition deficit in patients was similar for verbal and visual material. In both schizophrenic and healthy groups, men, but not women, obtained better recognition scores for the colored than for the black-and-white pictures. However, men and women similarly benefited from color to reduce spatial context discrimination errors. Patients showed a significant deficit in remembering the spatial location of the pictures, independently of accuracy in remembering the pictures themselves. These data suggest that patients are impaired in the amount of visual information that they can encode. With regard to the perceptual attributes of the stimuli, memory for spatial information appears to be affected, but not processing of color information.

  2. Developmental dyslexia and vision

    PubMed Central

    Quercia, Patrick; Feiss, Léonard; Michel, Carine

    2013-01-01

Developmental dyslexia affects almost 10% of school-aged children and represents a significant public health problem. Its etiology is unknown. The consistent presence of phonological difficulties combined with an inability to manipulate language sounds and the grapheme–phoneme conversion is widely acknowledged. Numerous scientific studies have also documented the presence of eye movement anomalies and deficits of perception of low contrast, low spatial frequency, and high frequency temporal visual information in dyslexics. Anomalies of visual attention with short visual attention spans have also been demonstrated in a large number of cases. Spatial orientation is also affected in dyslexics, who manifest a preference for spatial attention to the right. This asymmetry may be so pronounced that it leads to a veritable neglect of space on the left side. The evaluation of treatments proposed to dyslexics, whether speech-based or oriented towards the visual anomalies, remains fragmentary. The advent of new explanatory theories, notably cerebellar, magnocellular, or proprioceptive, is an incentive for ophthalmologists to enter the world of multimodal cognition given the importance of the eye’s visual input. PMID:23690677

  3. Macroscopic brain dynamics during verbal and pictorial processing of affective stimuli.

    PubMed

    Keil, Andreas

    2006-01-01

    Emotions can be viewed as action dispositions, preparing an individual to act efficiently and successfully in situations of behavioral relevance. To initiate optimized behavior, it is essential to accurately process the perceptual elements indicative of emotional relevance. The present chapter discusses effects of affective content on neural and behavioral parameters of perception, across different information channels. Electrocortical data are presented from studies examining affective perception with pictures and words in different task contexts. As a main result, these data suggest that sensory facilitation has an important role in affective processing. Affective pictures appear to facilitate perception as a function of emotional arousal at multiple levels of visual analysis. If the discrimination between affectively arousing vs. nonarousing content relies on fine-grained differences, amplification of the cortical representation may occur as early as 60-90 ms after stimulus onset. Affectively arousing information as conveyed via visual verbal channels was not subject to such very early enhancement. However, electrocortical indices of lexical access and/or activation of semantic networks showed that affectively arousing content may enhance the formation of semantic representations during word encoding. It can be concluded that affective arousal is associated with activation of widespread networks, which act to optimize sensory processing. On the basis of prioritized sensory analysis for affectively relevant stimuli, subsequent steps such as working memory, motor preparation, and action may be adjusted to meet the adaptive requirements of the situation perceived.

  4. A color fusion method of infrared and low-light-level images based on visual perception

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

Color fusion images can be obtained by fusing infrared and low-light-level images, and contain information from both sources. Such fusion images help observers understand multichannel imagery comprehensively. However, simple fusion may lose target information, because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is applied blindly, the perception of the scene information will be seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. The infrared and low-light-level color fusion images are achieved through efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm not only improve the detection rate of targets, but also retain rich natural information of the scenes.
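For reference, a minimal false-color fusion baseline (not the paper's target-learning method) simply routes the two channels into an RGB palette:

```python
import numpy as np

def naive_color_fusion(lowlight, infrared):
    """Map the low-light channel to green and the infrared channel to red,
    filling blue with their clipped difference -- a crude fixed palette,
    shown only as the kind of baseline the paper improves on."""
    ll = lowlight.astype(float) / 255.0
    ir = infrared.astype(float) / 255.0
    rgb = np.stack([ir, ll, np.clip(ll - ir, 0.0, 1.0)], axis=-1)
    return (rgb * 255).round().astype(np.uint8)

night = np.full((2, 2), 200, dtype=np.uint8)  # bright low-light scene
heat = np.zeros((2, 2), dtype=np.uint8)       # cold scene, no IR targets
fused = naive_color_fusion(night, heat)
```

Because such fixed mappings treat every pixel alike, inconspicuous targets stay inconspicuous, which motivates the target-aware fusion the paper proposes.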

  5. Auditory perception in the aging brain: the role of inhibition and facilitation in early processing.

    PubMed

    Stothart, George; Kazanina, Nina

    2016-11-01

    Aging affects the interplay between peripheral and cortical auditory processing. Previous studies have demonstrated that older adults are less able to regulate afferent sensory information and are more sensitive to distracting information. Using auditory event-related potentials we investigated the role of cortical inhibition on auditory and audiovisual processing in younger and older adults. Across puretone, auditory and audiovisual speech paradigms older adults showed a consistent pattern of inhibitory deficits, manifested as increased P50 and/or N1 amplitudes and an absent or significantly reduced N2. Older adults were still able to use congruent visual articulatory information to aid auditory processing but appeared to require greater neural effort to resolve conflicts generated by incongruent visual information. In combination, the results provide support for the Inhibitory Deficit Hypothesis of aging. They extend previous findings into the audiovisual domain and highlight older adults' ability to benefit from congruent visual information during speech processing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Parts-based stereoscopic image assessment by learning binocular manifold color visual properties

    NASA Astrophysics Data System (ADS)

    Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi

    2016-11-01

Existing stereoscopic image quality assessment (SIQA) methods are mostly based on the luminance information, in which color information is not sufficiently considered. Actually, color is part of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than the state-of-the-art SIQA methods.
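The parts-based representation underlying the method comes from NMF, which factors a nonnegative data matrix V into nonnegative factors W and H; this plain multiplicative-update sketch omits the paper's manifold regularization and color-feature specifics:

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates: V ~ W @ H with all
    entries nonnegative, which yields a parts-based representation."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy nonnegative data standing in for color feature patches.
V = np.random.default_rng(1).random((20, 12))
W, H = nmf(V, k=4)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The nonnegativity constraint is what makes the learned basis vectors behave like additive "parts", the property the paper exploits for localized color features.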

  7. Coherent modulation of stimulus colour can affect visually induced self-motion perception.

    PubMed

    Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji

    2010-01-01

The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. Colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the type of colour modulation; both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that an observer's perception of illumination is critical for self-motion perception, and that rapid variation of perceived illumination impairs the reliability of visual information in determining self-motion.

  8. The Effect of Verbal Contextual Information in Processing Visual Art.

    ERIC Educational Resources Information Center

    Koroscik, Judith S.; And Others

    1985-01-01

    Verbal contextual information affected photography and nonphotography students' performance on semantic retention tests. For example, correct titles aided the formation and retention of accurate memories, while erroneous titles misled students into remembering meanings that had relatively little to do with what was actually pictured in the…

  9. The Influence of Textbook Format on Postsecondary Proficient and Remedial Readers: Designing Information Using Visual Language

    ERIC Educational Resources Information Center

    Tetlan, W. Lou

    2009-01-01

    This study examined whether the design of textbook material affects comprehension and memory of textbook material under certain cognitive conditions for proficient and remedial readers. Using quantitative and qualitative research methods, format was found to significantly affect comprehension and memory. Proficient Male scored significantly…

  10. Bimodal emotion congruency is critical to preverbal infants' abstract rule learning.

    PubMed

    Tsui, Angeline Sin Mei; Ma, Yuen Ki; Ho, Anna; Chow, Hiu Mei; Tseng, Chia-huei

    2016-05-01

Extracting general rules from specific examples is important, as we must often face the same challenge presented in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle-triangle-circle) or auditory presentation of the syllables (la-ba-la) alone. However, the mechanisms and constraints for this bimodal learning facilitation are still unknown. In this study, we used audio-visual relation congruency between bimodal stimulation to disentangle possible facilitation sources. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning is not only dependent on better statistical probability and redundant sensory information, but also on the relational congruency of audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ. © 2015 John Wiley & Sons Ltd.

  11. Early, but not late visual distractors affect movement synchronization to a temporal-spatial visual cue.

    PubMed

    Booth, Ashley J; Elliott, Mark T

    2015-01-01

    The ease of synchronizing movements to a rhythmic cue is dependent on the modality of the cue presentation: timing accuracy is much higher when synchronizing with discrete auditory rhythms than an equivalent visual stimulus presented through flashes. However, timing accuracy is improved if the visual cue presents spatial as well as temporal information (e.g., a dot following an oscillatory trajectory). Similarly, when synchronizing with an auditory target metronome in the presence of a second visual distracting metronome, the distraction is stronger when the visual cue contains spatial-temporal information rather than temporal only. The present study investigates individuals' ability to synchronize movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli impacted on maintaining synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centered on a large projection screen. The target dot was surrounded by 2, 8, or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100, or 200 ms. We found participants' timing performance was only affected in the phase-lead conditions and when there were large numbers of distractors present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronize movements. Subsequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.

  12. Semantic information mediates visual attention during spoken word recognition in Chinese: Evidence from the printed-word version of the visual-world paradigm.

    PubMed

    Shen, Wei; Qu, Qingqing; Li, Xingshan

    2016-07-01

    In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate visual attention's deployment to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention, it also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm.

  13. Basic abnormalities in visual processing affect face processing at an early age in autism spectrum disorder.

    PubMed

    Vlamings, Petra Hendrika Johanna Maria; Jonkman, Lisa Marthe; van Daalen, Emma; van der Gaag, Rutger Jan; Kemner, Chantal

    2010-12-15

    A detailed visual processing style has been noted in autism spectrum disorder (ASD); this contributes to problems in face processing and has been directly related to abnormal processing of spatial frequencies (SFs). Little is known about the early development of face processing in ASD and the relation with abnormal SF processing. We investigated whether young ASD children show abnormalities in low spatial frequency (LSF, global) and high spatial frequency (HSF, detailed) processing and explored whether these are crucially involved in the early development of face processing. Three- to 4-year-old children with ASD (n = 22) were compared with developmentally delayed children without ASD (n = 17). Spatial frequency processing was studied by recording visual evoked potentials from visual brain areas while children passively viewed gratings (HSF/LSF). In addition, children watched face stimuli with different expressions, filtered to include only HSF or LSF. Enhanced activity in visual brain areas was found in response to HSF versus LSF information in children with ASD, in contrast to control subjects. Furthermore, facial-expression processing was also primarily driven by detail in ASD. Enhanced visual processing of detailed (HSF) information is present early in ASD and occurs for neutral (gratings), as well as for socially relevant stimuli (facial expressions). These data indicate that there is a general abnormality in visual SF processing in early ASD and are in agreement with suggestions that a fast LSF subcortical face processing route might be affected in ASD. This could suggest that abnormal visual processing is causative in the development of social problems in ASD. Copyright © 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
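The LSF/HSF stimulus manipulation described above amounts to splitting an image into low- and high-spatial-frequency bands; below is a generic Fourier-domain sketch (the cutoff value is illustrative, not the study's filter settings):

```python
import numpy as np

def split_spatial_frequencies(img, cutoff=0.05):
    """Split an image into LSF and HSF components using a Gaussian
    low-pass in the Fourier domain; cutoff is in cycles per pixel."""
    f = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    lowpass = np.exp(-(radius / cutoff) ** 2)
    lsf = np.real(np.fft.ifft2(f * lowpass))
    hsf = img - lsf  # by construction, the two bands sum to the original
    return lsf, hsf

rng = np.random.default_rng(0)
face = rng.random((32, 32))  # stand-in for a grayscale face image
lsf, hsf = split_spatial_frequencies(face)
```

LSF-filtered faces preserve the global configuration while HSF-filtered faces preserve fine detail, which is what lets the study probe the two processing routes separately.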

  14. Effects of aging on whole body and segmental control while obstacle crossing under impaired sensory conditions.

    PubMed

    Novak, Alison C; Deshpande, Nandini

    2014-06-01

The ability to safely negotiate obstacles is an important component of independent mobility, requiring adaptive locomotor responses to maintain dynamic balance. This study examined the effects of aging and visual-vestibular interactions on whole-body and segmental control during obstacle crossing. Twelve young and 15 older adults walked along a straight pathway and stepped over one obstacle placed in their path. The task was completed under 4 conditions which included intact or blurred vision, and intact or perturbed vestibular information using galvanic vestibular stimulation (GVS). Global task performance significantly increased under suboptimal vision conditions. Vision also significantly influenced medial-lateral center of mass displacement, irrespective of age and GVS. Older adults demonstrated significantly greater trunk pitch and head roll angles under suboptimal vision conditions. Similar to whole-body control, no GVS effect was found for any measures of segmental control. The results indicate a significant reliance on visual but not vestibular information for locomotor control during obstacle crossing. The lack of differences in GVS effects suggests that vestibular information is not up-regulated for obstacle avoidance, and this is not differentially affected by aging. In older adults, insufficient visual input appears to affect the ability to minimize anterior-posterior trunk movement despite a slower obstacle crossing time and walking speed. Combined with larger medial-lateral deviation of the body COM under insufficient visual information, the older adults may be at a greater risk of imbalance or inability to recover from a possible trip when stepping over an obstacle. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Effects of complete monocular deprivation in visuo-spatial memory.

    PubMed

    Cattaneo, Zaira; Merabet, Lotfi B; Bhatt, Ela; Vecchi, Tomaso

    2008-09-30

    Monocular deprivation has been associated with both specific deficits and enhancements in visual perception and processing. In this study, performance on a visuo-spatial memory task was compared in congenitally monocular individuals and sighted control individuals viewing monocularly (i.e., patched) and binocularly. The task required the individuals to view and memorize a series of target locations on two-dimensional matrices. Overall, congenitally monocular individuals performed worse than sighted individuals (with a specific deficit in simultaneously maintaining distinct spatial representations in memory), indicating that the lack of binocular visual experience affects the way visual information is represented in visuo-spatial memory. No difference was observed between the monocular and binocular viewing control groups, suggesting that early monocular deprivation affects the development of cortical mechanisms mediating visuo-spatial cognition.

  16. Visual attention and emotional reactions to negative stimuli: The role of age and cognitive reappraisal.

    PubMed

    Wirth, Maria; Isaacowitz, Derek M; Kunzmann, Ute

    2017-09-01

    Prominent life span theories of emotion propose that older adults attend less to negative emotional information, and report less negative emotional reactions to the same information, than younger adults do. Although parallel age differences in affective information processing and in emotional reactivity have been proposed, they have rarely been investigated within the same study. In this eye-tracking study, we tested age differences in visual attention and emotional reactivity using standardized emotionally negative stimuli. Additionally, we investigated age differences in the association between visual attention and emotional reactivity, and whether these are moderated by cognitive reappraisal. Older adults, compared with younger adults, showed fixation patterns directed away from negative image content, yet reacted with greater negative emotion. The association between visual attention and emotional reactivity differed by age group and by positive reappraisal. Younger adults felt better when they attended more to negative content rather than less, but this relationship held only for younger adults who did not attach a positive meaning to the negative situation. For older adults, overall, there was no significant association between visual attention and emotional reactivity. However, for older adults who did not use positive reappraisal, decreases in attention to negative information were associated with less negative emotions. The present findings point to a complex relationship between younger and older adults' visual attention and their emotional reactions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Regions of mid-level human visual cortex sensitive to the global coherence of local image patches.

    PubMed

    Mannion, Damien J; Kersten, Daniel J; Olman, Cheryl A

    2014-08-01

    The global structural arrangement and spatial layout of the visual environment must be derived from the integration of local signals represented in the lower tiers of the visual system. This interaction between the spatially local and global properties of visual stimulation underlies many of our visual capacities, and how this is achieved in the brain is a central question for visual and cognitive neuroscience. Here, we examine the sensitivity of regions of the posterior human brain to the global coordination of spatially displaced naturalistic image patches. We presented observers with image patches in two circular apertures to the left and right of central fixation, with the patches drawn from either the same (coherent condition) or different (noncoherent condition) extended image. Using fMRI at 7T (n = 5), we find that global coherence affected signal amplitude in regions of dorsal mid-level cortex. Furthermore, we find that extensive regions of mid-level visual cortex contained information in their local activity pattern that could discriminate coherent and noncoherent stimuli. These findings indicate that the global coordination of local naturalistic image information has important consequences for the processing in human mid-level visual cortex.

  18. Investigating the Impact of Cognitive Style on Multimedia Learners' Understanding and Visual Search Patterns: An Eye-Tracking Approach

    ERIC Educational Resources Information Center

    Liu, Han-Chin

    2018-01-01

    Multimedia students' dependence on information from the outside world can have an impact on their ability to identify and locate information from multiple resources in learning environments and thereby affect the construction of mental models. Field dependence-independence has been used to assess the ability to extract essential information from…

  19. The Visual Impact of Gossip

    PubMed Central

    Anderson, Eric; Siegel, Erika H.; Bliss-Moreau, Eliza; Barrett, Lisa Feldman

    2011-01-01

    Gossip is a form of affective information about who is friend and who is foe. We show that gossip does not impact only how a face is evaluated—it affects whether a face is seen in the first place. In two experiments, neutral faces were paired with negative, positive, or neutral gossip and were then presented alone in a binocular rivalry paradigm (faces were presented to one eye, houses to the other). In both studies, faces previously paired with negative (but not positive or neutral) gossip dominated longer in visual consciousness. These findings demonstrate that gossip, as a potent form of social affective learning, can influence vision in a completely top-down manner, independent of the basic structural features of a face. PMID:21596956

  20. [Information processing speed and influential factors in multiple sclerosis].

    PubMed

    Zhang, M L; Xu, E H; Dong, H Q; Zhang, J W

    2016-04-19

    To study the information processing speed and its influential factors in multiple sclerosis (MS) patients. A total of 36 patients with relapsing-remitting MS (RRMS), 21 patients with secondary progressive MS (SPMS), and 50 healthy control subjects from Xuanwu Hospital of Capital Medical University between April 2010 and April 2012 were included in this cross-sectional study. Neuropsychological tests were conducted after the disease had been stable for 8 weeks, covering information processing speed, memory, executive functions, language, and visual perception. Correlations between information processing speed and depression, fatigue, and Expanded Disability Status Scale (EDSS) scores were examined. (1) The MS patient groups demonstrated cognitive deficits compared to healthy controls. The Symbol Digit Modalities Test (SDMT) (control group 57±12; RRMS group 46±17; SPMS group 35±10, P<0.05) and the Paced Auditory Serial Addition Task (PASAT) (control group 85±18; RRMS group 77±20; SPMS group 57±20, P<0.05) were the most impaired. SPMS patients were more affected than patients with the RRMS subtype, and these differences were attenuated after controlling for physical disability level as measured by the EDSS scores. MS patients, especially the SPMS subtype, were more severely impaired than the control group in the verbal learning test, verbal fluency, and Stroop C test planning time, whereas visual-spatial function and visual memory were relatively preserved. (2) According to Pearson univariate correlation analysis, age, depression, EDSS scores, and fatigue were correlated with PASAT and SDMT performance (r = -0.41 to -0.61, P<0.05). Depression significantly affected the speed of information processing (P<0.05).
    Impairments of information processing speed, verbal memory, and executive functioning are seen in MS patients, especially in the SPMS subtype, whereas visual-spatial function is relatively preserved. Age, white matter change scores, EDSS scores, and depression are negatively associated with information processing speed.
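    The record above reports Pearson univariate correlations between processing-speed scores (SDMT, PASAT) and variables such as age and EDSS. As a minimal sketch of how such a coefficient is computed, the numbers below are fabricated for illustration and are not the study's data:

    ```python
    import math

    def pearson_r(xs, ys):
        """Pearson product-moment correlation between two equal-length samples."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Fabricated example: higher disability (EDSS) paired with lower SDMT scores
    edss = [1.0, 2.0, 3.5, 4.0, 6.0, 6.5]
    sdmt = [58, 52, 47, 44, 36, 31]
    print(round(pearson_r(edss, sdmt), 2))  # → -0.99
    ```

    A strongly negative r, as here, is the pattern the abstract describes: processing speed falls as disability, age, depression, and fatigue rise.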

  1. On the effects of multimodal information integration in multitasking.

    PubMed

    Stock, Ann-Kathrin; Gohil, Krutika; Huster, René J; Beste, Christian

    2017-07-07

    There have recently been considerable advances in our understanding of the neuronal mechanisms underlying multitasking, but the role of multimodal integration for this faculty has remained rather unclear. We examined this issue by comparing different modality combinations in a multitasking (stop-change) paradigm. In-depth neurophysiological analyses of event-related potentials (ERPs) were conducted to complement the obtained behavioral data. Specifically, we applied signal decomposition using second order blind identification (SOBI) to the multi-subject ERP data and source localization. We found that both general multimodal information integration and modality-specific aspects (potentially related to task difficulty) modulate behavioral performance and associated neurophysiological correlates. Simultaneous multimodal input generally increased early attentional processing of visual stimuli (i.e. P1 and N1 amplitudes) as well as measures of cognitive effort and conflict (i.e. central P3 amplitudes). Yet, tactile-visual input caused larger impairments in multitasking than audio-visual input. General aspects of multimodal information integration modulated the activity in the premotor cortex (BA 6) as well as different visual association areas concerned with the integration of visual information with input from other modalities (BA 19, BA 21, BA 37). On top of this, differences in the specific combination of modalities also affected performance and measures of conflict/effort originating in prefrontal regions (BA 6).

  2. Location and orientation of panel on the screen as a structural visual element to highlight text displayed

    NASA Astrophysics Data System (ADS)

    Léger, Laure; Chevalier, Aline

    2017-07-01

    Searching for information on the internet has become a daily activity. It is a complex cognitive activity that involves visual attention. Many studies have demonstrated that users' information searches are affected both by the spatial configuration of words and by the elements displayed on the screen: elements that are used to structure web pages. One of these elements, the web panel, contains information. A web panel is a rectangular area with a colored background, used to highlight the content presented within it. Our general hypothesis was that the presence of a panel on a web page would affect the structure of the word display and, as a result, information search accuracy. We carried out an experiment in which we manipulated the presence vs. absence of a panel, as well as its orientation on the screen (vertical vs. horizontal). Twenty participants were asked to answer questions while their eye movements were recorded. Results showed that the presence of a panel resulted in reduced accuracy and shorter response times. Panel orientation affected scanpaths, especially when panels were oriented vertically. We discuss these findings and suggest ways in which this research could be developed further in future work.

  3. Location perception: the X-Files parable.

    PubMed

    Prinzmetal, William

    2005-01-01

    Three aspects of visual object location were investigated: (1) how the visual system integrates information for locating objects, (2) how attention operates to affect location perception, and (3) how the visual system deals with locating an object when multiple objects are present. The theories were described in terms of a parable (the X-Files parable). Then, computer simulations were developed. Finally, predictions derived from the simulations were tested. In the scenario described in the parable, we ask how a system of detectors might locate an alien spaceship, how attention might be implemented in such a spaceship detection system, and how the presence of one spaceship might influence the location perception of another alien spaceship. Experiment 1 demonstrated that location information is integrated with a spatial average rule. In Experiment 2, this rule was applied to a more-samples theory of attention. Experiment 3 demonstrated how the integration rule could account for various visual illusions.
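    The "spatial average rule" and the "more-samples" account of attention described above can be illustrated with a toy simulation. This is a hypothetical sketch, not Prinzmetal's actual model: detector readings are drawn as Gaussian noise around the true location, attention is modeled simply as averaging more samples, and all numbers are arbitrary.

    ```python
    import random

    def perceived_location(true_pos, noise_sd, n_samples, rng):
        """Integrate noisy detector readings by spatial averaging.
        Attention as 'more samples': a larger n_samples gives a steadier estimate."""
        samples = [rng.gauss(true_pos, noise_sd) for _ in range(n_samples)]
        return sum(samples) / len(samples)

    def spread(xs):
        """Standard deviation of a list of location estimates."""
        m = sum(xs) / len(xs)
        return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

    rng = random.Random(42)
    unattended = [perceived_location(0.0, 1.0, 4, rng) for _ in range(2000)]
    attended = [perceived_location(0.0, 1.0, 16, rng) for _ in range(2000)]

    # Attended locations are estimated more precisely (smaller trial-to-trial spread)
    print(spread(attended) < spread(unattended))
    ```

    The averaging rule predicts that precision grows with the number of samples integrated, which is the mechanism the parable attributes to attention.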

  4. Location cue validity affects inhibition of return of visual processing.

    PubMed

    Wright, R D; Richard, C M

    2000-01-01

    Inhibition-of-return is the process by which visual search for an object positioned among others is biased toward novel rather than previously inspected items. It is thought to occur automatically and to increase search efficiency. We examined this phenomenon by studying the facilitative and inhibitory effects of location cueing on target-detection response times in a search task. The results indicated that facilitation was a reflexive consequence of cueing whereas inhibition appeared to depend on cue informativeness. More specifically, the inhibition-of-return effect occurred only when the cue provided no information about the impending target's location. We suggest that the results are consistent with the notion of two levels of visual processing. The first involves rapid and reflexive operations that underlie the facilitative effects of location cueing on target detection. The second involves a rapid but goal-driven inhibition procedure that the perceiver can invoke if doing so will enhance visual search performance.

  5. Visual function, driving safety, and the elderly.

    PubMed

    Keltner, J L; Johnson, C A

    1987-09-01

    The authors have conducted a survey of the Departments of Motor Vehicles in all 50 states, the District of Columbia, and Puerto Rico requesting information about the visual standards, accidents, and conviction rates for different age groups. In addition, we have reviewed the literature on visual function and traffic safety. Elderly drivers have a greater number of vision problems that affect visual acuity and/or peripheral visual fields. Although the elderly are responsible for a small percentage of the total number of traffic accidents, the types of accidents they are involved in (e.g., failure to yield the right-of-way, intersection collisions, left turns onto crossing streets) may be related to peripheral and central visual field problems. Because age-related changes in performance occur at different rates for various individuals, licensing of the elderly driver should be based on functional abilities rather than age. Based on information currently available, we can make the following recommendations: (1) periodic evaluations of visual acuity and visual fields should be performed every 1 to 2 years in the population over age 65; (2) drivers of any age with multiple accidents or moving violations should have visual acuity and visual fields evaluated; and (3) a system should be developed for physicians to report patients with potentially unsafe visual function. The authors believe that these recommendations may help to reduce the number of traffic accidents that result from peripheral visual field deficits.

  6. Spatiotemporal Object History Affects the Selection of Task-Relevant Properties

    ERIC Educational Resources Information Center

    Schreij, Daniel; Olivers, Christian N. L.

    2013-01-01

    For stable perception, we maintain mental representations of objects across space and time. What information is linked to such a representation? In this study, we extended our work showing that the spatiotemporal history of an object affects the way the object is attended the next time it is encountered. Observers conducted a visual search for a…

  7. Assessment and Therapeutic Application of the Expressive Therapies Continuum: Implications for Brain Structures and Functions

    ERIC Educational Resources Information Center

    Lusebrink, Vija B.

    2010-01-01

    The Expressive Therapies Continuum (ETC) provides a theoretical model for art-based assessments and applications of media in art therapy. The three levels of the ETC (Kinesthetic/Sensory, Perceptual/Affective, and Cognitive/Symbolic) appear to reflect different functions and structures in the brain that process visual and affective information.…

  8. Effect of Information Load and Time on Observational Learning

    ERIC Educational Resources Information Center

    Breslin, Gavin; Hodges, Nicola J.; Williams, A. Mark

    2009-01-01

    We examined whether altering the amount of and moment when visual information is presented affected observational learning for participants practicing a bowling skill. On Day 1, four groups practiced a cricket bowling action. Three groups viewed a full-body point-light model, the model's bowling arm, or between-limb coordination of the model's…

  9. The contents of visual working memory reduce uncertainty during visual search.

    PubMed

    Cosman, Joshua D; Vecera, Shaun P

    2011-05-01

    Information held in visual working memory (VWM) influences the allocation of attention during visual search, with targets matching the contents of VWM receiving processing benefits over those that do not. Such an effect could arise from multiple mechanisms: First, it is possible that the contents of working memory enhance the perceptual representation of the target. Alternatively, it is possible that when a target is presented among distractor items, the contents of working memory operate postperceptually to reduce uncertainty about the location of the target. In both cases, a match between the contents of VWM and the target should lead to facilitated processing. However, each effect makes distinct predictions regarding set-size manipulations; whereas perceptual enhancement accounts predict processing benefits regardless of set size, uncertainty reduction accounts predict benefits only with set sizes larger than 1, when there is uncertainty regarding the target location. In the present study, in which briefly presented, masked targets were presented in isolation, there was a negligible effect of the information held in VWM on target discrimination. However, in displays containing multiple masked items, information held in VWM strongly affected target discrimination. These results argue that working memory representations act at a postperceptual level to reduce uncertainty during visual search.

  10. Distinct populations of neurons respond to emotional valence and arousal in the human subthalamic nucleus.

    PubMed

    Sieger, Tomáš; Serranová, Tereza; Růžička, Filip; Vostatek, Pavel; Wild, Jiří; Štastná, Daniela; Bonnet, Cecilia; Novák, Daniel; Růžička, Evžen; Urgošík, Dušan; Jech, Robert

    2015-03-10

    Both animal studies and studies using deep brain stimulation in humans have demonstrated the involvement of the subthalamic nucleus (STN) in motivational and emotional processes; however, participation of this nucleus in processing human emotion has not been investigated directly at the single-neuron level. We analyzed the relationship between the neuronal firing from intraoperative microrecordings from the STN during affective picture presentation in patients with Parkinson's disease (PD) and the affective ratings of emotional valence and arousal performed subsequently. We observed that 17% of neurons responded to emotional valence and arousal of visual stimuli according to individual ratings. The activity of some neurons was related to emotional valence, whereas different neurons responded to arousal. In addition, 14% of neurons responded to visual stimuli. Our results suggest the existence of neurons involved in processing or transmission of visual and emotional information in the human STN, and provide evidence of separate processing of the affective dimensions of valence and arousal at the level of single neurons as well.

  11. Topographic contribution of early visual cortex to short-term memory consolidation: a transcranial magnetic stimulation study.

    PubMed

    van de Ven, Vincent; Jacobs, Christianne; Sack, Alexander T

    2012-01-04

    The neural correlates for retention of visual information in visual short-term memory are considered separate from those of sensory encoding. However, recent findings suggest that sensory areas may play a role also in short-term memory. We investigated the functional relevance, spatial specificity, and temporal characteristics of human early visual cortex in the consolidation of capacity-limited topographic visual memory using transcranial magnetic stimulation (TMS). Topographically specific TMS pulses were delivered over lateralized occipital cortex at 100, 200, or 400 ms into the retention phase of a modified change detection task with low or high memory loads. For the high but not the low memory load, we found decreased memory performance for memory trials in the visual field contralateral, but not ipsilateral to the side of TMS, when pulses were delivered at 200 ms into the retention interval. A behavioral version of the TMS experiment, in which a distractor stimulus (memory mask) replaced the TMS pulses, further corroborated these findings. Our findings suggest that retinotopic visual cortex contributes to the short-term consolidation of topographic visual memory during early stages of the retention of visual information. Further, TMS-induced interference decreased the strength (amplitude) of the memory representation, which most strongly affected the high memory load trials.

  12. Language identification from visual-only speech signals

    PubMed Central

    Ronquest, Rebecca E.; Levi, Susannah V.; Pisoni, David B.

    2010-01-01

    Our goal in the present study was to examine how observers identify English and Spanish from visual-only displays of speech. First, we replicated the recent findings of Soto-Faraco et al. (2007) with Spanish and English bilingual and monolingual observers using different languages and a different experimental paradigm (identification). We found that prior linguistic experience affected response bias but not sensitivity (Experiment 1). In two additional experiments, we investigated the visual cues that observers use to complete the language-identification task. The results of Experiment 2 indicate that some lexical information is available in the visual signal but that it is limited. Acoustic analyses confirmed that our Spanish and English stimuli differed acoustically with respect to linguistic rhythmic categories. In Experiment 3, we tested whether this rhythmic difference could be used by observers to identify the language when the visual stimuli are temporally reversed, thereby eliminating lexical information but retaining rhythmic differences. The participants performed above chance even in the backward condition, suggesting that the rhythmic differences between the two languages may aid language identification in visual-only speech signals. The results of Experiments 3A and 3B also confirm previous findings that increased stimulus length facilitates language identification. Taken together, the results of these three experiments replicate earlier findings and also show that prior linguistic experience, lexical information, rhythmic structure, and utterance length influence visual-only language identification. PMID:20675804

  13. Visual Reliance for Balance Control in Older Adults Persists When Visual Information Is Disrupted by Artificial Feedback Delays

    PubMed Central

    Balasubramaniam, Ramesh

    2014-01-01

    Sensory information from our eyes, skin and muscles helps guide and correct balance. Less appreciated, however, is that delays in the transmission of sensory information between our eyes, limbs and central nervous system can exceed several tens of milliseconds. Investigating how these time-delayed sensory signals influence balance control is central to understanding the postural system. Here, we investigate how delayed visual feedback and cognitive performance influence postural control in healthy young and older adults. The task required that participants position their center of pressure (COP) in a fixed target as accurately as possible without visual feedback about their COP location (eyes-open balance), or with artificial time delays imposed on visual COP feedback. On selected trials, the participants also performed a silent arithmetic task (cognitive dual task). We separated COP time series into distinct frequency components using low- and high-pass filtering routines. Visual feedback delays affected low-frequency postural corrections in young and older adults, with larger increases in postural sway noted for the group of older adults. In comparison, cognitive performance reduced the variability of rapid center of pressure displacements in young adults, but did not alter postural sway in the group of older adults. Our results demonstrate that older adults prioritize vision to control posture. This visual reliance persists even when feedback about the task is delayed by several hundreds of milliseconds. PMID:24614576
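    The frequency separation described above (slow postural corrections vs. rapid COP displacements) can be sketched with a centered moving average acting as a low-pass filter, the residual serving as the high-frequency component. The filter choice, window length, and synthetic signal below are illustrative assumptions, not the authors' actual routines.

    ```python
    import math

    def moving_average(signal, window):
        """Simple low-pass filter: centered moving average (edges use partial windows)."""
        half = window // 2
        out = []
        for i in range(len(signal)):
            lo, hi = max(0, i - half), min(len(signal), i + half + 1)
            out.append(sum(signal[lo:hi]) / (hi - lo))
        return out

    # Synthetic COP trace: slow drift (0.5 Hz) plus a fast, tremor-like component (8 Hz)
    fs = 100  # sampling rate, Hz
    t = [i / fs for i in range(fs * 2)]
    cop = [2.0 * math.sin(2 * math.pi * 0.5 * x)
           + 0.3 * math.sin(2 * math.pi * 8 * x) for x in t]

    low = moving_average(cop, 25)              # slow postural corrections
    high = [c - l for c, l in zip(cop, low)]   # rapid COP displacements (residual)
    ```

    A 25-sample window at 100 Hz (~0.25 s, two full cycles of the 8 Hz component) strongly attenuates the fast component while passing the slow drift, which is the kind of split the study analyzes separately.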

  14. Is visual short-term memory depthful?

    PubMed

    Reeves, Adam; Lei, Quan

    2014-03-01

    Does visual short-term memory (VSTM) depend on depth, as it might be if information was stored in more than one depth layer? Depth is critical in natural viewing and might be expected to affect retention, but whether this is so is currently unknown. Cued partial reports of letter arrays (Sperling, 1960) were measured up to 700 ms after display termination. Adding stereoscopic depth hardly affected VSTM capacity or decay inferred from total errors. The pattern of transposition errors (letters reported from an uncued row) was almost independent of depth and cue delay. We conclude that VSTM is effectively two-dimensional. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. A Comparative Study on the Visual Perceptions of Children with Attention Deficit Hyperactivity Disorder

    NASA Astrophysics Data System (ADS)

    Ahmetoglu, Emine; Aral, Neriman; Butun Ayhan, Aynur

    This study was conducted in order to (a) compare the visual perceptions of seven-year-old children diagnosed with attention deficit hyperactivity disorder with those of normally developing children of the same age and development level and (b) determine whether the visual perceptions of children with attention deficit hyperactivity disorder vary with respect to gender, having received preschool education, and parents' educational level. A total of 60 children, 30 with attention deficit hyperactivity disorder and 30 with normal development, were assigned to the study. Data about children with attention deficit hyperactivity disorder and their families was collected using a General Information Form, and the visual perception of the children was examined through the Frostig Developmental Test of Visual Perception. The Mann-Whitney U-test and Kruskal-Wallis variance analysis were used to determine whether there was a difference between the visual perceptions of children with normal development and those diagnosed with attention deficit hyperactivity disorder, and to discover whether the variables of gender, preschool education, and parents' educational status affected the visual perceptions of children with attention deficit hyperactivity disorder. The results showed that there was a statistically meaningful difference between the visual perceptions of the two groups and that the visual perceptions of children with attention deficit hyperactivity disorder were meaningfully affected by gender, preschool education, and parents' educational status.
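    The record above relies on the Mann-Whitney U test for its two-group comparison. A minimal pure-Python version of the U statistic (midranks for ties; no p-value lookup) might look like the following; the group scores are fabricated for illustration and are not the study's data:

    ```python
    def mann_whitney_u(a, b):
        """Mann-Whitney U statistic for two independent samples.
        Ties receive midranks; returns the smaller of U_a and U_b."""
        pooled = sorted([(v, 'a') for v in a] + [(v, 'b') for v in b])
        rank_sum_a = 0.0
        i = 0
        while i < len(pooled):
            j = i
            while j < len(pooled) and pooled[j][0] == pooled[i][0]:
                j += 1  # j is one past the block of tied values
            midrank = (i + 1 + j) / 2  # average of ranks i+1 .. j
            rank_sum_a += midrank * sum(1 for k in range(i, j) if pooled[k][1] == 'a')
            i = j
        n_a, n_b = len(a), len(b)
        u_a = rank_sum_a - n_a * (n_a + 1) / 2
        return min(u_a, n_a * n_b - u_a)

    # Fabricated visual-perception scores for two groups (illustrative only)
    group_1 = [41, 38, 45, 36, 40, 43]
    group_2 = [48, 52, 46, 50, 47, 49]
    print(mann_whitney_u(group_1, group_2))  # → 0.0 (complete separation)
    ```

    A U near zero, as in this contrived example, indicates that nearly every score in one group ranks below every score in the other, the kind of group difference the abstract reports as statistically meaningful.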

  16. Seeing the Invisible: Educating the Public on Planetary Magnetic Fields and How they Affect Atmospheres

    NASA Astrophysics Data System (ADS)

    Fillingim, M. O.; Brain, D. A.; Peticolas, L. M.; Schultz, G.; Yan, D.; Guevara, S.; Randol, S.

    2009-12-01

    Magnetic fields and charged particles are difficult for school children, the general public, and scientists alike to visualize. But studies of planetary magnetospheres and ionospheres have broad implications for planetary evolution, from the deep interior to the ancient climate, that are important to communicate to each of these audiences. This presentation will highlight the visualization materials that we are developing to educate audiences on the magnetic fields of planets and how they affect atmospheres. The visualization materials that we are developing consist of simplified data sets that can be displayed on spherical projection systems and portable 3-D rigid models of planetary magnetic fields. We are developing presentations for science museums and classrooms that relate fundamental information about the Martian magnetic field, how it differs from Earth’s, and why the differences are significant.

  17. Visual cues and listening effort: individual variability.

    PubMed

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2011-10-01

    To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across audio-only and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.

  18. Anchoring in Numeric Judgments of Visual Stimuli

    PubMed Central

    Langeborg, Linda; Eriksson, Mårten

    2016-01-01

    This article investigates effects of anchoring in age estimation and estimation of quantities, two tasks which to different extents are based on visual stimuli. The results are compared to anchoring in answers to classic general knowledge questions that rely on semantic knowledge. Cognitive load was manipulated to explore possible differences between domains. Effects of source credibility, manipulated by differing instructions regarding the selection of anchor values (no information regarding anchor selection, information that the anchors are randomly generated or information that the anchors are answers from an expert) on anchoring were also investigated. Effects of anchoring were large for all types of judgments but were not affected by cognitive load or by source credibility in either one of the researched domains. A main effect of cognitive load on quantity estimations and main effects of source credibility in the two visually based domains indicate that the manipulations were efficient. Implications for theoretical explanations of anchoring are discussed. In particular, because anchoring did not interact with cognitive load, the results imply that the process behind anchoring in visual tasks is predominantly automatic and unconscious. PMID:26941684

  19. Organizational strategy influence on visual memory performance after stroke: cortical/subcortical and left/right hemisphere contrasts.

    PubMed

    Lange, G; Waked, W; Kirshblum, S; DeLuca, J

    2000-01-01

    To examine how organizational strategy at encoding influences visual memory performance in stroke patients. Case control study. Postacute rehabilitation hospital. Stroke patients with right hemisphere damage (n = 20) versus left hemisphere damage (n = 15), and stroke patients with cortical damage (n = 11) versus subcortical damage (n = 19). Organizational strategy scores, recall performance on the Rey-Osterrieth Complex Figure (ROCF). Results demonstrated significantly greater organizational impairment and less accurate copy performance (i.e., encoding of visuospatial information on the ROCF) in the right compared to the left hemisphere group, and in the cortical relative to the subcortical group. Organizational strategy and copy accuracy scores were significantly related to each other. The absolute amount of immediate and delayed recall was significantly associated with poor organizational strategy scores. However, relative to the amount of visual information originally encoded, memory performances did not differ between groups. These findings suggest that visual memory impairments after stroke may be caused by a lack of organizational strategy affecting information encoding, rather than an impairment in memory storage or retrieval.

  20. Always look on the broad side of life: happiness increases the breadth of sensory memory.

    PubMed

    Kuhbandner, Christof; Lichtenfeld, Stephanie; Pekrun, Reinhard

    2011-08-01

Research has shown that positive affect increases the breadth of information processing at several higher stages of information processing, such as attentional selection or knowledge activation. In the present study, we examined whether these affective influences are already present at the level of transiently storing incoming information in sensory memory, before attentional selection takes place. After inducing neutral, happy, or sad affect, participants performed an iconic memory task, which measures visual sensory memory. In all conditions, iconic memory performance rapidly decreased with increasing delay between stimulus presentation and test, indicating that affect did not influence the decay of iconic memory. However, positive affect increased the amount of incoming information stored in iconic memory. In particular, our results showed that this occurs due to an elimination of the spatial bias typically observed in iconic memory. Whereas performance did not differ at positions where observers in the neutral and negative conditions showed the highest performance, positive affect enhanced performance at all positions where observers in the neutral and negative conditions were relatively "blind." These findings demonstrate that affect influences the breadth of information processing at the earliest processing stages, suggesting that affect may produce an even more fundamental shift in information processing than previously believed. © 2011 APA, all rights reserved

  1. Refractive Errors Affect the Vividness of Visual Mental Images

    PubMed Central

    Palermo, Liana; Nori, Raffaella; Piccardi, Laura; Zeri, Fabrizio; Babino, Antonio; Giusberti, Fiorella; Guariglia, Cecilia

    2013-01-01

The hypothesis that visual perception and mental imagery are equivalent has never been explored in individuals with vision defects that do not prevent visual perception of the world, such as refractive errors. Refractive error (i.e., myopia, hyperopia or astigmatism) is a condition in which the refracting system of the eye fails to focus objects sharply on the retina. As a consequence, refractive errors cause blurred vision. We subdivided 84 individuals according to their spherical equivalent refraction into Emmetropes (control individuals without refractive errors) and Ametropes (individuals with refractive errors). Participants performed a vividness task and completed a questionnaire that explored their cognitive style of thinking before their vision was checked by an ophthalmologist. Although results showed that Ametropes had less vivid mental images than Emmetropes, this did not affect the development of their cognitive style of thinking; in fact, Ametropes were able to use both verbal and visual strategies to acquire and retrieve information. The present data are consistent with the hypothesis of equivalence between imagery and perception. PMID:23755186

  2. Refractive errors affect the vividness of visual mental images.

    PubMed

    Palermo, Liana; Nori, Raffaella; Piccardi, Laura; Zeri, Fabrizio; Babino, Antonio; Giusberti, Fiorella; Guariglia, Cecilia

    2013-01-01

The hypothesis that visual perception and mental imagery are equivalent has never been explored in individuals with vision defects that do not prevent visual perception of the world, such as refractive errors. Refractive error (i.e., myopia, hyperopia or astigmatism) is a condition in which the refracting system of the eye fails to focus objects sharply on the retina. As a consequence, refractive errors cause blurred vision. We subdivided 84 individuals according to their spherical equivalent refraction into Emmetropes (control individuals without refractive errors) and Ametropes (individuals with refractive errors). Participants performed a vividness task and completed a questionnaire that explored their cognitive style of thinking before their vision was checked by an ophthalmologist. Although results showed that Ametropes had less vivid mental images than Emmetropes, this did not affect the development of their cognitive style of thinking; in fact, Ametropes were able to use both verbal and visual strategies to acquire and retrieve information. The present data are consistent with the hypothesis of equivalence between imagery and perception.

  3. Accessible engineering drawings for visually impaired machine operators.

    PubMed

    Ramteke, Deepak; Kansal, Gayatri; Madhab, Benu

    2014-01-01

An engineering drawing provides manufacturing information to a machine operator. An operator plans and executes machining operations based on this information. A visually impaired (VI) operator does not have direct access to the drawings. Drawing information is provided to them verbally or by using sample parts. Both methods have limitations that affect the quality of output. Use of engineering drawings is a standard practice in every industry, which hampers the employment of VI operators. Accessible engineering drawings are required to increase both the independence and the employability of VI operators. Today, Computer Aided Design (CAD) software is used for making engineering drawings, which are saved in CAD files. Required information is extracted from the CAD files and converted into Braille or voice. The authors of this article propose a method to make engineering drawing information directly accessible to a VI operator.
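The extract-and-convert pipeline summarized above (CAD file → structured drawing entities → Braille or voice) can be sketched in miniature. This is an illustrative sketch only: the entity schema, field names, and the `describe_drawing` helper are hypothetical and are not the authors' implementation; a real system would first parse the entities out of CAD files and would hand the resulting sentences to a Braille translator or a text-to-speech engine.

```python
# Hypothetical sketch: turn structured drawing entities into plain-text
# sentences that a screen reader or Braille translator could consume.
# The "hole"/"dimension" schema below is invented for illustration.

def describe_drawing(entities):
    """Convert a list of drawing-entity dicts into descriptive sentences."""
    sentences = []
    for e in entities:
        if e["type"] == "hole":
            sentences.append(
                f"Hole of diameter {e['diameter']} mm at "
                f"x={e['x']} mm, y={e['y']} mm."
            )
        elif e["type"] == "dimension":
            sentences.append(f"{e['feature']} measures {e['value']} mm.")
    return sentences

# Example: a plate with one overall dimension and one drilled hole.
drawing = [
    {"type": "dimension", "feature": "Plate length", "value": 120},
    {"type": "hole", "diameter": 10, "x": 20, "y": 15},
]
for sentence in describe_drawing(drawing):
    print(sentence)
```

In practice the entity list would come from the CAD file itself rather than being written by hand, but the conversion step, from geometric records to spoken or Braille-ready text, has this general shape.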

  4. Axonal Conduction Delays, Brain State, and Corticogeniculate Communication

    PubMed Central

    2017-01-01

    Thalamocortical conduction times are short, but layer 6 corticothalamic axons display an enormous range of conduction times, some exceeding 40–50 ms. Here, we investigate (1) how axonal conduction times of corticogeniculate (CG) neurons are related to the visual information conveyed to the thalamus, and (2) how alert versus nonalert awake brain states affect visual processing across the spectrum of CG conduction times. In awake female Dutch-Belted rabbits, we found 58% of CG neurons to be visually responsive, and 42% to be unresponsive. All responsive CG neurons had simple, orientation-selective receptive fields, and generated sustained responses to stationary stimuli. CG axonal conduction times were strongly related to modulated firing rates (F1 values) generated by drifting grating stimuli, and their associated interspike interval distributions, suggesting a continuum of visual responsiveness spanning the spectrum of axonal conduction times. CG conduction times were also significantly related to visual response latency, contrast sensitivity (C-50 values), directional selectivity, and optimal stimulus velocity. Increasing alertness did not cause visually unresponsive CG neurons to become responsive and did not change the response linearity (F1/F0 ratios) of visually responsive CG neurons. However, for visually responsive CG neurons, increased alertness nearly doubled the modulated response amplitude to optimal visual stimulation (F1 values), significantly shortened response latency, and dramatically increased response reliability. These effects of alertness were uniform across the broad spectrum of CG axonal conduction times. SIGNIFICANCE STATEMENT Corticothalamic neurons of layer 6 send a dense feedback projection to thalamic nuclei that provide input to sensory neocortex. While sensory information reaches the cortex after brief thalamocortical axonal delays, corticothalamic axons can exhibit conduction delays of <2 ms to 40–50 ms. 
Here, in the corticogeniculate visual system of awake rabbits, we investigate the functional significance of this axonal diversity, and the effects of shifting alert/nonalert brain states on corticogeniculate processing. We show that axonal conduction times are strongly related to multiple visual response properties, suggesting a continuum of visual responsiveness spanning the spectrum of corticogeniculate axonal conduction times. We also show that transitions between awake brain states powerfully affect corticogeniculate processing, in some ways more strongly than in layer 4. PMID:28559382

  5. The effects of exposure to dynamic expressions of affect on 5-month-olds' memory.

    PubMed

    Flom, Ross; Janis, Rebecca B; Garcia, Darren J; Kirwan, C Brock

    2014-11-01

    The purpose of this study was to examine the behavioral effects of adults' communicated affect on 5-month-olds' visual recognition memory. Five-month-olds were exposed to a dynamic and bimodal happy, angry, or neutral affective (face-voice) expression while familiarized to a novel geometric image. After familiarization to the geometric image and exposure to the affective expression, 5-month-olds received either a 5-min or 1-day retention interval. Following the 5-min retention interval, infants exposed to the happy affective expressions showed a reliable preference for a novel geometric image compared to the recently familiarized image. Infants exposed to the neutral or angry affective expression failed to show a reliable preference following a 5-min delay. Following the 1-day retention interval, however, infants exposed to the neutral expression showed a reliable preference for the novel geometric image. These results are the first to demonstrate that 5-month-olds' visual recognition memory is affected by the presentation of affective information at the time of encoding. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Dissociable meta-analytic brain networks contribute to coordinated emotional processing.

    PubMed

    Riedel, Michael C; Yanes, Julio A; Ray, Kimberly L; Eickhoff, Simon B; Fox, Peter T; Sutherland, Matthew T; Laird, Angela R

    2018-06-01

Meta-analytic techniques for mining the neuroimaging literature continue to exert an impact on our conceptualization of functional brain networks contributing to human emotion and cognition. Traditional theories regarding the neurobiological substrates contributing to affective processing are shifting from regional- towards more network-based heuristic frameworks. To elucidate differential brain network involvement linked to distinct aspects of emotion processing, we applied an emergent meta-analytic clustering approach to the extensive body of affective neuroimaging results archived in the BrainMap database. Specifically, we performed hierarchical clustering on the modeled activation maps from 1,747 experiments in the affective processing domain, resulting in five meta-analytic groupings of experiments demonstrating whole-brain recruitment. Behavioral inference analyses conducted for each of these groupings suggested dissociable networks supporting: (1) visual perception within primary and associative visual cortices, (2) auditory perception within primary auditory cortices, (3) attention to emotionally salient information within insular, anterior cingulate, and subcortical regions, (4) appraisal and prediction of emotional events within medial prefrontal and posterior cingulate cortices, and (5) induction of emotional responses within amygdala and fusiform gyri. These meta-analytic outcomes are consistent with a contemporary psychological model of affective processing in which emotionally salient information from perceived stimuli is integrated with previous experiences to engender a subjective affective response. This study highlights the utility of using emergent meta-analytic methods to inform and extend psychological theories and suggests that emotions are manifest as the eventual consequence of interactions between large-scale brain networks. © 2018 Wiley Periodicals, Inc.

  7. Powerful Feelings: Exploring the Affective Domain of Informal and Arts-Based Learning

    ERIC Educational Resources Information Center

    Lawrence, Randee Lipson

    2008-01-01

    This article looks at the ways in which people learn informally through artistic expression such as dance, drama, poetry, music, literature, film, and all of the visual arts and how people access this learning through their emotions. The author begins with a look at the limitations of relying primarily on technical-rational learning processes.…

  8. Individuals with 22q11.2 Deletion Syndrome Are Impaired at Explicit, but Not Implicit, Discrimination of Local Forms Embedded in Global Structures

    ERIC Educational Resources Information Center

    Giersch, Anne; Glaser, Bronwyn; Pasca, Catherine; Chabloz, Mélanie; Debbané, Martin; Eliez, Stephan

    2014-01-01

    Individuals with 22q11.2 deletion syndrome (22q11.2DS) are impaired at exploring visual information in space; however, not much is known about visual form discrimination in the syndrome. Thirty-five individuals with 22q11.2DS and 41 controls completed a form discrimination task with global forms made up of local elements. Affected individuals…

  9. Interoceptive signals impact visual processing: Cardiac modulation of visual body perception.

    PubMed

    Ronchi, Roberta; Bernasconi, Fosco; Pfeiffer, Christian; Bello-Ruiz, Javier; Kaliuzhna, Mariia; Blanke, Olaf

    2017-09-01

Multisensory perception research has largely focused on exteroceptive signals, but recent evidence has revealed the integration of interoceptive signals with exteroceptive information. Such research revealed that heartbeat signals affect sensory (e.g., visual) processing; however, it is unknown how they impact the perception of body images. Here we linked our participants' heartbeat to visual stimuli and investigated the spatio-temporal brain dynamics of cardio-visual stimulation on the processing of human body images. We recorded visual evoked potentials with 64-channel electroencephalography while showing a body or a scrambled body (control) that either appeared at the frequency of the participant's on-line recorded heartbeat or did not (non-synchronous, control). Extending earlier studies, we found a body-independent effect, with cardiac signals enhancing visual processing during two time periods (77-130 ms and 145-246 ms). Within the second (later) time window we detected a second effect characterised by enhanced activity in parietal, temporo-occipital, inferior frontal, and right basal ganglia-insula regions, but only when non-scrambled body images were flashed synchronously with the heartbeat (208-224 ms). In conclusion, our results highlight the role of interoceptive information in the visual processing of human body pictures within a network integrating cardio-visual signals of relevance for perceptual and cognitive aspects of visual body processing. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Transcranial direct current stimulation (tDCS) facilitates overall visual search response times but does not interact with visual search task factors

    PubMed Central

Sung, Kyongje; Gordon, Barry

    2018-01-01

Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a diffuse, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages. PMID:29558513
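The additive-factor logic the authors invoke can be illustrated with a toy calculation (all response-time values below are invented, not data from the study): if tDCS and a task factor act on different processing stages, their effects on mean RT should be additive, so the 2 × 2 interaction contrast should be near zero.

```python
# Toy illustration of additive-factor logic; the numbers are hypothetical.

def interaction(rt):
    """rt[tdcs][difficulty] -> mean RT in ms.
    Returns the 2x2 interaction contrast:
    (difficulty effect under tDCS) - (difficulty effect under sham)."""
    return (rt[1][1] - rt[1][0]) - (rt[0][1] - rt[0][0])

# Rows: 0 = sham, 1 = anodal tDCS; columns: 0 = easy, 1 = hard discrimination.
# Here tDCS speeds responses by a constant 20 ms regardless of difficulty.
additive = {0: {0: 500, 1: 560}, 1: {0: 480, 1: 540}}
print(interaction(additive))  # 0 ms: additive effects, consistent with separate stages
```

A reliably non-zero contrast would instead have suggested that tDCS acts on the same stage as the manipulated factor; the experiments described above found no such interactions.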

  11. Transcranial direct current stimulation (tDCS) facilitates overall visual search response times but does not interact with visual search task factors.

    PubMed

    Sung, Kyongje; Gordon, Barry

    2018-01-01

Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a diffuse, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages.

  12. Reward associations impact both iconic and visual working memory.

    PubMed

    Infanti, Elisa; Hickey, Clayton; Turatto, Massimo

    2015-02-01

Reward plays a fundamental role in human behavior. A growing number of studies have shown that stimuli associated with reward become salient and attract attention. The aim of the present study was to extend these results into the investigation of iconic memory and visual working memory. In two experiments we asked participants to perform a visual-search task where different colors of the target stimuli were paired with high or low reward. We then tested whether the pre-established feature-reward association affected performance on a subsequent visual memory task, in which no reward was provided. In this test phase participants viewed arrays of 8 objects, one of which had a unique color that could match the color associated with reward during the previous visual-search task. A probe appeared at varying intervals after stimulus offset to identify the to-be-reported item. Our results suggest that reward biases the encoding of visual information such that items characterized by a reward-associated feature interfere with mnemonic representations of other items in the test display. These results extend current knowledge regarding the influence of reward on early cognitive processes, suggesting that feature-reward associations automatically interact with the encoding and storage of visual information, both in iconic memory and visual working memory. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Orienting attention to visual or verbal/auditory imagery differentially impairs the processing of visual stimuli.

    PubMed

    Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio

    2016-05-15

When attention is oriented toward inner thoughts, as spontaneously occurs during mind wandering, the processing of external information is attenuated. However, the potential effects of a thought's content on sensory attenuation are still unknown. The present study aims to assess whether the representational format of thoughts, such as visual imagery or inner speech, might differentially affect the sensory processing of external stimuli. We recorded the brain activity of 20 participants (12 women) while they were exposed to a probe visual stimulus in three different conditions: executing a task on the visual probe (externally oriented attention), and two conditions involving inward-turned attention, i.e., generating inner speech and performing visual imagery. Event-related potential results showed that the P1 amplitude, related to the sensory response, was significantly attenuated during both tasks involving inward attention compared with the external task. When the two representational formats were compared, the visual imagery condition showed stronger attenuation of sensory processing than the inner speech condition. Alpha power in visual areas was measured as an index of cortical inhibition. Larger alpha amplitude was found when participants engaged in internal thought than in the external task, with visual imagery showing even more alpha power than inner speech. Our results show, for the first time to our knowledge, that visual attentional processing of external stimuli during self-generated thoughts is differentially affected by the representational format of the ongoing train of thoughts. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Hallucinations Experienced by Visually Impaired: Charles Bonnet Syndrome.

    PubMed

    Pang, Linda

    2016-12-01

Charles Bonnet Syndrome is a condition in which visual hallucinations occur as a result of damage along the visual pathway. Patients with Charles Bonnet Syndrome maintain partial or full insight that the hallucinations are not real, have no underlying psychological conditions, and have no hallucinations affecting other sensory modalities, while maintaining intact intellectual functioning. Charles Bonnet Syndrome has been well documented in the neurologic, geriatric medicine, and psychiatric literature, but there is a lack of information in the optometric and ophthalmologic literature. Therefore, increased awareness of the signs and symptoms associated with Charles Bonnet Syndrome is required among practicing clinicians. This review of the literature will also identify other etiologies of visual hallucinations, the pathophysiology of Charles Bonnet Syndrome, and effective management strategies.

  15. Hallucinations Experienced by Visually Impaired: Charles Bonnet Syndrome

    PubMed Central

    Pang, Linda

    2016-01-01

Charles Bonnet Syndrome is a condition in which visual hallucinations occur as a result of damage along the visual pathway. Patients with Charles Bonnet Syndrome maintain partial or full insight that the hallucinations are not real, have no underlying psychological conditions, and have no hallucinations affecting other sensory modalities, while maintaining intact intellectual functioning. Charles Bonnet Syndrome has been well documented in the neurologic, geriatric medicine, and psychiatric literature, but there is a lack of information in the optometric and ophthalmologic literature. Therefore, increased awareness of the signs and symptoms associated with Charles Bonnet Syndrome is required among practicing clinicians. This review of the literature will also identify other etiologies of visual hallucinations, the pathophysiology of Charles Bonnet Syndrome, and effective management strategies. PMID:27529611

  16. Functional interplay of top-down attention with affective codes during visual short-term memory maintenance.

    PubMed

    Kuo, Bo-Cheng; Lin, Szu-Hung; Yeh, Yei-Yu

    2018-06-01

    Visual short-term memory (VSTM) allows individuals to briefly maintain information over time for guiding behaviours. Because the contents of VSTM can be neutral or emotional, top-down influence in VSTM may vary with the affective codes of maintained representations. Here we investigated the neural mechanisms underlying the functional interplay of top-down attention with affective codes in VSTM using functional magnetic resonance imaging. Participants were instructed to remember both threatening and neutral objects in a cued VSTM task. Retrospective cues (retro-cues) were presented to direct attention to the hemifield of a threatening object (i.e., cue-to-threat) or a neutral object (i.e., cue-to-neutral) during VSTM maintenance. We showed stronger activity in the ventral occipitotemporal cortex and amygdala for attending threatening relative to neutral representations. Using multivoxel pattern analysis, we found better classification performance for cue-to-threat versus cue-to-neutral objects in early visual areas and in the amygdala. Importantly, retro-cues modulated the strength of functional connectivity between the frontoparietal and early visual areas. Activity in the frontoparietal areas became strongly correlated with the activity in V3a-V4 coding the threatening representations instructed to be relevant for the task. Together, these findings provide the first demonstration of top-down modulation of activation patterns in early visual areas and functional connectivity between the frontoparietal network and early visual areas for regulating threatening representations during VSTM maintenance. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration.

    PubMed

    Wahn, Basil; König, Peter

    2015-01-01

Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, the findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs at a pre-attentive processing stage.

  18. No evidence for visual context-dependency of olfactory learning in Drosophila

    NASA Astrophysics Data System (ADS)

    Yarali, Ayse; Mayerle, Moritz; Nawroth, Christian; Gerber, Bertram

    2008-08-01

How is behaviour organised across sensory modalities? Specifically, we ask, for the fruit fly Drosophila melanogaster, how visual context affects olfactory learning and recall, and whether information about visual context is integrated into olfactory memory. We find that changing the visual context between training and test does not deteriorate olfactory memory scores, suggesting that these olfactory memories can drive behaviour despite a mismatch of visual context between training and test. Rather, both the establishment and the recall of olfactory memory are generally facilitated by light. In a follow-up experiment, we find no evidence for learning about combinations of odours and visual context as predictors of reinforcement, even after explicit training in a so-called biconditional discrimination task. Thus, a ‘true’ interaction between the visual and olfactory modalities is not evident; instead, light seems to influence olfactory learning and recall unspecifically, for example by altering motor activity, alertness or olfactory acuity.

  19. 14 CFR 121.117 - Airports: Required data.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

... and communications aids, and ATC. (b) Each certificate holder conducting supplemental operations must...) Navigational and communications aids. (iv) Construction affecting takeoff, landing, or ground operations. (v) Air... information. (i) Runway visual range measurement equipment. (ii) Prevailing winds under low visibility...

  20. The relation between disparity and velocity signals of rigidly moving objects constrains depth order perception.

    PubMed

    Di Luca, Massimiliano; Domini, Fulvio; Caudek, Corrado

    2007-05-01

In two experiments, observers were asked to judge the relative depth of a probe and one or two flanker dots. In Experiment 1, we found that such judgments were influenced by the properties of adjacent image regions, that is, by the amount of angular rotation of a surrounding cloud of dots. In Experiment 2, we found that the properties of the adjacent image regions affected the precision of the observers' judgments. With only the probe and the flanker dots presented in isolation, the precision of observers' judgments was much lower than when the probe and the flanker dots were surrounded by a rigidly connected cloud of dots. Conversely, a non-rigid rotation of the surrounding dots was detrimental to the precision of visual performance. These data can be accounted for by the Intrinsic Constraint model [Domini, F., Caudek, C., & Tassinari, H. (2006). Stereo and motion information are not independently processed by the visual system. Vision Research, 46, 1707-1723], which incorporates the mutual constraints relating disparity and motion signals. The present investigation does not show that the rigidity constraint affects the visual interpretation of motion information alone. Rather, our results show that perceptual performance is affected by the linear relation between disparity and velocity signals, when both depth cues are present and the distal object is, in fact, rigid.

  1. Preterm-associated visual impairment and estimates of retinopathy of prematurity at regional and global levels for 2010

    PubMed Central

    Blencowe, Hannah; Lawn, Joy E.; Vazquez, Thomas; Fielder, Alistair; Gilbert, Clare

    2013-01-01

    Background: Retinopathy of prematurity (ROP) is a leading cause of potentially avoidable childhood blindness worldwide. We estimated ROP burden at the global and regional levels to inform screening and treatment programs, research, and data priorities. Methods: Systematic reviews and meta-analyses were undertaken to estimate the risk of ROP and subsequent visual impairment for surviving preterm babies by level of neonatal care, access to ROP screening, and treatment. A compartmental model was used to estimate ROP cases and numbers of visually impaired survivors. Results: In 2010, an estimated 184,700 (uncertainty range: 169,600–214,500) preterm babies developed any stage of ROP, 20,000 (15,500–27,200) of whom became blind or severely visually impaired from ROP, and a further 12,300 (8,300–18,400) developed mild/moderate visual impairment. Sixty-five percent of those visually impaired from ROP were born in middle-income regions; 6.2% (4.3–8.9%) of all ROP visually impaired infants were born at >32-wk gestation. Visual impairment from other conditions associated with preterm birth will affect larger numbers of survivors. Conclusion: Improved care, including oxygen delivery and monitoring, for preterm babies in all facility settings would reduce the number of babies affected with ROP. Improved data tracking and coverage of locally adapted screening/treatment programs are urgently required. PMID:24366462

  2. Developmental visual perception deficits with no indications of prosopagnosia in a child with abnormal eye movements.

    PubMed

    Gilaie-Dotan, Sharon; Doron, Ravid

    2017-06-01

Visual categories are associated with eccentricity biases in high-order visual cortex: faces and reading with foveally-biased regions, common objects and space with mid- and peripherally-biased regions. As face perception and reading are among the most challenging human visual skills, and are often regarded as the peak achievements of a distributed neural network supporting common object perception, it is unclear why objects, which also rely on foveal vision to be processed, are associated with a mid-peripheral rather than a foveal bias. Here, we studied BN, a 9-year-old boy who has normal basic-level vision and abnormal (limited) oculomotor pursuit and saccades, and who shows developmental object and contour integration deficits but no indication of prosopagnosia. Although we cannot infer causation from the data presented here, we suggest that normal pursuit and saccades could be critical for the development of contour integration and object perception. While faces, and perhaps reading, when fixated upon, take up a small portion of the central visual field and require only small eye movements to be properly processed, common objects typically prevail in the mid-peripheral visual field and rely on longer-distance voluntary eye movements such as saccades to be brought to fixation. While retinal information feeds into early visual cortex in an orderly, eccentricity-based manner, we hypothesize that the propagation of non-foveal information to mid- and high-order visual cortex critically relies on circuitry involving eye movements. Limited or atypical eye movements, as in the case of BN, may hinder normal information flow to mid-eccentricity biased high-order visual cortex, adversely affecting its development and consequently inducing visual perceptual deficits predominantly for categories associated with these regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Designing visual displays and system models for safe reactor operations based on the user's perspective of the system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown-VanHoozer, S.A.

Most designers are not schooled in the area of human-interaction psychology and therefore tend to rely on the traditional ergonomic aspects of human factors when designing complex human-interactive workstations related to reactor operations. They do not take into account the differences in user information processing behavior and how these behaviors may affect individual and team performance when accessing visual displays or utilizing system models in process and control room areas. Unfortunately, by ignoring the importance of the integration of the user interface at the information process level, the result can be sub-optimization and inherently error- and failure-prone systems. Therefore, to minimize or eliminate failures in human-interactive systems, it is essential that the designers understand how each user's processing characteristics affect how the user gathers information, and how the user communicates the information to the designer and other users. A different type of approach in achieving this understanding is Neuro Linguistic Programming (NLP). The material presented in this paper is based on two studies involving the design of visual displays, NLP, and the user's perspective model of a reactor system. The studies involve the methodology known as NLP, and its use in expanding design choices from the user's "model of the world," in the areas of virtual reality, workstation design, team structure, decision and learning style patterns, safety operations, pattern recognition, and much, much more.

  4. Aging affects the balance between goal-guided and habitual spatial attention.

    PubMed

    Twedell, Emily L; Koutstaal, Wilma; Jiang, Yuhong V

    2017-08-01

Visual clutter imposes significant challenges to older adults in everyday tasks and often calls on selective processing of relevant information. Previous research has shown that both visual search habits and task goals influence older adults' allocation of spatial attention, but has not examined the relative impact of these two sources of attention when they compete. To examine how aging affects the balance between goal-driven and habitual attention, and to inform our understanding of different attentional subsystems, we tested young and older adults in an adapted visual search task involving a display laid flat on a desk. To induce habitual attention, unbeknownst to participants, the target was more often placed in one quadrant than in the others. All participants rapidly acquired habitual attention toward the high-probability quadrant. We then informed participants where the high-probability quadrant was and instructed them to search that screen location first, but pitted their habit-based, viewer-centered search against this instruction by requiring participants to change their physical position relative to the desk. Both groups prioritized search in the instructed location, but this effect was stronger in young adults than in older adults. In contrast, age did not influence viewer-centered search habits: the two groups showed similar attentional preference for the visual field where the target was most often found before. Aging disrupted goal-guided but not habitual attention. Product, work, and home design for people of all ages, especially for older individuals, should take into account the strong viewer-centered nature of habitual attention.

  5. The working memory Ponzo illusion: Involuntary integration of visuospatial information stored in visual working memory.

    PubMed

    Shen, Mowei; Xu, Haokui; Zhang, Haihang; Shui, Rende; Zhang, Meng; Zhou, Jifan

    2015-08-01

Visual working memory (VWM) has been traditionally viewed as a mental structure subsequent to visual perception that stores the final output of perceptual processing. However, VWM has recently been emphasized as a critical component of online perception, providing storage for the intermediate perceptual representations produced during visual processing. This interactive view holds the core assumption that VWM is not the terminus of perceptual processing; rather, the stored visual information continues to undergo perceptual processing if necessary. The current study tests this assumption, demonstrating an example of involuntary integration of VWM content by creating the Ponzo illusion in VWM: when the Ponzo illusion figure was divided into its individual components and sequentially encoded into VWM, the temporally separated components were involuntarily integrated, leading to distorted length perception of the two horizontal lines. This VWM Ponzo illusion was replicated when the figure components were presented in different combinations and presentation orders. The magnitude of the illusion was significantly correlated between the VWM and perceptual versions of the Ponzo illusion. These results suggest that the information integration underlying the VWM Ponzo illusion is constrained by the laws of visual perception and similarly affected by the common individual factors that govern its perception. Thus, our findings provide compelling evidence that VWM functions as a buffer serving perceptual processes at early stages. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. The ventriloquist in periphery: impact of eccentricity-related reliability on audio-visual localization.

    PubMed

    Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier

    2013-10-28

The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in the reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that despite a strong decrease in auditory and visual unisensory localization abilities in the periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimulus eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (the ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in the periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. Altogether, these results show that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eye or head position) influence how this information is merged, and therefore determine the perceptual outcome.
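The "optimal combination according to respective spatial reliability" that this record describes is the standard maximum-likelihood cue-combination rule: each cue is weighted by its inverse variance. A minimal sketch, with illustrative numbers (not the study's data):

```python
def combine_cues(loc_v, sigma_v, loc_a, sigma_a):
    """Reliability-weighted (maximum-likelihood) fusion of a visual and an
    auditory location estimate. Reliability = 1 / variance."""
    r_v = 1.0 / sigma_v**2
    r_a = 1.0 / sigma_a**2
    w_v = r_v / (r_v + r_a)                # visual weight = relative reliability
    loc = w_v * loc_v + (1 - w_v) * loc_a  # fused location estimate
    sigma = (1.0 / (r_v + r_a)) ** 0.5     # fused estimate is always more precise
    return loc, w_v, sigma

# Central vision: visual cue far more reliable -> strong visual capture of the sound
loc_c, w_c, s_c = combine_cues(loc_v=0.0, sigma_v=1.0, loc_a=10.0, sigma_a=5.0)

# Periphery: visual reliability drops -> weaker ventriloquist effect
loc_p, w_p, s_p = combine_cues(loc_v=0.0, sigma_v=4.0, loc_a=10.0, sigma_a=5.0)
```

The fused location sits closer to the visual position when visual reliability is high (central vision) and drifts toward the auditory position as visual reliability falls in the periphery, mirroring the eccentricity-dependent ventriloquist effect reported above.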

  7. The evaluation of display symbology - A chronometric study of visual search [on cathode ray tubes]

    NASA Technical Reports Server (NTRS)

    Remington, R.; Williams, D.

    1984-01-01

Three single-target visual search tasks were used to evaluate a set of CRT symbols for a helicopter traffic display. The search tasks were representative of the kinds of information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. The results show that familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols such as a nearby flashing dot or a surrounding square had a greater disruptive effect on the graphic symbols than on the alphanumeric characters. The results suggest that a symbol set is like a list that must be learned. Factors that affect the time to respond to items in a list, such as familiarity, visual discriminability, and the division of list items into categories, also affect the time to identify symbols.

  8. Development of a Disaster Information Visualization Dashboard: A Case Study of Three Typhoons in Taiwan in 2016

    NASA Astrophysics Data System (ADS)

    Su, Wen-Ray; Tsai, Yuan-Fan; Huang, Kuei-Chin; Hsieh, Ching-En

    2017-04-01

To facilitate disaster response and enhance the effectiveness of disaster prevention and relief, people and emergency response personnel should be able to rapidly acquire and understand information when disasters occur. However, in existing disaster platforms, information is typically presented in text tables, static charts, and maps with points. These formats do not make it easy for users to understand the overall situation. Therefore, this study converts data into human-readable charts by using data visualization techniques, and builds a disaster information dashboard that is concise, attractive, and flexible. This information dashboard integrates temporally and spatially correlated data, disaster statistics by category and county, lists of disasters, and other relevant information. The graphs are animated and interactive. The dashboard allows users to filter the data according to their needs and thus to assimilate the information more rapidly. In this study, we applied the information dashboard to the analysis of landslides during three typhoon events in 2016: Typhoon Nepartak, Typhoon Meranti, and Typhoon Megi. According to the statistical results in the dashboard, the order of frequency of the disaster categories in all three events combined was rock fall, roadbed loss, slope slump, road blockage, and debris flow. Disasters occurred mainly in the areas that received the most rainfall. Typhoons Nepartak and Meranti mainly affected Taitung, and Typhoon Megi mainly affected Kaohsiung. The towns Xiulin, Fengbin, Fenglin, and Guangfu in Hualian County were all issued debris flow warnings in all three typhoon events. The disaster information dashboard developed in this study allows the user to rapidly assess the overall disaster situation. It clearly and concisely reveals interactions between time, space, and disaster type, and also provides comprehensive details about the disaster. The dashboard provides a foundation for future disaster visualization, since it can combine and present real-time information of various types; as such it will strengthen decision making in disaster prevention management.
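The per-category and per-county statistics such a dashboard displays come down to a simple aggregation over event records. A minimal sketch; the records below are invented placeholders, not the study's data:

```python
from collections import Counter

# Hypothetical landslide-disaster records: (typhoon, county, category)
records = [
    ("Nepartak", "Taitung", "rock fall"),
    ("Nepartak", "Taitung", "roadbed loss"),
    ("Meranti", "Taitung", "rock fall"),
    ("Meranti", "Kaohsiung", "slope slump"),
    ("Megi", "Kaohsiung", "rock fall"),
    ("Megi", "Kaohsiung", "debris flow"),
]

# Aggregate across all events, as the dashboard's statistics panels do
by_category = Counter(cat for _, _, cat in records)
by_county = Counter(county for _, county, _ in records)

# Frequency-ordered category list, mirroring the ranking reported above
ranked = [cat for cat, _ in by_category.most_common()]
```

Filtering by typhoon or county before counting gives the drill-down views; a real dashboard would feed these counts into its interactive charts.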

  9. Sourcebook of Temporal Factors Affecting Information Transfer from Visual Displays

    DTIC Science & Technology

    1981-06-01

moving object and take appropriate action? Obviously, these behaviors are affected not only by motion perception but by memory and motor control as... Perf., 1975, 1, 383-394. Layton, B. Perceptual noise and aging. Psych. Bull., 1975, 82, 875-883. LeGrand, Y. Light, colour and vision. Ch. 13, time... ...ligkeit intermittierender Lichtreize von der Flimmerfrequenz (Brücke-Effekt, "brightness enhancement"): Untersuchungen bei verschiedener Leuchtdichte

  10. Alcohol and disorientation-related responses. II, Nystagmus and "vertigo" during angular acceleration.

    DOT National Transportation Integrated Search

    1971-04-01

    The integrity of the visual and vestibular systems is important in the maintenance of orientation during flight. Although alcohol is known to affect the vestibular system through the development of a positional alcohol nystagmus, information concerni...

  11. Simultaneous Visualization of Different Utility Networks for Disaster Management

    NASA Astrophysics Data System (ADS)

    Semm, S.; Becker, T.; Kolbe, T. H.

    2012-07-01

Cartographic visualizations of crises are used to create a Common Operational Picture (COP) and to enforce Situational Awareness by presenting relevant information. As nearly all crises affect geospatial entities, geo-data representations have to support location-specific decision-making throughout the crisis. Since operators' attention spans and working memory are limiting factors in acquiring and interpreting information, the cartographic presentation has to support individuals in coordinating their activities and in handling highly dynamic situations. The Situational Awareness of operators, in conjunction with a COP, is a key aspect of the decision-making process and essential for reaching appropriate decisions. Utility networks are among the most complex and most needed systems within a city. This paper addresses the visualization of utility infrastructure in crisis situations and provides a conceptual approach to simplifying, aggregating, and visualizing multiple utility networks and their components to meet the requirements of the decision-making process and to support Situational Awareness.

  12. Impact of feature saliency on visual category learning.

    PubMed

    Hammer, Rubi

    2015-01-01

People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the 'essence' of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, which requires observing multiple objects in order to learn the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between a few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions can best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL, while also discussing the kinds of supervisory information that enable reflective categorization. Arguably, the principles debated here are often ignored in categorization studies.

  14. Optimal Mixtures of Test Types in Paired-Associate Learning (Sensory Information Processing). Final Report.

    ERIC Educational Resources Information Center

    Wolford, George

    Seven experiments were run to determine the precise nature of some of the variables which affect the processing of short-term visual information. In particular, retinal location, report order, processing order, lateral masking, and redundancy were studied along with the nature of the confusion errors which are made in the full report procedure.…

  15. Inhibition to excitation ratio regulates visual system responses and behavior in vivo.

    PubMed

    Shen, Wanhua; McKeown, Caroline R; Demas, James A; Cline, Hollis T

    2011-11-01

The balance of inhibitory to excitatory (I/E) synaptic inputs is thought to control information processing and behavioral output of the central nervous system. We sought to test the effects of a decreased or increased I/E ratio on visual circuit function and visually guided behavior in Xenopus tadpoles. We selectively decreased inhibitory synaptic transmission in optic tectal neurons by knocking down the γ2 subunit of the GABA(A) receptors (GABA(A)R) using antisense morpholino oligonucleotides or by expressing a peptide corresponding to an intracellular loop of the γ2 subunit, called ICL, which interferes with anchoring GABA(A)R at synapses. Recordings of miniature inhibitory postsynaptic currents (mIPSCs) and miniature excitatory PSCs (mEPSCs) showed that these treatments decreased the frequency of mIPSCs compared with control tectal neurons without affecting mEPSC frequency, resulting in an ∼50% decrease in the ratio of I/E synaptic input. ICL expression and γ2-subunit knockdown also decreased the ratio of optic nerve-evoked synaptic I/E responses. We recorded visually evoked responses from optic tectal neurons in which the synaptic I/E ratio was decreased. Decreasing the synaptic I/E ratio in tectal neurons increased the variance of first spike latency in response to full-field visual stimulation, increased recurrent activity in the tectal circuit, enlarged spatial receptive fields, and lengthened the temporal integration window. We used the benzodiazepine diazepam (DZ) to increase inhibitory synaptic activity. DZ increased optic nerve-evoked inhibitory transmission but did not affect evoked excitatory currents, resulting in an increase in the I/E ratio of ∼30%. Increasing the I/E ratio with DZ decreased the variance of first spike latency, decreased spatial receptive field size, and lengthened temporal receptive fields. Sequential recordings of spikes and excitatory and inhibitory synaptic inputs to the same visual stimuli demonstrated that decreasing or increasing the I/E ratio disrupted input/output relations. We assessed the effect of an altered I/E ratio on a visually guided behavior that requires the optic tectum. Increasing and decreasing I/E in tectal neurons blocked the tectally mediated visual avoidance behavior. Because ICL expression, γ2-subunit knockdown, and DZ did not directly affect excitatory synaptic transmission, we interpret the results of our study as evidence that partially decreasing or increasing the ratio of I/E disrupts several measures of visual system information processing and visually guided behavior in an intact vertebrate.

  16. Distinct populations of neurons respond to emotional valence and arousal in the human subthalamic nucleus

    PubMed Central

    Sieger, Tomáš; Serranová, Tereza; Růžička, Filip; Vostatek, Pavel; Wild, Jiří; Šťastná, Daniela; Bonnet, Cecilia; Novák, Daniel; Růžička, Evžen; Urgošík, Dušan; Jech, Robert

    2015-01-01

    Both animal studies and studies using deep brain stimulation in humans have demonstrated the involvement of the subthalamic nucleus (STN) in motivational and emotional processes; however, participation of this nucleus in processing human emotion has not been investigated directly at the single-neuron level. We analyzed the relationship between the neuronal firing from intraoperative microrecordings from the STN during affective picture presentation in patients with Parkinson’s disease (PD) and the affective ratings of emotional valence and arousal performed subsequently. We observed that 17% of neurons responded to emotional valence and arousal of visual stimuli according to individual ratings. The activity of some neurons was related to emotional valence, whereas different neurons responded to arousal. In addition, 14% of neurons responded to visual stimuli. Our results suggest the existence of neurons involved in processing or transmission of visual and emotional information in the human STN, and provide evidence of separate processing of the affective dimensions of valence and arousal at the level of single neurons as well. PMID:25713375

  17. Predictive information speeds up visual awareness in an individuation task by modulating threshold setting, not processing efficiency.

    PubMed

    De Loof, Esther; Van Opstal, Filip; Verguts, Tom

    2016-04-01

    Theories on visual awareness claim that predicted stimuli reach awareness faster than unpredicted ones. In the current study, we disentangle whether prior information about the upcoming stimulus affects visual awareness of stimulus location (i.e., individuation) by modulating processing efficiency or threshold setting. Analogous research on stimulus identification revealed that prior information modulates threshold setting. However, as identification and individuation are two functionally and neurally distinct processes, the mechanisms underlying identification cannot simply be extrapolated directly to individuation. The goal of this study was therefore to investigate how individuation is influenced by prior information about the upcoming stimulus. To do so, a drift diffusion model was fitted to estimate the processing efficiency and threshold setting for predicted versus unpredicted stimuli in a cued individuation paradigm. Participants were asked to locate a picture, following a cue that was congruent, incongruent or neutral with respect to the picture's identity. Pictures were individuated faster in the congruent and neutral condition compared to the incongruent condition. In the diffusion model analysis, the processing efficiency was not significantly different across conditions. However, the threshold setting was significantly higher following an incongruent cue compared to both congruent and neutral cues. Our results indicate that predictive information about the upcoming stimulus influences visual awareness by shifting the threshold for individuation rather than by enhancing processing efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.
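The threshold-versus-efficiency distinction this record draws can be illustrated with a minimal random-walk simulation of a one-boundary drift diffusion process (illustrative parameters, not the fitted model): raising the decision threshold lengthens response times even when the drift rate (processing efficiency) is unchanged.

```python
import random

def simulate_rt(drift, threshold, n_trials=1000, dt=0.001, noise=0.1, seed=1):
    """Mean decision time of a one-boundary drift diffusion process.
    Evidence accumulates with rate `drift` plus Gaussian noise until it
    crosses `threshold`."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while x < threshold:
            x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
            t += dt
        times.append(t)
    return sum(times) / len(times)

# Same drift rate (efficiency) in both conditions; only the threshold differs
rt_congruent = simulate_rt(drift=1.0, threshold=0.10)    # lower threshold
rt_incongruent = simulate_rt(drift=1.0, threshold=0.15)  # raised threshold
```

For a single absorbing boundary with positive drift, the expected decision time is threshold/drift, so the incongruent (raised-threshold) condition produces slower responses at identical processing efficiency, which is the pattern the diffusion-model analysis above reports.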

  18. Lightness Constancy in Surface Visualization

    PubMed Central

    Szafir, Danielle Albers; Sarikaya, Alper; Gleicher, Michael

    2016-01-01

    Color is a common channel for displaying data in surface visualization, but is affected by the shadows and shading used to convey surface depth and shape. Understanding encoded data in the context of surface structure is critical for effective analysis in a variety of domains, such as in molecular biology. In the physical world, lightness constancy allows people to accurately perceive shadowed colors; however, its effectiveness in complex synthetic environments such as surface visualizations is not well understood. We report a series of crowdsourced and laboratory studies that confirm the existence of lightness constancy effects for molecular surface visualizations using ambient occlusion. We provide empirical evidence of how common visualization design decisions can impact viewers’ abilities to accurately identify encoded surface colors. These findings suggest that lightness constancy aids in understanding color encodings in surface visualization and reveal a correlation between visualization techniques that improve color interpretation in shadow and those that enhance perceptions of surface depth. These results collectively suggest that understanding constancy in practice can inform effective visualization design. PMID:26584495

  19. Health motivation and product design determine consumers' visual attention to nutrition information on food products.

    PubMed

    Visschers, Vivianne H M; Hess, Rebecca; Siegrist, Michael

    2010-07-01

In the present study we investigated consumers' visual attention to nutrition information on food products using an indirect instrument, an eye tracker. In addition, we examined whether people with a health motivation focus on nutrition information on food products more than people with a taste motivation. Respondents were instructed to choose one of five cereals for either the kindergarten (health motivation) or the student cafeteria (taste motivation). The eye tracker measured their visual attention during this task, after which respondents completed a short questionnaire. The study took place in a laboratory at ETH Zurich, Switzerland; videos and questionnaires from thirty-two students (seventeen males; mean age 24.91 years) were analysed. Respondents with a health motivation viewed the nutrition information on the food products for longer and more often than respondents with a taste motivation. Health motivation also seemed to stimulate deeper processing of the nutrition information. The student cafeteria group focused primarily on the other information and did so for longer and more often than the health motivation group. Additionally, the package design affected participants' nutrition information search. Two factors appear to influence whether people pay attention to nutrition information on food products: their motivation and the product's design. If the package design does not sufficiently facilitate the localization of nutrition information, health motivation can stimulate consumers to look for nutrition information so that they may make a more deliberate food choice.
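The dwell-time and fixation-count measures this record reports are conventionally computed by intersecting fixations with an area of interest (AOI) such as the nutrition label. A minimal sketch with invented coordinates and durations (not the study's data):

```python
def aoi_stats(fixations, aoi):
    """Total dwell time (ms) and fixation count inside a rectangular AOI.
    fixations: iterable of (x, y, duration_ms); aoi: (x_min, y_min, x_max, y_max)."""
    x0, y0, x1, y1 = aoi
    hits = [d for x, y, d in fixations if x0 <= x <= x1 and y0 <= y <= y1]
    return sum(hits), len(hits)

# Hypothetical gaze data; the nutrition label occupies the rectangle (0,0)-(100,50)
fixations = [(20, 10, 250), (80, 40, 300), (150, 60, 400), (30, 20, 180)]
dwell, count = aoi_stats(fixations, (0, 0, 100, 50))
```

Comparing `dwell` and `count` between the health-motivation and taste-motivation groups yields exactly the "longer and more often" contrast described above; the third fixation falls outside the label AOI and is excluded.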

  20. Age-related differences in processing visual device and task characteristics when using technical devices.

    PubMed

    Oehl, M; Sutter, C

    2015-05-01

With aging, visual feedback becomes increasingly relevant in action control. Consequently, visual device and task characteristics should increasingly affect tool use. Focussing on late working age, the present study investigates age-related differences in processing task-irrelevant (display size) and task-relevant visual information (task difficulty). Young and middle-aged participants (20-35 and 36-64 years of age, respectively) sat in front of a touch screen with differently sized active touch areas (4″ to 12″) and performed pointing tasks of differing difficulty (1.8-5 bits). Both display size and age affected pointing performance, but the two variables did not interact, and aiming duration moderated both effects. Furthermore, task difficulty affected the pointing durations of middle-aged adults more than those of young adults. Again, aiming duration accounted for the variance in the data. The onset of an age-related decline in aiming duration can be clearly located in middle adulthood. Thus, the fine psychomotor ability "aiming" is a moderator and predictor of age-related differences in pointing tasks. The results support a user-specific design for small technical devices with touch interfaces. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
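Pointing-task difficulty quoted in bits is conventionally the Fitts' law index of difficulty. A sketch using the Shannon formulation ID = log2(D/W + 1); the distances and widths below are illustrative, and the study may have used a different variant of the formula:

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty in bits (Shannon formulation):
    log2(movement distance / target width + 1)."""
    return math.log2(distance / width + 1)

# Difficulty grows as targets shrink or move farther away
easy = index_of_difficulty(distance=25.0, width=10.0)   # ~1.8 bits
hard = index_of_difficulty(distance=310.0, width=10.0)  # 5.0 bits
```

Fitts' law then predicts movement time as MT = a + b * ID, so the 1.8-5 bit range above spans roughly a threefold spread in the difficulty term.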

  1. Computation and visualization of uncertainty in surgical navigation.

    PubMed

    Simpson, Amber L; Ma, Burton; Vasarhelyi, Edward M; Borschneck, Dan P; Ellis, Randy E; James Stewart, A

    2014-09-01

    Surgical displays do not show uncertainty information with respect to the position and orientation of instruments. Data is presented as though it were perfect; surgeons unaware of this uncertainty could make critical navigational mistakes. The propagation of uncertainty to the tip of a surgical instrument is described and a novel uncertainty visualization method is proposed. An extensive study with surgeons has examined the effect of uncertainty visualization on surgical performance with pedicle screw insertion, a procedure highly sensitive to uncertain data. It is shown that surgical performance (time to insert screw, degree of breach of pedicle, and rotation error) is not impeded by the additional cognitive burden imposed by uncertainty visualization. Uncertainty can be computed in real time and visualized without adversely affecting surgical performance, and the best method of uncertainty visualization may depend upon the type of navigation display. Copyright © 2013 John Wiley & Sons, Ltd.
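The propagation of uncertainty to the instrument tip that this record describes can be illustrated in its simplest first-order form: angular tracking noise is amplified by the tool's lever arm and combines with translational noise. This is a small-angle sketch with invented numbers, not the paper's full covariance model:

```python
import math

def tip_uncertainty(sigma_trans, sigma_angle_rad, tool_length):
    """First-order positional standard deviation at the tool tip: the
    tracker's translational noise combined (in quadrature) with angular
    noise amplified by the tool length (small-angle approximation)."""
    lever_arm = tool_length * sigma_angle_rad
    return math.hypot(sigma_trans, lever_arm)

# Hypothetical values: 0.25 mm translational noise, 0.2 deg angular noise,
# 200 mm instrument shaft (roughly pedicle-probe scale)
sigma_tip = tip_uncertainty(0.25, math.radians(0.2), 200.0)
```

Even a fraction of a degree of orientation error dominates the tip uncertainty for a long instrument, which is why visualizing uncertainty at the tip, rather than at the tracked base, matters for procedures as sensitive as pedicle screw insertion.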

  2. Designing a visualization system for hydrological data

    NASA Astrophysics Data System (ADS)

    Fuhrmann, Sven

    2000-02-01

    The field of hydrology, like any other scientific field, is strongly affected by rapid technological evolution. The spread of modern information and communication technology over the last three decades has led to increased collection, availability and use of spatial and temporal digital hydrological data. Over a two-year research period, a working group in Muenster developed and applied methods for the visualization of digital hydrological data and the documentation of hydrological models, producing a low-cost multimedia hydrological visualization system (HydroVIS) for the Weser river catchment. The research group designed HydroVIS under freeware constraints and sought to show which multimedia visualization techniques can be used effectively in a nonprofit hydrological visualization system. The system's visual components include electronic maps, temporal and nontemporal cartographic animations, displays of geologic profiles, interactive diagrams, and hypertext including photographs and tables.

  3. The Influence of Restricted Visual Feedback on Dribbling Performance in Youth Soccer Players.

    PubMed

    Fransen, Job; Lovell, Thomas W J; Bennett, Kyle J M; Deprez, Dieter; Deconinck, Frederik J A; Lenoir, Matthieu; Coutts, Aaron J

    2017-04-01

    The aim of the current study was to examine the influence of restricted visual feedback using stroboscopic eyewear on the dribbling performance of youth soccer players. Three dribble test conditions were used in a within-subjects design to measure the effect of restricted visual feedback on soccer dribbling performance in 189 youth soccer players (age: 10-18 y) classified as fast, average or slow dribblers. The results showed that limiting visual feedback increased dribble test times across all abilities. Furthermore, the largest performance decrement between stroboscopic and full vision conditions was in fast dribblers, showing that fast dribblers were most affected by reduced visual information. This may be due to a greater dependency on visual feedback at increased speeds, which may limit the ability to maintain continuous control of the ball. These findings may have important implications for the development of soccer dribbling ability.

  4. Effects of visual and verbal interaction on unintentional interpersonal coordination.

    PubMed

    Richardson, Michael J; Marsh, Kerry L; Schmidt, R C

    2005-02-01

    Previous research has demonstrated that people's movements can become unintentionally coordinated during interpersonal interaction. The current study sought to uncover the degree to which visual and verbal (conversation) interaction constrains and organizes the rhythmic limb movements of coactors. Two experiments were conducted in which pairs of participants completed an interpersonal puzzle task while swinging handheld pendulums with instructions that minimized intentional coordination but facilitated either visual or verbal interaction. Cross-spectral analysis revealed a higher degree of coordination for conditions in which the pairs were visually coupled. In contrast, verbal interaction alone was not found to provide a sufficient medium for unintentional coordination to occur, nor did it enhance the unintentional coordination that emerged during visual interaction. The results raise questions concerning differences between visual and verbal informational linkages during interaction and how these differences may affect interpersonal movement production and its coordination.
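
    Cross-spectral analysis of the kind used here quantifies frequency-specific coupling between two movement time series. A minimal numpy sketch with simulated signals (not the study's pendulum data) recovers the relative phase at the dominant shared frequency:

```python
import numpy as np

def cross_spectral_phase(x, y, fs):
    """Relative phase (radians) of y with respect to x at the dominant
    shared frequency, taken from the cross-spectrum X * conj(Y)."""
    X = np.fft.rfft(x - x.mean())
    Y = np.fft.rfft(y - y.mean())
    cross = X * np.conj(Y)
    k = np.argmax(np.abs(cross))                 # dominant shared bin
    freq = np.fft.rfftfreq(len(x), 1 / fs)[k]
    return freq, np.angle(cross[k])

# Two simulated 1 Hz limb oscillations, the second lagging by 90 degrees
fs = 100
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 1.0 * t)
y = np.sin(2 * np.pi * 1.0 * t - np.pi / 2)
freq, phase = cross_spectral_phase(x, y, fs)     # freq ~1.0, phase ~pi/2
```

    With real movement data, windowed (e.g. Welch-averaged) estimates would be used so that the stability of the phase relation, not just its value, can be assessed.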

  5. Seeing the hand while reaching speeds up on-line responses to a sudden change in target position

    PubMed Central

    Reichenbach, Alexandra; Thielscher, Axel; Peer, Angelika; Bülthoff, Heinrich H; Bresciani, Jean-Pierre

    2009-01-01

    Goal-directed movements are executed under the permanent supervision of the central nervous system, which continuously processes sensory afferents and triggers on-line corrections if movement accuracy seems to be compromised. For arm reaching movements, visual information about the hand plays an important role in this supervision, notably improving reaching accuracy. Here, we tested whether visual feedback of the hand affects the latency of on-line responses to an external perturbation when reaching for a visual target. Two types of perturbation were used: visual perturbation consisted of changing the spatial location of the target, and kinesthetic perturbation of applying a force step to the reaching arm. For both types of perturbation, the hand trajectory and the electromyographic (EMG) activity of shoulder muscles were analysed to assess whether visual feedback of the hand speeds up on-line corrections. Without visual feedback of the hand, on-line responses to visual perturbation exhibited the longest latency. This latency was reduced by about 10% when visual feedback of the hand was provided. On the other hand, the latency of on-line responses to kinesthetic perturbation was independent of the availability of visual feedback of the hand. In a control experiment, we tested the effect of visual feedback of the hand on visual and kinesthetic two-choice reaction times, for which coordinate transformation is not critical. Two-choice reaction times were never facilitated by visual feedback of the hand. Taken together, our results suggest that visual feedback of the hand speeds up on-line corrections when the position of the visual target with respect to the body must be re-computed during movement execution. This facilitation probably results from the possibility to map hand- and target-related information in a common visual reference frame. PMID:19675067

  6. The role of visual imagery in the retention of information from sentences.

    PubMed

    Drose, G S; Allen, G L

    1994-01-01

    We conducted two experiments to evaluate a multiple-code model of sentence memory that posits both propositional and visual representational systems. Both experiments involved recognition memory. The results of Experiment 1 indicated that subjects' recognition memory for concrete sentences was superior to their recognition memory for abstract sentences; instructions to use visual imagery to enhance recognition performance yielded no effects. Experiment 2 tested the prediction that interference from a visual task would differentially affect recognition memory for concrete sentences. Results showed the interference task had a detrimental effect on recognition memory for both concrete and abstract sentences. Overall, the evidence provided partial support for both a multiple-code model and a semantic integration model of sentence memory.

  7. Behold the voice of wrath: cross-modal modulation of visual attention by anger prosody.

    PubMed

    Brosch, Tobias; Grandjean, Didier; Sander, David; Scherer, Klaus R

    2008-03-01

    Emotionally relevant stimuli are prioritized in human information processing. It has repeatedly been shown that selective spatial attention is modulated by the emotional content of a stimulus. Until now, studies investigating this phenomenon have only examined within-modality effects, most frequently using pictures of emotional stimuli to modulate visual attention. In this study, we used simultaneously presented utterances with emotional and neutral prosody as cues for a visually presented target in a cross-modal dot probe task. Response times towards targets were faster when they appeared at the location of the source of the emotional prosody. Our results show for the first time a cross-modal attentional modulation of visual attention by auditory affective prosody.

  8. Sensory processing patterns predict the integration of information held in visual working memory.

    PubMed

    Lowe, Matthew X; Stevenson, Ryan A; Wilson, Kristin E; Ouslis, Natasha E; Barense, Morgan D; Cant, Jonathan S; Ferber, Susanne

    2016-02-01

    Given the limited resources of visual working memory, multiple items may be remembered as an averaged group or ensemble. As a result, local information may be ill-defined, but these ensemble representations provide accurate diagnostics of the natural world by combining gist information with item-level information held in visual working memory. Some neurodevelopmental disorders are characterized by sensory processing profiles that predispose individuals to avoid or seek out sensory stimulation, fundamentally altering their perceptual experience. Here, we report that such processing styles affect the computation of ensemble statistics in the general population. We identified stable adult sensory processing patterns to demonstrate that individuals with low sensory thresholds, who show a greater proclivity to engage in active response strategies to prevent sensory overstimulation, are less likely to integrate mean size information across a set of similar items and are therefore more likely to be biased away from the mean size representation of an ensemble display. We therefore propose that the study of ensemble processing should extend beyond the statistics of the display to also consider the statistics of the observer. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Surround-Masking Affects Visual Estimation Ability

    PubMed Central

    Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.

    2017-01-01

    Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved with the removal of unessential visual input functionally contribute toward number acuity. The visual estimation judgments of typically developed adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround masking results may help explain the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845

  10. Emotion and anxiety potentiate the way attention alters visual appearance.

    PubMed

    Barbot, Antoine; Carrasco, Marisa

    2018-04-12

    The ability to swiftly detect and prioritize the processing of relevant information around us is critical for the way we interact with our environment. Selective attention is a key mechanism that serves this purpose, improving performance in numerous visual tasks. Reflexively attending to sudden information helps detect impending threat or danger, a possible reason why emotion modulates the way selective attention affects perception. For instance, the sudden appearance of a fearful face potentiates the effects of exogenous (involuntary, stimulus-driven) attention on performance. Internal states such as trait anxiety can also modulate the impact of attention on early visual processing. However, attention does not only improve performance; it also alters the way visual information appears to us, e.g. by enhancing perceived contrast. Here we show that emotion potentiates the effects of exogenous attention on both performance and perceived contrast. Moreover, we found that trait anxiety mediates these effects, with stronger influences of attention and emotion in anxious observers. Finally, changes in performance and appearance correlated with each other, likely reflecting common attentional modulations. Altogether, our findings show that emotion and anxiety interact with selective attention to truly alter how we see.

  11. Visual attention in a complex search task differs between honeybees and bumblebees.

    PubMed

    Morawetz, Linde; Spaethe, Johannes

    2012-07-15

    Mechanisms of spatial attention are used when the amount of gathered information exceeds processing capacity. Such mechanisms have been proposed in bees, but have not yet been experimentally demonstrated. We provide evidence that selective attention influences the foraging performance of two social bee species, the honeybee Apis mellifera and the bumblebee Bombus terrestris. Visual search tasks, originally developed for application in human psychology, were adapted for behavioural experiments on bees. We examined the impact of distracting visual information on search performance, which we measured as error rate and decision time. We found that bumblebees were significantly less affected by distracting objects than honeybees. Based on the results, we conclude that the search mechanism in honeybees is serial-like, whereas in bumblebees it shows the characteristics of a restricted parallel-like search. Furthermore, the bees differed in their strategy to solve the speed-accuracy trade-off. Whereas bumblebees displayed slow but correct decision-making, honeybees exhibited fast and inaccurate decision-making. We propose two neuronal mechanisms of visual information processing that account for the different responses between honeybees and bumblebees, and we correlate species-specific features of the search behaviour to differences in habitat and life history.

  12. Thinking about the weather: How display salience and knowledge affect performance in a graphic inference task.

    PubMed

    Hegarty, Mary; Canham, Matt S; Fabrikant, Sara I

    2010-01-01

    Three experiments examined how bottom-up and top-down processes interact when people view and make inferences from complex visual displays (weather maps). Bottom-up effects of display design were investigated by manipulating the relative visual salience of task-relevant and task-irrelevant information across different maps. Top-down effects of domain knowledge were investigated by examining performance and eye fixations before and after participants learned relevant meteorological principles. Map design and knowledge interacted such that salience had no effect on performance before participants learned the meteorological principles; however, after learning, participants were more accurate if they viewed maps that made task-relevant information more visually salient. Effects of display design on task performance were somewhat dissociated from effects of display design on eye fixations. The results support a model in which eye fixations are directed primarily by top-down factors (task and domain knowledge). They suggest that good display design facilitates performance not just by guiding where viewers look in a complex display but also by facilitating processing of the visual features that represent task-relevant information at a given display location. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  13. Effect of Cognitive Demand on Functional Visual Field Performance in Senior Drivers with Glaucoma.

    PubMed

    Gangeddula, Viswa; Ranchet, Maud; Akinwuntan, Abiodun E; Bollinger, Kathryn; Devos, Hannes

    2017-01-01

    Purpose: To investigate the effect of cognitive demand on functional visual field performance in drivers with glaucoma. Method: This study included 20 drivers with open-angle glaucoma and 13 age- and sex-matched controls. Visual field performance was evaluated under different degrees of cognitive demand: a static visual field condition (C1), dynamic visual field condition (C2), and dynamic visual field condition with active driving (C3) using an interactive, desktop driving simulator. The number of correct responses (accuracy) and response times on the visual field task were compared between groups and between conditions using Kruskal-Wallis tests. General linear models were employed to compare cognitive workload, recorded in real-time through pupillometry, between groups and conditions. Results: Adding cognitive demand (C2 and C3) to the static visual field test (C1) adversely affected accuracy and response times in both groups (p < 0.05). However, drivers with glaucoma performed worse than did control drivers when the static condition changed to a dynamic condition [C2 vs. C1 accuracy; glaucoma: median difference (Q1-Q3) 3 (2-6.50) vs. 2 (0.50-2.50); p = 0.05] and to a dynamic condition with active driving [C3 vs. C1 accuracy; glaucoma: 2 (2-6) vs. 1 (0.50-2); p = 0.02]. Overall, drivers with glaucoma exhibited greater cognitive workload than controls (p = 0.02). Conclusion: Cognitive demand disproportionately affects functional visual field performance in drivers with glaucoma. Our results may inform the development of a performance-based visual field test for drivers with glaucoma.
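
    The Kruskal-Wallis comparisons used here rank all observations jointly across groups. A minimal sketch of the H statistic, assuming no tied values (real analyses, including tie correction and p-values, should use a library routine such as scipy.stats.kruskal):

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H statistic for k independent samples.

    Assumes no tied values; ties would require averaged ranks and a
    tie-correction factor.
    """
    # Pool all values, remembering which group each came from
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return 12.0 / (n * (n + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)) - 3 * (n + 1)

# Two hypothetical accuracy-score groups with no overlap
h = kruskal_h([1, 2, 3], [4, 5, 6])   # ~3.86
```

    Larger H indicates a bigger separation of the groups' rank distributions; the statistic is compared against a chi-squared distribution with k - 1 degrees of freedom.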

  14. “Distracters” Do Not Always Distract: Visual Working Memory for Angry Faces is Enhanced by Incidental Emotional Words

    PubMed Central

    Jackson, Margaret C.; Linden, David E. J.; Raymond, Jane E.

    2012-01-01

    We are often required to filter out distraction in order to focus on a primary task during which working memory (WM) is engaged. Previous research has shown that negative versus neutral distracters presented during a visual WM maintenance period significantly impair memory for neutral information. However, the contents of WM are often also emotional in nature. The question we address here is how incidental information might impact upon visual WM when both this and the memory items contain emotional information. We presented emotional versus neutral words during the maintenance interval of an emotional visual WM faces task. Participants encoded two angry or happy faces into WM, and several seconds into a 9 s maintenance period a negative, positive, or neutral word was flashed on the screen three times. A single neutral test face was presented for retrieval with a face identity that was either present or absent in the preceding study array. WM for angry face identities was significantly better when an emotional (negative or positive) versus neutral (or no) word was presented. In contrast, WM for happy face identities was not significantly affected by word valence. These findings suggest that the presence of emotion within an intervening stimulus boosts the emotional value of threat-related information maintained in visual WM and thus improves performance. In addition, we show that incidental events that are emotional in nature do not always distract from an ongoing WM task. PMID:23112782

  15. Body posture differentially impacts on visual attention towards tool, graspable, and non-graspable objects.

    PubMed

    Ambrosini, Ettore; Costantini, Marcello

    2017-02-01

    Viewed objects have been shown to afford suitable actions, even in the absence of any intention to act. However, little is known about whether gaze behavior (i.e., the way we simply look at objects) is sensitive to the actions afforded by the seen object and how our actual motor possibilities affect this behavior. We recorded participants' eye movements during the observation of tools, graspable and ungraspable objects, while their hands were either freely resting on the table or tied behind their back. The effects of the observed object and hand posture on gaze behavior were measured by comparing the actual fixation distribution with that predicted by two widely supported models of visual attention, namely the Graph-Based Visual Saliency and the Adaptive Whitening Salience models. Results showed that saliency models did not accurately predict participants' fixation distributions for tools. Indeed, participants mostly fixated the action-related, functional part of the tools, regardless of its visual saliency. Critically, the restriction of the participants' action possibility led to a significant reduction of this effect and significantly improved the model prediction of the participants' gaze behavior. We suggest, first, that action-relevant object information at least in part guides gaze behavior. Second, postural information interacts with visual information in the generation of priority maps of fixation behavior. We support the view that the kind of information we access from the environment is constrained by our readiness to act. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Languages on the screen: is film comprehension related to the viewers' fluency level and to the language in the subtitles?

    PubMed

    Lavaur, Jean-Marc; Bairstow, Dominique

    2011-12-01

    This research aimed at studying the role of subtitling in film comprehension. It focused on the languages in which the subtitles are written and on the participants' fluency levels in the languages presented in the film. In a preliminary part of the study, the most salient visual and dialogue elements of a short sequence of an English film were extracted by means of a free recall task after showing two versions of the film (first a silent, then a dubbed-into-French version) to native French speakers. This visual and dialogue information was used to construct a questionnaire on the understanding of the film presented in the main part of the study, in which other French native speakers with beginner, intermediate, or advanced fluency levels in English were shown one of three versions of the film used in the preliminary part. These versions had no subtitles, English subtitles, or French subtitles, respectively. The results indicate a global interaction between all three factors in this study: For the beginners, visual processing dropped from the version without subtitles to that with English subtitles, and even more so if French subtitles were provided, whereas the effect of film version on dialogue comprehension was the reverse. The advanced participants achieved higher comprehension for both types of information with the version without subtitles, and dialogue information processing was always better than visual information processing. The intermediate group similarly processed dialogues better than visual information, but was not affected by film version. These results imply that, depending on the viewers' fluency levels, the language of subtitles can have different effects on movie information processing.

  17. Mental Rotation of Tactical Instruction Displays Affects Information Processing Demand and Execution Accuracy in Basketball

    ERIC Educational Resources Information Center

    Koopmann, Till; Steggemann-Weinrich, Yvonne; Baumeister, Jochen; Krause, Daniel

    2017-01-01

    Purpose: In sports games, coaches often use tactic boards to present tactical instructions during time-outs (e.g., 20 s to 60 s in basketball). Instructions should be presented in a way that enables fast and errorless information processing for the players. The aim of this study was to test the effect of different orientations of visual tactical…

  18. Applying a Geospatial Visualization Based on USSD Messages to Real Time Identification of Epidemiological Risk Areas in Developing Countries: A Case of Study of Paraguay.

    PubMed

    Ochoa, Silvia; Talavera, Julia; Paciello, Julio

    2015-01-01

    The identification of epidemiological risk areas is one of the major problems in public health. Information management strategies are needed to facilitate prevention and control of disease in the affected areas. This paper presents a model to optimize geographical data collection of suspected or confirmed disease occurrences using the Unstructured Supplementary Service Data (USSD) mobile technology, considering its wide adoption even in developing countries such as Paraguay. A Geographic Information System (GIS) is proposed for visualizing potential epidemiological risk areas in real time, aiming to support decision making and the implementation of prevention or contingency programs for public health.

  19. Visual attention and goal pursuit: deliberative and implemental mindsets affect breadth of attention.

    PubMed

    Büttner, Oliver B; Wieber, Frank; Schulz, Anna Maria; Bayer, Ute C; Florack, Arnd; Gollwitzer, Peter M

    2014-10-01

    Mindset theory suggests that a deliberative mindset entails openness to information in one's environment, whereas an implemental mindset entails filtering of information. We hypothesized that this open- versus closed-mindedness influences individuals' breadth of visual attention. In Studies 1 and 2, we induced an implemental or deliberative mindset, and measured breadth of attention using participants' length estimates of x-winged Müller-Lyer figures. Both studies demonstrate a narrower breadth of attention in the implemental mindset than in the deliberative mindset. In Study 3, we manipulated participants' mindsets and measured the breadth of attention by tracking eye movements during scene perception. Implemental mindset participants focused on foreground objects, whereas deliberative mindset participants attended more evenly to the entire scene. Our findings imply that deliberative versus implemental mindsets already operate at the level of visual attention. © 2014 by the Society for Personality and Social Psychology, Inc.

  20. The Pleasantness of Visual Symmetry: Always, Never or Sometimes

    PubMed Central

    Pecchinenda, Anna; Bertamini, Marco; Makin, Alexis David James; Ruta, Nicole

    2014-01-01

    There is evidence of a preference for visual symmetry. This is true from mate selection in the animal world to the aesthetic appreciation of works of art. It has been proposed that this preference is due to processing fluency, which engenders positive affect. But is visual symmetry pleasant? The evidence is mixed: explicit preference measures suggest that it is, whereas implicit measures show that visual symmetry does not spontaneously engender positive affect unless participants intentionally assess visual regularities. In four experiments using variants of the affective priming paradigm, we investigated when visual symmetry engenders positive affect. Findings showed that, when no Stroop-like effects or post-lexical mechanisms enter into play, visual symmetry spontaneously elicits positive affect and results in affective congruence effects. PMID:24658112

  1. Axonal Conduction Delays, Brain State, and Corticogeniculate Communication.

    PubMed

    Stoelzel, Carl R; Bereshpolova, Yulia; Alonso, Jose-Manuel; Swadlow, Harvey A

    2017-06-28

    Thalamocortical conduction times are short, but layer 6 corticothalamic axons display an enormous range of conduction times, some exceeding 40-50 ms. Here, we investigate (1) how axonal conduction times of corticogeniculate (CG) neurons are related to the visual information conveyed to the thalamus, and (2) how alert versus nonalert awake brain states affect visual processing across the spectrum of CG conduction times. In awake female Dutch-Belted rabbits, we found 58% of CG neurons to be visually responsive, and 42% to be unresponsive. All responsive CG neurons had simple, orientation-selective receptive fields, and generated sustained responses to stationary stimuli. CG axonal conduction times were strongly related to modulated firing rates (F1 values) generated by drifting grating stimuli, and their associated interspike interval distributions, suggesting a continuum of visual responsiveness spanning the spectrum of axonal conduction times. CG conduction times were also significantly related to visual response latency, contrast sensitivity (C-50 values), directional selectivity, and optimal stimulus velocity. Increasing alertness did not cause visually unresponsive CG neurons to become responsive and did not change the response linearity (F1/F0 ratios) of visually responsive CG neurons. However, for visually responsive CG neurons, increased alertness nearly doubled the modulated response amplitude to optimal visual stimulation (F1 values), significantly shortened response latency, and dramatically increased response reliability. These effects of alertness were uniform across the broad spectrum of CG axonal conduction times. SIGNIFICANCE STATEMENT Corticothalamic neurons of layer 6 send a dense feedback projection to thalamic nuclei that provide input to sensory neocortex. While sensory information reaches the cortex after brief thalamocortical axonal delays, corticothalamic axons can exhibit conduction delays of <2 ms to 40-50 ms. 
Here, in the corticogeniculate visual system of awake rabbits, we investigate the functional significance of this axonal diversity, and the effects of shifting alert/nonalert brain states on corticogeniculate processing. We show that axonal conduction times are strongly related to multiple visual response properties, suggesting a continuum of visual responsiveness spanning the spectrum of corticogeniculate axonal conduction times. We also show that transitions between awake brain states powerfully affect corticogeniculate processing, in some ways more strongly than in layer 4. Copyright © 2017 the authors 0270-6474/17/376342-17$15.00/0.

  2. The First Two R's.

    ERIC Educational Resources Information Center

    Tzeng, Ovid J. L.; Wang, William S. Y.

    1983-01-01

    Indicates that the way different languages reduce speech to script affects how visual information is processed in the brain, suggesting that the relation between script and speech underlying all types of writing systems plays an important part in reading behavior. Compares memory performance of native English/Chinese speakers. (JN)

  3. FAR and NEAR Target Dynamic Visual Acuity: A Functional Assessment of Canal and Otolith Performance

    NASA Technical Reports Server (NTRS)

    Peters, Brian T.; Brady, Rachel A.; Landsness, Eric C.; Black, F. Owen; Bloomberg, Jacob J.

    2004-01-01

    Upon their return to earth, astronauts experience the effects of vestibular adaptation to microgravity. The postflight changes in vestibular information processing can affect postural and locomotor stability and may lead to oscillopsia during activities of daily living. However, it is likely that time spent in microgravity affects canal and otolith function differently. As a result, the isolated rotational stimuli used in traditional tests of canal function may fail to identify vestibular deficits after spaceflight. Also, the functional consequences of deficits that are identified often remain unknown. In a gaze control task, the relative contributions of the canal and otolith organs are modulated by viewing distance. The ability to stabilize gaze during a perturbation on visual targets placed at different distances from the head may therefore provide independent insight into the function of these systems. Our goal was to develop a functional measure of gaze control that can also offer independent information about the function of the canal and otolith organs.

  4. How visual timing and form information affect speech and non-speech processing.

    PubMed

    Kim, Jeesun; Davis, Chris

    2014-10-01

    Auditory speech processing is facilitated when the talker's face/head movements are seen. This effect is typically explained in terms of visual speech providing form and/or timing information. We determined the effect of both types of information on a speech/non-speech task (non-speech stimuli were spectrally rotated speech). All stimuli were presented paired with the talker's static or moving face. Two types of moving face stimuli were used: full-face versions (both spoken form and timing information available) and modified face versions (only timing information provided by peri-oral motion available). The results showed that the peri-oral timing information facilitated response time for speech and non-speech stimuli compared to a static face. An additional facilitatory effect was found for full-face versions compared to the timing condition; this effect only occurred for speech stimuli. We propose the timing effect was due to cross-modal phase resetting; the form effect to cross-modal priming. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. The contribution of disengagement to temporal discriminability.

    PubMed

    Shipstead, Zach; Nespodzany, Ashley

    2018-05-01

    The present study examines the idea that time-based forgetting of outdated information can lead to better memory of currently relevant information. This was done using the visual arrays task, along with a between-subjects manipulation of both the retention interval (1 s vs. 4 s) and the time between two trials (1 s vs. 4 s). Consistent with prior work [Shipstead, Z., & Engle, R. W. (2013). Interference within the focus of attention: Working memory tasks reflect more than temporary maintenance. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 277-289; Experiment 1], longer retention intervals did not lead to diminished memory of currently relevant information. However, we did find that longer periods of time between two trials improved memory for currently relevant information. This replicates findings that indicate proactive interference affects visual arrays performance and extends previous findings to show that reduction of proactive interference can occur in a time-dependent manner.
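    Visual arrays performance of the kind examined here is commonly scored with Cowan's k, which converts hit and false-alarm rates in single-probe change detection into an estimated number of items held in memory. A hypothetical sketch (the abstract does not state the exact scoring formula used in this study):

    ```python
    def cowans_k(set_size, hit_rate, false_alarm_rate):
        """Cowan's k for single-probe change detection:
        k = N * (hit rate - false alarm rate), an estimate of the
        number of array items held in working memory."""
        return set_size * (hit_rate - false_alarm_rate)
    ```

    A participant probed on 4-item arrays with a 90% hit rate and a 10% false-alarm rate would be credited with holding about 3.2 items.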

  6. How visual cues for when to listen aid selective auditory attention.

    PubMed

    Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G

    2012-06-01

    Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.

  7. Material and shape perception based on two types of intensity gradient information

    PubMed Central

    Nishida, Shin'ya

    2018-01-01

    Visual estimation of the material and shape of an object from a single image involves a hard, ill-posed computational problem. Nevertheless, in daily life we feel we can estimate both reasonably well. The neural computation underlying this ability remains poorly understood. Here we propose that the human visual system uses different aspects of object images to estimate material and shape separately. Specifically, material perception relies mainly on intensity gradient magnitude information, while shape perception relies mainly on intensity gradient order information. A clue to this hypothesis was provided by the observation that luminance-histogram manipulation, which changes luminance gradient magnitudes but not the luminance-order map, effectively alters the material appearance but not the shape of an object. In agreement with this observation, we found that simulated physical material changes do not significantly affect the intensity order information. A series of psychophysical experiments further indicates that human surface shape perception is robust against intensity manipulations provided they do not disturb the intensity order information. In addition, we show that the two types of gradient information can be utilized for discriminating albedo changes from highlights. These findings suggest that the visual system relies on these diagnostic image features to estimate physical properties in the distal world. PMID:29702644
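    The central manipulation, changing gradient magnitudes while leaving the intensity-order map intact, can be illustrated with any strictly monotonic tone curve; the squaring curve below is an arbitrary illustrative choice, not the authors' stimulus manipulation:

    ```python
    import numpy as np

    def intensity_order(img):
        """Rank map of pixel intensities: each pixel's position in the
        sorted intensity order (ties broken by raster position)."""
        flat = img.ravel()
        ranks = np.empty(flat.size, dtype=np.int64)
        ranks[np.argsort(flat, kind="stable")] = np.arange(flat.size)
        return ranks.reshape(img.shape)

    # A strictly increasing remap (squaring values in [0, 1]) changes
    # local gradient magnitudes but preserves the intensity-order map.
    rng = np.random.default_rng(0)
    img = rng.random((8, 8))
    remapped = img ** 2
    ```

    Histogram equalization and similar tone-mapping operations are in the same family: strictly monotonic on intensity, hence order-preserving, which is what makes them useful for dissociating material cues from shape cues.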

  8. Visual function affects prosocial behaviors in older adults.

    PubMed

    Teoli, Dac A; Smith, Merideth D; Leys, Monique J; Jain, Priyanka; Odom, J Vernon

    2016-02-01

    Eye-related pathological conditions such as glaucoma, diabetic retinopathy, and age-related macular degeneration commonly lead to decreased peripheral/central field, decreased visual acuity, and increased functional disability. We sought to determine whether relationships exist between measures of visual function and reported prosocial behaviors in an older adult population with eye-related diagnoses. The sample consisted of adults aged ≥ 60 years at an academic hospital's eye institute. Vision ranged from normal to severe impairment. Medical charts provided the visual acuities, ocular disease, duration of disease (DD), and visual fields (VF). Measures of giving help were obtained via validated questionnaires on giving formal support (GFS) and giving informal support; measures of help received were perceived support (PS) and informal support received (ISR). ISR had subscales: tangible support (ISR-T), emotional support (ISR-E), and composite (ISR-C). Visual acuities of the better and worse seeing eyes were converted to LogMAR values. VF information was converted to a 4-point rating scale of binocular field loss severity. DD was measured in years. Among 96 participants (mean age 73.28; range 60-94), stepwise regression indicated a relationship of visual variables to GFS (p < 0.05; multiple R² = 0.1679 with acuity-better eye, VF rating, and DD), PS (p < 0.05; multiple R² = 0.2254 with acuity-better eye), ISR-C (p < 0.05; multiple R² = 0.041 with acuity-better eye), and ISR-T (p < 0.05; multiple R² = 0.1421 with acuity-better eye). The findings suggest eye-related conditions can impact levels and perceptions of support exchanges. Our data reinforce the importance of visual function as an influence on prosocial behavior in older adults.
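    The LogMAR conversion used here is standard: LogMAR is the base-10 logarithm of the minimum angle of resolution, so a Snellen fraction of 20/20 maps to 0.0 and 20/200 to 1.0. A minimal sketch:

    ```python
    import math

    def snellen_to_logmar(numerator, denominator):
        """Convert a Snellen fraction (e.g. 20/40) to LogMAR:
        LogMAR = log10(denominator / numerator)."""
        return math.log10(denominator / numerator)
    ```

    Higher LogMAR values indicate worse acuity, so regression coefficients on LogMAR run in the opposite direction from "better vision."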

  9. A Fuzzy-Based Approach for Sensing, Coding and Transmission Configuration of Visual Sensors in Smart City Applications

    PubMed Central

    Costa, Daniel G.; Collotta, Mario; Pau, Giovanni; Duran-Faundez, Cristian

    2017-01-01

    The advance of technologies in several areas has allowed the development of smart city applications, which can improve the way of life in modern cities. When visual sensors are employed in that scenario, still images and video streams may be retrieved from monitored areas, potentially providing valuable data for many applications. In fact, visual sensor networks may need to be highly dynamic, reflecting the changing parameters of smart cities. In this context, characteristics of the visual sensors and conditions of the monitored environment, as well as the status of other concurrent monitoring systems, may affect how visual sensors collect, encode and transmit information. This paper proposes a fuzzy-based approach to dynamically configure the way visual sensors operate with respect to sensing, coding and transmission patterns, exploiting different types of reference parameters. This innovative approach can be considered the basis for multi-system smart city applications based on visual monitoring, potentially bringing significant results to this research field. PMID:28067777

  10. A Fuzzy-Based Approach for Sensing, Coding and Transmission Configuration of Visual Sensors in Smart City Applications.

    PubMed

    Costa, Daniel G; Collotta, Mario; Pau, Giovanni; Duran-Faundez, Cristian

    2017-01-05

    The advance of technologies in several areas has allowed the development of smart city applications, which can improve the way of life in modern cities. When visual sensors are employed in that scenario, still images and video streams may be retrieved from monitored areas, potentially providing valuable data for many applications. In fact, visual sensor networks may need to be highly dynamic, reflecting the changing parameters of smart cities. In this context, characteristics of the visual sensors and conditions of the monitored environment, as well as the status of other concurrent monitoring systems, may affect how visual sensors collect, encode and transmit information. This paper proposes a fuzzy-based approach to dynamically configure the way visual sensors operate with respect to sensing, coding and transmission patterns, exploiting different types of reference parameters. This innovative approach can be considered the basis for multi-system smart city applications based on visual monitoring, potentially bringing significant results to this research field.
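    The abstract does not specify the paper's fuzzy engine, but the general pattern it names, fuzzify crisp inputs, fire a small rule base, defuzzify to an operating parameter, can be sketched. All membership breakpoints, rule consequents, and the two inputs below are invented for illustration:

    ```python
    def tri(x, a, b, c):
        """Triangular membership function rising on [a, b], falling on [b, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def configure_frame_rate(battery_pct, priority):
        """Zero-order Sugeno fuzzy controller mapping battery level
        (0-100 %) and event priority (0-1) to a frame rate in fps."""
        low_b  = tri(battery_pct, -1, 0, 60)
        high_b = tri(battery_pct, 40, 100, 101)
        low_p  = tri(priority, -0.1, 0.0, 0.6)
        high_p = tri(priority, 0.4, 1.0, 1.1)
        # Rule base: firing strength (min = fuzzy AND) -> crisp consequent.
        rules = [
            (min(low_b,  low_p),  1.0),   # conserve energy
            (min(low_b,  high_p), 10.0),  # important event despite low battery
            (min(high_b, low_p),  5.0),   # routine monitoring
            (min(high_b, high_p), 25.0),  # full-quality capture
        ]
        total = sum(w for w, _ in rules)
        return sum(w * z for w, z in rules) / total if total else 0.0
    ```

    A Mamdani variant would aggregate output fuzzy sets and defuzzify by centroid; the zero-order Sugeno form above keeps the sketch short while showing the same configure-by-rules idea.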

  11. On the role of spatial phase and phase correlation in vision, illusion, and cognition

    PubMed Central

    Gladilin, Evgeny; Eils, Roland

    2015-01-01

    Numerous findings indicate that spatial phase bears important cognitive information. Distortion of phase affects the topology of edge structures and makes images unrecognizable. In turn, appropriately phase-structured patterns give rise to various illusions of virtual image content and apparent motion. Despite a large body of phenomenological evidence, little is yet known about the role of phase information in the neural mechanisms of visual perception and cognition. Here, we analyze the role of spatial phase in computational and biological vision, the emergence of visual illusions, and pattern recognition. We hypothesize that the fundamental importance of phase information for invariant retrieval of structural image features and motion detection promoted the development of phase-based mechanisms of neural image processing in the course of the evolution of biological vision. Using an extension of the Fourier phase correlation technique, we show that core functions of the visual system such as motion detection and pattern recognition can be facilitated by the same basic mechanism. Our analysis suggests that the emergence of visual illusions can be attributed to the presence of coherently phase-shifted repetitive patterns as well as to the effects of acuity compensation by saccadic eye movements. We speculate that biological vision relies on perceptual mechanisms effectively similar to phase correlation, and we predict neural features of visual pattern (dis)similarity that can be used for experimental validation of our hypothesis of “cognition by phase correlation.” PMID:25954190

  12. On the role of spatial phase and phase correlation in vision, illusion, and cognition.

    PubMed

    Gladilin, Evgeny; Eils, Roland

    2015-01-01

    Numerous findings indicate that spatial phase bears important cognitive information. Distortion of phase affects the topology of edge structures and makes images unrecognizable. In turn, appropriately phase-structured patterns give rise to various illusions of virtual image content and apparent motion. Despite a large body of phenomenological evidence, little is yet known about the role of phase information in the neural mechanisms of visual perception and cognition. Here, we analyze the role of spatial phase in computational and biological vision, the emergence of visual illusions, and pattern recognition. We hypothesize that the fundamental importance of phase information for invariant retrieval of structural image features and motion detection promoted the development of phase-based mechanisms of neural image processing in the course of the evolution of biological vision. Using an extension of the Fourier phase correlation technique, we show that core functions of the visual system such as motion detection and pattern recognition can be facilitated by the same basic mechanism. Our analysis suggests that the emergence of visual illusions can be attributed to the presence of coherently phase-shifted repetitive patterns as well as to the effects of acuity compensation by saccadic eye movements. We speculate that biological vision relies on perceptual mechanisms effectively similar to phase correlation, and we predict neural features of visual pattern (dis)similarity that can be used for experimental validation of our hypothesis of "cognition by phase correlation."
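    The classical phase-correlation technique the authors extend has a compact form: normalize the cross-power spectrum of two images to unit magnitude, so that only the phase difference survives, and the inverse transform then peaks at the relative shift. A sketch with NumPy (not the authors' extended method):

    ```python
    import numpy as np

    def phase_correlation(img_a, img_b):
        """Return the circular shift (dy, dx) such that img_a is
        (approximately) img_b rolled by (dy, dx)."""
        # Cross-power spectrum, normalized to unit magnitude: only the
        # phase difference between the two images survives.
        R = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
        R /= np.maximum(np.abs(R), 1e-12)
        corr = np.fft.ifft2(R).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Map peak indices to signed shifts (wrap-around).
        return tuple(int(p) if p <= s // 2 else int(p) - s
                     for p, s in zip(peak, corr.shape))
    ```

    Because only phase is retained, the peak location is largely insensitive to global contrast and illumination changes, which is part of what makes phase attractive as a candidate cognitive feature.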

  13. Colour, vision and ergonomics.

    PubMed

    Pinheiro, Cristina; da Silva, Fernando Moreira

    2012-01-01

    This paper is based on a research project - Visual Communication and Inclusive Design-Colour, Legibility and Aged Vision - developed at the Faculty of Architecture of Lisbon. The research aims to determine specific design principles to be applied to (printed) visual communication design objects so that they can be easily read and perceived by all. The study's target group comprised socially active individuals between 55 and 80 years of age, and we used cultural event posters as objects of study and observation. The main objective is to bring together the study of areas such as colour, vision, older people's colour vision, ergonomics, chromatic contrasts, typography and legibility. In the end we will produce a manual with guidelines and information for applying this scientific knowledge to communication design practice. Within the normal aging process, visual functions gradually decline: the quality of vision worsens, and colour vision and contrast sensitivity are also affected. As people's needs change with age, design should help people and communities and improve quality of life in the present. By applying principles of visually accessible design and ergonomics, printed design objects (or interior spaces, urban environments, products, signage and all kinds of visual information) will be effective and easier on everyone's eyes, not only for visually impaired people but for all of us as we age.

  14. Integrating mechanisms of visual guidance in naturalistic language production.

    PubMed

    Coco, Moreno I; Keller, Frank

    2015-05-01

    Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual guidance (cue animacy) and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan pattern complexity. Furthermore, the eye-voice spans of the cued object and its perceptual competitor are similar, with latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.

  15. Micro-Valences: Perceiving Affective Valence in Everyday Objects

    PubMed Central

    Lebrecht, Sophie; Bar, Moshe; Barrett, Lisa Feldman; Tarr, Michael J.

    2012-01-01

    Perceiving the affective valence of objects influences how we think about and react to the world around us. Conversely, the speed and quality with which we visually recognize objects in a visual scene can vary dramatically depending on that scene’s affective content. Although typical visual scenes contain mostly “everyday” objects, affect perception in visual objects has been studied using somewhat atypical stimuli with strong affective valences (e.g., guns or roses). Here we explore whether affective valence must be strong or overt to exert an effect on our visual perception. We conclude that everyday objects carry subtle affective valences – “micro-valences” – which are intrinsic to their perceptual representation. PMID:22529828

  16. Producing Curious Affects: Visual Methodology as an Affecting and Conflictual Wunderkammer

    ERIC Educational Resources Information Center

    Staunaes, Dorthe; Kofoed, Jette

    2015-01-01

    Digital video cameras, smartphones, internet and iPads are increasingly used as visual research methods with the purpose of creating an affective corpus of data. Such visual methods are often combined with interviews or observations. Not only are visual methods part of the used research methods, the visual products are used as requisites in…

  17. Augmenting distractor filtering via transcranial magnetic stimulation of the lateral occipital cortex.

    PubMed

    Eštočinová, Jana; Lo Gerfo, Emanuele; Della Libera, Chiara; Chelazzi, Leonardo; Santandrea, Elisa

    2016-11-01

    Visual selective attention (VSA) optimizes perception and behavioral control by enabling efficient selection of relevant information and filtering of distractors. While focusing resources on task-relevant information helps counteract distraction, dedicated filtering mechanisms have recently been demonstrated, allowing neural systems to implement suitable policies for the suppression of potential interference. Limited evidence is presently available concerning the neural underpinnings of these mechanisms, and whether neural circuitry within the visual cortex might play a causal role in their instantiation, a possibility that we directly tested here. In two related experiments, transcranial magnetic stimulation (TMS) was applied over the lateral occipital cortex of healthy humans at different times during the execution of a behavioral task which entailed varying levels of distractor interference and need for attentional engagement. While earlier TMS boosted target selection, stimulation within a restricted time epoch close to (and in the course of) stimulus presentation engendered selective enhancement of distractor suppression, by affecting the ongoing, reactive instantiation of attentional filtering mechanisms required by specific task conditions. The results attest to a causal role of mid-tier ventral visual areas in distractor filtering and offer insights into the mechanisms through which TMS may have affected ongoing neural activity in the stimulated tissue. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Perceptual organization and visual attention.

    PubMed

    Kimchi, Ruth

    2009-01-01

    Perceptual organization--the processes structuring visual information into coherent units--and visual attention--the processes by which some visual information in a scene is selected--are crucial for the perception of our visual environment and to visuomotor behavior. Recent research points to important relations between attentional and organizational processes. Several studies demonstrated that perceptual organization constrains attentional selectivity, and other studies suggest that attention can also constrain perceptual organization. In this chapter I focus on two aspects of the relationship between perceptual organization and attention. The first addresses the question of whether or not perceptual organization can take place without attention. I present findings demonstrating that some forms of grouping and figure-ground segmentation can occur without attention, whereas others require controlled attentional processing, depending on the processes involved and the conditions prevailing for each process. These findings challenge the traditional view, which assumes that perceptual organization is a unitary entity that operates preattentively. The second issue addresses the question of whether perceptual organization can affect the automatic deployment of attention. I present findings showing that the mere organization of some elements in the visual field by Gestalt factors into a coherent perceptual unit (an "object"), with no abrupt onset or any other unique transient, can capture attention automatically in a stimulus-driven manner. Taken together, the findings discussed in this chapter demonstrate the multifaceted, interactive relations between perceptual organization and visual attention.

  19. Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.

    PubMed

    Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J

    2007-06-01

    The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.

  20. Arousal Rules: An Empirical Investigation into the Aesthetic Experience of Cross-Modal Perception with Emotional Visual Music

    PubMed Central

    Lee, Irene Eunyoung; Latchoumane, Charles-Francois V.; Jeong, Jaeseung

    2017-01-01

    Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible underlying mechanisms of affective evaluation in relation to the audio-visual integration condition between modalities (namely congruent, complementary, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (−) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted in subjects' emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects. PMID:28421007

  1. Arousal Rules: An Empirical Investigation into the Aesthetic Experience of Cross-Modal Perception with Emotional Visual Music.

    PubMed

    Lee, Irene Eunyoung; Latchoumane, Charles-Francois V; Jeong, Jaeseung

    2017-01-01

    Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible underlying mechanisms of affective evaluation in relation to the audio-visual integration condition between modalities (namely congruent, complementary, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (-) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted in subjects' emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects.
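    The frontal alpha asymmetry index used here as a valence proxy is conventionally computed as the log of right-frontal alpha power minus the log of left-frontal alpha power (e.g., electrodes F4 and F3); because alpha is inversely related to cortical activation, positive values are read as relatively greater left-frontal activation and hence more positive valence. A minimal sketch of the conventional index (the abstract does not give the authors' exact electrode montage):

    ```python
    import math

    def frontal_alpha_asymmetry(alpha_right, alpha_left):
        """Frontal alpha asymmetry: ln(right alpha power) - ln(left
        alpha power). Positive values indicate relatively greater
        left-frontal activation, conventionally associated with
        positive valence."""
        return math.log(alpha_right) - math.log(alpha_left)
    ```

    The log transform makes the index a ratio measure, so it is insensitive to overall (bilateral) changes in alpha power.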

  2. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.

    PubMed

    Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale

    2015-10-01

    Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched normal-hearing (NH) listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Effect of Cognitive Demand on Functional Visual Field Performance in Senior Drivers with Glaucoma

    PubMed Central

    Gangeddula, Viswa; Ranchet, Maud; Akinwuntan, Abiodun E.; Bollinger, Kathryn; Devos, Hannes

    2017-01-01

    Purpose: To investigate the effect of cognitive demand on functional visual field performance in drivers with glaucoma. Method: This study included 20 drivers with open-angle glaucoma and 13 age- and sex-matched controls. Visual field performance was evaluated under different degrees of cognitive demand: a static visual field condition (C1), dynamic visual field condition (C2), and dynamic visual field condition with active driving (C3) using an interactive, desktop driving simulator. The number of correct responses (accuracy) and response times on the visual field task were compared between groups and between conditions using Kruskal–Wallis tests. General linear models were employed to compare cognitive workload, recorded in real-time through pupillometry, between groups and conditions. Results: Adding cognitive demand (C2 and C3) to the static visual field test (C1) adversely affected accuracy and response times, in both groups (p < 0.05). However, drivers with glaucoma performed worse than did control drivers when the static condition changed to a dynamic condition [C2 vs. C1 accuracy; glaucoma: median difference (Q1–Q3) 3 (2–6.50) vs. controls: 2 (0.50–2.50); p = 0.05] and to a dynamic condition with active driving [C3 vs. C1 accuracy; glaucoma: 2 (2–6) vs. controls: 1 (0.50–2); p = 0.02]. Overall, drivers with glaucoma exhibited greater cognitive workload than controls (p = 0.02). Conclusion: Cognitive demand disproportionately affects functional visual field performance in drivers with glaucoma. Our results may inform the development of a performance-based visual field test for drivers with glaucoma. PMID:28912712

  4. Flexible Coding of Visual Working Memory Representations during Distraction.

    PubMed

    Lorenc, Elizabeth S; Sreenivasan, Kartik K; Nee, Derek E; Vandenbroucke, Annelinde R E; D'Esposito, Mark

    2018-06-06

    Visual working memory (VWM) recruits a broad network of brain regions, including prefrontal, parietal, and visual cortices. Recent evidence supports a "sensory recruitment" model of VWM, whereby precise visual details are maintained in the same stimulus-selective regions responsible for perception. A key question in evaluating the sensory recruitment model is how VWM representations persist through distracting visual input, given that the early visual areas that putatively represent VWM content are susceptible to interference from visual stimulation. To address this question, we used a functional magnetic resonance imaging inverted encoding model approach to quantitatively assess the effect of distractors on VWM representations in early visual cortex and the intraparietal sulcus (IPS), another region previously implicated in the storage of VWM information. This approach allowed us to reconstruct VWM representations for orientation, both before and after visual interference, and to examine whether oriented distractors systematically biased these representations. In our human participants (both male and female), we found that orientation information was maintained simultaneously in early visual areas and IPS in anticipation of possible distraction, and these representations persisted in the absence of distraction. Importantly, early visual representations were susceptible to interference; VWM orientations reconstructed from visual cortex were significantly biased toward distractors, corresponding to a small attractive bias in behavior. In contrast, IPS representations did not show such a bias. These results provide quantitative insight into the effect of interference on VWM representations, and they suggest a dynamic tradeoff between visual and parietal regions that allows flexible adaptation to task demands in service of VWM. 
SIGNIFICANCE STATEMENT Despite considerable evidence that stimulus-selective visual regions maintain precise visual information in working memory, it remains unclear how these representations persist through subsequent input. Here, we used quantitative model-based fMRI analyses to reconstruct the contents of working memory and examine the effects of distracting input. Although representations in the early visual areas were systematically biased by distractors, those in the intraparietal sulcus appeared distractor-resistant. In contrast, early visual representations were most reliable in the absence of distraction. These results demonstrate the dynamic, adaptive nature of visual working memory processes, and provide quantitative insight into the ways in which representations can be affected by interference. Further, they suggest that current models of working memory should be revised to incorporate this flexibility.
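    The inverted-encoding-model (IEM) reconstruction this record describes can be sketched in a few lines. The following Python/NumPy toy (simulated voxels; the channel count, tuning shape, and noise level are invented for illustration and are not the authors' pipeline) shows the two-stage logic: estimate channel-to-voxel weights on training data by least squares, then invert those weights to reconstruct a channel response, and hence an orientation, from a test activity pattern.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def channel_basis(oris, n_chan=6, power=5):
        """Idealized orientation channels: half-cosine tuning curves raised to a
        power, evenly tiling the 180-degree orientation space."""
        centers = np.arange(n_chan) * 180.0 / n_chan
        d = (oris[:, None] - centers[None, :] + 90.0) % 180.0 - 90.0  # circular distance
        return np.cos(np.deg2rad(d)) ** power  # shape (n_trials, n_chan)

    # --- simulate training data: each voxel is a random mixture of channels ---
    n_trials, n_vox, n_chan = 200, 50, 6
    oris_train = rng.uniform(0, 180, n_trials)
    C_train = channel_basis(oris_train, n_chan)              # (trials, channels)
    W_true = rng.random((n_chan, n_vox))                     # channel -> voxel weights
    B_train = C_train @ W_true + 0.05 * rng.standard_normal((n_trials, n_vox))

    # --- stage 1: least-squares estimate of the weights (B = C @ W) ---
    W_hat = np.linalg.pinv(C_train) @ B_train                # (channels, voxels)

    # --- stage 2: invert the model on a held-out activity pattern ---
    true_ori = 42.0
    B_test = channel_basis(np.array([true_ori]), n_chan) @ W_true
    C_test = B_test @ np.linalg.pinv(W_hat)                  # reconstructed channel response

    # Decode orientation as the circular mean of the channel centers, with angles
    # doubled because orientation wraps every 180 degrees.
    centers = np.arange(n_chan) * 180.0 / n_chan
    z = (C_test.clip(min=0) * np.exp(2j * np.deg2rad(centers))).sum()
    decoded = np.rad2deg(np.angle(z)) / 2 % 180              # close to true_ori
    ```

    In the study, a distractor-induced bias would appear as a systematic shift of the reconstructed orientation away from the remembered one.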

  5. The development of individuation in autism

    PubMed Central

    O'Hearn, Kirsten; Franconeri, Steven; Wright, Catherine; Minshew, Nancy; Luna, Beatriz

    2012-01-01

    Evidence suggests that people with autism use holistic information differently than typical adults. The current studies examine this possibility by investigating how core visual processes that contribute to holistic processing – individuation and element grouping – develop in participants with autism and typically developing (TD) participants matched for age, IQ and gender. Individuation refers to the ability to 'see' up to 4 elements simultaneously; grouping these elements can change the number of elements that are rapidly apprehended. We examined these core processes using two well-established paradigms, rapid enumeration and multiple object tracking (MOT). In both tasks, a performance limit of about 4 elements in adulthood is thought to reflect individuation capacity. Participants with autism had a smaller individuation capacity than TD controls, regardless of whether they were enumerating static elements or tracking moving ones. To manipulate holistic information and individuation performance, we grouped the elements into a design or had elements move together. Participants with autism were affected to a similar degree as TD participants by the holistic information, whether the manipulation helped or hurt performance, consistent with evidence that some types of gestalt/grouping information are processed typically in autism. There was substantial development in autism from childhood to adolescence, but not from adolescence to adulthood, a pattern distinct from TD participants. These results provide important information about core visual processes in autism, as well as insight into the architecture of vision (e.g., individuation appears distinct from visual strengths in autism, such as visual search, despite similarities). PMID:22963232

  6. Naturalistic distraction and driving safety in older drivers.

    PubMed

    Aksan, Nazan; Dawson, Jeffrey D; Emerson, Jamie L; Yu, Lixi; Uc, Ergun Y; Anderson, Steven W; Rizzo, Matthew

    2013-08-01

    In this study, we aimed to quantify and compare performance of middle-aged and older drivers during a naturalistic distraction paradigm (visual search for roadside targets) and to predict older drivers' performance given functioning in visual, motor, and cognitive domains. Distracted driving can imperil healthy adults and may disproportionally affect the safety of older drivers with visual, motor, and cognitive decline. A total of 203 drivers, 120 healthy older (61 men and 59 women, ages 65 years and older) and 83 middle-aged drivers (38 men and 45 women, ages 40 to 64 years), participated in an on-road test in an instrumented vehicle. Outcome measures included performance in roadside target identification (traffic signs and restaurants) and concurrent driver safety. Differences in visual, motor, and cognitive functioning served as predictors. Older drivers identified fewer landmarks and drove slower but committed more safety errors than did middle-aged drivers. Greater familiarity with local roads benefited performance of middle-aged but not older drivers. Visual cognition predicted both traffic sign identification and safety errors, and executive function predicted traffic sign identification over and above vision. Older adults are susceptible to driving safety errors while distracted by common secondary visual search tasks that are inherent to driving. The findings underscore that age-related cognitive decline affects older drivers' management of driving tasks at multiple levels and can help inform the design of on-road tests and interventions for older drivers.

  7. Assessing the Effect of Early Visual Cortex Transcranial Magnetic Stimulation on Working Memory Consolidation.

    PubMed

    van Lamsweerde, Amanda E; Johnson, Jeffrey S

    2017-07-01

    Maintaining visual working memory (VWM) representations recruits a network of brain regions, including the frontal, posterior parietal, and occipital cortices; however, it is unclear to what extent the occipital cortex is engaged in VWM after sensory encoding is completed. Noninvasive brain stimulation data show that stimulation of this region can affect working memory (WM) during the early consolidation time period, but it remains unclear whether it does so by influencing the number of items that are stored or their precision. In this study, we investigated whether single-pulse transcranial magnetic stimulation (spTMS) to the occipital cortex during VWM consolidation affects the quantity or quality of VWM representations. In three experiments, we disrupted VWM consolidation with either a visual mask or spTMS to retinotopic early visual cortex. We found robust masking effects on the quantity of VWM representations up to 200 msec poststimulus offset and smaller, more variable effects on WM quality. Similarly, spTMS decreased the quantity of VWM representations, but only when it was applied immediately following stimulus offset. Like visual masks, spTMS also produced small and variable effects on WM precision. The disruptive effects of both masks and TMS were greatly reduced or entirely absent within 200 msec of stimulus offset. However, there was a reduction in swap rate across all time intervals, which may indicate a sustained role of the early visual cortex in maintaining spatial information.

  8. Infants learn better from left to right: a directional bias in infants' sequence learning.

    PubMed

    Bulf, Hermann; de Hevia, Maria Dolores; Gariboldi, Valeria; Macchi Cassia, Viola

    2017-05-26

    A wealth of studies show that human adults map ordered information onto a directional spatial continuum. We asked whether mapping ordinal information into a directional space constitutes an early predisposition, already functional prior to the acquisition of symbolic knowledge and language. While it is known that preverbal infants represent numerical order along a left-to-right spatial continuum, no studies have yet investigated whether infants, like adults, organize any kind of ordinal information onto a directional space. We investigated whether 7-month-olds' ability to learn high-order rule-like patterns from visual sequences of geometric shapes was affected by the spatial orientation of the sequences (left-to-right vs. right-to-left). Results showed that infants readily learn rule-like patterns when visual sequences were presented from left to right, but not when presented from right to left. This result provides evidence that spatial orientation critically determines preverbal infants' ability to perceive and learn ordered information in visual sequences, opening the door to the idea that a left-to-right spatially organized mental representation of ordered dimensions might be rooted in biologically-determined constraints on human brain development.

  9. The role of peripheral vision in saccade planning: learning from people with tunnel vision.

    PubMed

    Luo, Gang; Vargas-Martin, Fernando; Peli, Eli

    2008-12-22

    Both visually salient and top-down information are important in eye movement control, but their relative roles in the planning of daily saccades are unclear. We investigated the effect of peripheral vision loss on saccadic behaviors in patients with tunnel vision (visual field diameters 7 degrees-16 degrees) in visual search and real-world walking experiments. The patients made up to two saccades per second to their pre-saccadic blind areas, about half of which had no overlap between the post- and pre-saccadic views. In the visual search experiment, visual field size and the background (blank or picture) did not affect the saccade sizes and direction of patients (n = 9). In the walking experiment, the patients (n = 5) and normal controls (n = 3) had similar distributions of saccade sizes and directions. These findings might provide a clue about the large extent to which top-down mechanisms influence eye movement control.

  10. Role of peripheral vision in saccade planning: Learning from people with tunnel vision

    PubMed Central

    Luo, Gang; Vargas-Martin, Fernando; Peli, Eli

    2008-01-01

    Both visually salient and top-down information are important in eye movement control, but their relative roles in the planning of daily saccades are unclear. We investigated the effect of peripheral vision loss on saccadic behaviors in patients with tunnel vision (visual field diameters 7°–16°) in visual search and real-world walking experiments. The patients made up to two saccades per second to their pre-saccadic blind areas, about half of which had no overlap between the post- and pre-saccadic views. In the visual search experiment, visual field size and the background (blank or picture) did not affect the saccade sizes and direction of patients (n=9). In the walking experiment, the patients (n=5) and normal controls (n=3) had similar distributions of saccade sizes and directions. These findings might provide a clue about the extent to which top-down mechanisms influence eye movement control. PMID:19146326

  11. Good vibrations: tactile feedback in support of attention allocation and human-automation coordination in event-driven domains.

    PubMed

    Sklar, A E; Sarter, N B

    1999-12-01

    Observed breakdowns in human-machine communication can be explained, in part, by the nature of current automation feedback, which relies heavily on focal visual attention. Such feedback is not well suited for capturing attention in case of unexpected changes and events or for supporting the parallel processing of large amounts of data in complex domains. As suggested by multiple-resource theory, one possible solution to this problem is to distribute information across various sensory modalities. A simulator study was conducted to compare the effectiveness of visual, tactile, and redundant visual and tactile cues for indicating unexpected changes in the status of an automated cockpit system. Both tactile conditions resulted in higher detection rates for, and faster response times to, uncommanded mode transitions. Tactile feedback did not interfere with, nor was its effectiveness affected by, the performance of concurrent visual tasks. The observed improvement in task-sharing performance indicates that the introduction of tactile feedback is a promising avenue toward better supporting human-machine communication in event-driven, information-rich domains.

  12. Relationship Between Auditory Context and Visual Distance Perception: Effect of Musical Expertise in the Ability to Translate Reverberation Cues Into Room-Size Perception.

    PubMed

    Etchemendy, Pablo E; Spiousas, Ignacio; Vergara, Ramiro

    2018-01-01

    In a recently published work by our group [Scientific Reports, 7, 7189 (2017)], we performed experiments of visual distance perception in two dark rooms with extremely different reverberation times: one anechoic (T ∼ 0.12 s) and the other reverberant (T ∼ 4 s). The perceived distance of the targets was systematically greater in the reverberant room than in the anechoic chamber. Participants also provided auditorily perceived room-size ratings which were greater for the reverberant room. Our hypothesis was that distance estimates are affected by room size, resulting in farther responses for the room perceived as larger. Of much importance to the task was the subjects' ability to infer room size from reverberation. In this article, we report a postanalysis showing that participants having musical expertise were better able to extract and translate reverberation cues into room-size information than nonmusicians. However, the degree to which musical expertise affects visual distance estimates remains unclear.

  13. Learning prosthetic vision: a virtual-reality study.

    PubMed

    Chen, Spencer C; Hallum, Luke E; Lovell, Nigel H; Suaning, Gregg J

    2005-09-01

    Acceptance of prosthetic vision will be heavily dependent on the ability of recipients to extract useful information from such vision. Training strategies to accelerate learning and maximize visual comprehension would need to be designed in the light of the factors affecting human learning under prosthetic vision. Some of these potential factors were examined in a visual acuity study using the Landolt C optotype under virtual-reality simulation of prosthetic vision. Fifteen normally sighted subjects were tested for 10-20 sessions. Potential learning factors were tested at p < 0.05 with regression models. Learning was most evident across sessions, though 17% of sessions did express significant within-session trends. Learning was highly concentrated toward a critical range of optotype sizes, and subjects were less capable in identifying the closed optotype (a Landolt C with no gap, forming a closed annulus). Training for implant recipients should target these critical sizes and the closed optotype to extend the limit of visual comprehension. Although there was no evidence that image processing affected overall learning, subjects showed varying personal preferences.

  14. Retention interval affects visual short-term memory encoding.

    PubMed

    Bankó, Eva M; Vidnyánszky, Zoltán

    2010-03-01

    Humans can efficiently store fine-detailed facial emotional information in visual short-term memory for several seconds. However, an unresolved question is whether the same neural mechanisms underlie high-fidelity short-term memory for emotional expressions at different retention intervals. Here we show that retention interval affects the neural processes of short-term memory encoding using a delayed facial emotion discrimination task. The early sensory P100 component of the event-related potentials (ERP) was larger in the 1-s interstimulus interval (ISI) condition than in the 6-s ISI condition, whereas the face-specific N170 component was larger in the longer ISI condition. Furthermore, the memory-related late P3b component of the ERP responses was also modulated by retention interval: it was reduced in the 1-s ISI as compared with the 6-s condition. The present findings cannot be explained based on differences in sensory processing demands or overall task difficulty because there was no difference in the stimulus information and subjects' performance between the two different ISI conditions. These results reveal that encoding processes underlying high-precision short-term memory for facial emotional expressions are modulated depending on whether information has to be stored for one or for several seconds.

  15. Visual long-term memory and change blindness: Different effects of pre- and post-change information on one-shot change detection using meaningless geometric objects.

    PubMed

    Nishiyama, Megumi; Kawaguchi, Jun

    2014-11-01

    To clarify the relationship between visual long-term memory (VLTM) and online visual processing, we investigated whether and how VLTM involuntarily affects the performance of a one-shot change detection task using images consisting of six meaningless geometric objects. In the study phase, participants observed pre-change (Experiment 1), post-change (Experiment 2), or both pre- and post-change (Experiment 3) images appearing in the subsequent change detection phase. In the change detection phase, one object always changed between pre- and post-change images and participants reported which object was changed. Results showed that VLTM of pre-change images enhanced the performance of change detection, while that of post-change images decreased accuracy. Prior exposure to both pre- and post-change images did not influence performance. These results indicate that pre-change information plays an important role in change detection, and that information in VLTM related to the current task does not always have a positive effect on performance.

  16. I see/hear what you mean: semantic activation in visual word recognition depends on perceptual attention.

    PubMed

    Connell, Louise; Lynott, Dermot

    2014-04-01

    How does the meaning of a word affect how quickly we can recognize it? Accounts of visual word recognition allow semantic information to facilitate performance but have neglected the role of modality-specific perceptual attention in activating meaning. We predicted that modality-specific semantic information would differentially facilitate lexical decision and reading aloud, depending on how perceptual attention is implicitly directed by each task. Large-scale regression analyses showed the perceptual modalities involved in representing a word's referent concept influence how easily that word is recognized. Both lexical decision and reading-aloud tasks direct attention toward vision, and are faster and more accurate for strongly visual words. Reading aloud additionally directs attention toward audition and is faster and more accurate for strongly auditory words. Furthermore, the overall semantic effects are as large for reading aloud as lexical decision and are separable from age-of-acquisition effects. These findings suggest that implicitly directing perceptual attention toward a particular modality facilitates representing modality-specific perceptual information in the meaning of a word, which in turn contributes to the lexical decision or reading-aloud response.

  17. Primary Visual Cortex Represents the Difference Between Past and Present

    PubMed Central

    Nortmann, Nora; Rekauzke, Sascha; Onat, Selim; König, Peter; Jancke, Dirk

    2015-01-01

    The visual system is confronted with rapidly changing stimuli in everyday life. It is not well understood how information in such a stream of input is updated within the brain. We performed voltage-sensitive dye imaging across the primary visual cortex (V1) to capture responses to sequences of natural scene contours. We presented vertically and horizontally filtered natural images, and their superpositions, at 10 or 33 Hz. At low frequency, the encoding was found to represent not the currently presented images, but differences in orientation between consecutive images. This was in sharp contrast to more rapid sequences for which we found an ongoing representation of current input, consistent with earlier studies. Our finding that, for slower image sequences, V1 no longer reports actual features but instead represents their relative difference over time counteracts the view that the first cortical processing stage must always transfer complete information. Instead, we show its capacities for change detection with a new emphasis on the role of automatic computation evolving in the 100-ms range, inevitably affecting information transmission further downstream. PMID:24343889

  18. Indoor Spatial Updating with Reduced Visual Information

    PubMed Central

    Legge, Gordon E.; Gage, Rachel; Baek, Yihwa; Bochsler, Tiana M.

    2016-01-01

    Purpose: Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Methods: Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. Results: With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. Discussion: If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment. PMID:26943674

  19. Indoor Spatial Updating with Reduced Visual Information.

    PubMed

    Legge, Gordon E; Gage, Rachel; Baek, Yihwa; Bochsler, Tiana M

    2016-01-01

    Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment.

  20. To what extent do Gestalt grouping principles influence tactile perception?

    PubMed

    Gallace, Alberto; Spence, Charles

    2011-07-01

    Since their formulation by the Gestalt movement more than a century ago, the principles of perceptual grouping have primarily been investigated in the visual modality and, to a lesser extent, in the auditory modality. The present review addresses the question of whether the same grouping principles also affect the perception of tactile stimuli. Although, to date, only a few studies have explicitly investigated the existence of Gestalt grouping principles in the tactile modality, we argue that many more studies have indirectly provided evidence relevant to this topic. Reviewing this body of research, we argue that similar principles to those reported previously in visual and auditory studies also govern the perceptual grouping of tactile stimuli. In particular, we highlight evidence showing that the principles of proximity, similarity, common fate, good continuation, and closure affect tactile perception in both unimodal and crossmodal settings. We also highlight that the grouping of tactile stimuli is often affected by visual and auditory information that happens to be presented simultaneously. Finally, we discuss the theoretical and applied benefits that might pertain to the further study of Gestalt principles operating in both unisensory and multisensory tactile perception.

  1. Different effects of executive and visuospatial working memory on visual consciousness.

    PubMed

    De Loof, Esther; Poppe, Louise; Cleeremans, Axel; Gevers, Wim; Van Opstal, Filip

    2015-11-01

    Consciousness and working memory are two widely studied cognitive phenomena. Although they have been closely tied on a theoretical and neural level, empirical work that investigates their relation is largely lacking. In this study, the relationship between visual consciousness and different working memory components is investigated by using a dual-task paradigm. More specifically, while participants were performing a visual detection task to measure their visual awareness threshold, they had to concurrently perform either an executive or visuospatial working memory task. We hypothesized that visual consciousness would be hindered depending on the type and the size of the load in working memory. Results showed that maintaining visuospatial content in working memory hinders visual awareness, irrespective of the amount of information maintained. By contrast, the detection threshold was progressively affected under increasing executive load. Interestingly, increasing executive load had a generic effect on detection speed, calling into question whether its obstructing effect is specific to the visual awareness threshold. Together, these results indicate that visual consciousness depends differently on executive and visuospatial working memory.

  2. Evaluation of angiogram visualization methods for fast and reliable aneurysm diagnosis

    NASA Astrophysics Data System (ADS)

    Lesar, Žiga; Bohak, Ciril; Marolt, Matija

    2015-03-01

    In this paper we present the results of an evaluation of different visualization methods for angiogram volumetric data: ray casting, marching cubes, and multi-level partition of unity implicits. There are several options available with ray casting: isosurface extraction, maximum intensity projection and alpha compositing, each producing fundamentally different results. Different visualization methods are suitable for different needs, so this choice is crucial in diagnosis and decision making processes. We also evaluate visual effects such as ambient occlusion, screen space ambient occlusion, and depth of field. Some visualization methods include transparency, so we address the question of relevancy of this additional visual information. We employ transfer functions to map data values to color and transparency, allowing us to view or hide particular tissues. All the methods presented in this paper were developed using OpenCL, striving for real-time rendering and quality interaction. An evaluation has been conducted to assess the suitability of the visualization methods. Results show superiority of isosurface extraction with ambient occlusion effects. Visual effects may positively or negatively affect perception of depth, motion, and relative positions in space.
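    As a rough illustration of the transfer-function and alpha-compositing machinery this abstract refers to, here is a minimal Python/NumPy sketch (the authors' implementation is in OpenCL; the control points below are invented for illustration, not taken from the paper):

    ```python
    import numpy as np

    # Hypothetical control points: (normalized intensity, (r, g, b, a)).
    # A real angiogram pipeline would tune these to the scanner's value range.
    points = [
        (0.00, (0.0, 0.0, 0.0, 0.0)),   # air / background: fully transparent
        (0.30, (0.8, 0.4, 0.3, 0.05)),  # soft tissue: barely visible
        (0.60, (0.9, 0.1, 0.1, 0.6)),   # contrast-filled vessels: opaque red
        (1.00, (1.0, 1.0, 1.0, 1.0)),   # densest structures: solid white
    ]

    def transfer(v):
        """Piecewise-linear transfer function: scalar in [0, 1] -> RGBA."""
        xs = np.array([p[0] for p in points])
        cs = np.array([p[1] for p in points])
        return np.array([np.interp(v, xs, cs[:, k]) for k in range(4)])

    def composite(samples):
        """Front-to-back alpha compositing along one ray of intensity samples."""
        color, alpha = np.zeros(3), 0.0
        for v in samples:
            r, g, b, a = transfer(v)
            color += (1 - alpha) * a * np.array([r, g, b])
            alpha += (1 - alpha) * a
            if alpha > 0.99:  # early ray termination: ray is effectively opaque
                break
        return color, alpha
    ```

    Hiding a tissue class then amounts to dropping its opacity to zero in the control points, which is the "view or hide particular tissues" behavior the abstract describes.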

  3. Information processing capacity while wearing personal protective eyewear.

    PubMed

    Wade, Chip; Davis, Jerry; Marzilli, Thomas S; Weimar, Wendi H

    2006-08-15

    It is difficult to overemphasize the function vision plays in information processing, specifically in maintaining postural control. Vision appears to be an immediate, effortless event; suggesting that eyes need only to be open to employ the visual information provided by the environment. This study is focused on investigating the effect of Occupational Safety and Health Administration regulated personal protective eyewear (29 CFR 1910.133) on physiological and cognitive factors associated with information processing capabilities. Twenty-one college students between the ages of 19 and 25 years were randomly tested in each of three eyewear conditions (control, new and artificially aged) on an inclined and horizontal support surface for auditory and visual stimulus reaction time. Data collection trials consisted of 50 randomly selected (25 auditory, 25 visual) stimuli over a 10-min surface-eyewear condition trial. Auditory stimulus reaction time was significantly affected by the surface by eyewear interaction (F(2,40) = 7.4; p < 0.05). Similarly, analysis revealed a significant surface by eyewear interaction in reaction time following the visual stimulus (F(2,40) = 21.7; p < 0.05). The current findings do not trivialize the importance of personal protective eyewear usage in an occupational setting; rather, they suggest the value of future research focused on the effect that personal protective eyewear has on the physiological, cognitive and biomechanical contributions to postural control. These findings suggest that while personal protective eyewear may serve to protect an individual from eye injury, an individual's use of such personal protective eyewear may have deleterious effects on sensory information associated with information processing and postural control.

  4. Focus of Attention and Choice of Text Modality in Multimedia Learning

    ERIC Educational Resources Information Center

    Schnotz, Wolfgang; Mengelkamp, Christoph; Baadte, Christiane; Hauck, Georg

    2014-01-01

    The term "modality effect" in multimedia learning means that students learn better from pictures combined with spoken rather than written text. The most prominent explanations refer to the split attention between visual text reading and picture observation which could affect transfer of information into working memory, maintenance of…

  5. Multisensory Integration Affects Visuo-Spatial Working Memory

    ERIC Educational Resources Information Center

    Botta, Fabiano; Santangelo, Valerio; Raffone, Antonino; Sanabria, Daniel; Lupianez, Juan; Belardinelli, Marta Olivetti

    2011-01-01

    In the present study, we investigate how spatial attention, driven by unisensory and multisensory cues, can bias the access of information into visuo-spatial working memory (VSWM). In a series of four experiments, we compared the effectiveness of spatially-nonpredictive visual, auditory, or audiovisual cues in capturing participants' spatial…

  6. An Analysis of Organizational Approaches to Online Course Structures

    ERIC Educational Resources Information Center

    Lee, Cheng-Yuan; Dickerson, Jeremy; Winslow, Joe

    2012-01-01

    The structure of an online course, including the navigational interface, visual design of materials and information, as well as the communication tools to facilitate learning, can affect students, instructors, programs and educational organizations in various ways. This paper examines online course structural issues derived from previous research…

  7. Aesthetics, Usefulness and Performance in User--Search-Engine Interaction

    ERIC Educational Resources Information Center

    Katz, Adi

    2010-01-01

    Issues of visual appeal have become an integral part of designing interactive systems. Interface aesthetics may form users' attitudes towards computer applications and information technology. Aesthetics can affect user satisfaction, and influence their willingness to buy or adopt a system. This study follows previous studies that found that users…

  8. Retinal ganglion cells in diabetes

    PubMed Central

    Kern, Timothy S; Barber, Alistair J

    2008-01-01

    Diabetic retinopathy has long been recognized as a vascular disease that develops in most patients, and it was believed that the visual dysfunction that develops in some diabetics was due to the vascular lesions used to characterize the disease. It is becoming increasingly clear that neuronal cells of the retina also are affected by diabetes, resulting in dysfunction and even degeneration of some neuronal cells. Retinal ganglion cells (RGCs) are the best studied of the retinal neurons with respect to the effect of diabetes. Although investigations are providing new information about RGCs in diabetes, including therapies to inhibit the neurodegeneration, critical information about the function, anatomy and response properties of these cells is still needed to understand the relationship between RGC changes and visual dysfunction in diabetes. PMID:18565995

  9. The perception of surface layout during low level flight

    NASA Technical Reports Server (NTRS)

    Perrone, John A.

    1991-01-01

    Although it is fairly well established that information about surface layout can be gained from motion cues, it is less clear what information humans can use and what specific information they should be provided. Theoretical analyses tell us that the information is in the stimulus. It will take more experiments to verify that this information can be used by humans to extract surface layout from the 2D velocity flow field. The visual motion factors that can affect the pilot's ability to control an aircraft and to infer the layout of the terrain ahead are discussed.

  10. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067
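    The core computation in such a matching task can be illustrated with a toy model (not the authors' paradigm or analysis): correlate two event streams at a range of candidate lags and pick the lag that best aligns them. A minimal pure-Python sketch with made-up stochastic streams and an assumed 3-bin audio-visual delay:

```python
import random

def lagged_match(a, v, max_lag):
    """Correlate two event streams at candidate lags and return the lag
    (in time bins) at which stream v best matches stream a."""
    def score(lag):
        # Pair a[t] with v[t + lag] over the overlapping region.
        pairs = [(a[t], v[t + lag]) for t in range(len(a)) if 0 <= t + lag < len(v)]
        n = len(pairs)
        ma = sum(x for x, _ in pairs) / n
        mv = sum(y for _, y in pairs) / n
        # Sample covariance: high when the streams share temporal structure.
        return sum((x - ma) * (y - mv) for x, y in pairs) / n
    return max(range(-max_lag, max_lag + 1), key=score)

random.seed(2)
# A stochastic (irregular) auditory event stream, and a visual stream that is
# a copy of it delayed by 3 time bins.
audio = [1 if random.random() < 0.3 else 0 for _ in range(200)]
visual = [0, 0, 0] + audio[:-3]
best = lagged_match(audio, visual, max_lag=10)
```

    With an exact 3-bin delay the covariance peaks at lag 3. An irregular stream like this one produces a sharp, unambiguous peak, which is one intuition for the paper's finding that richer temporal patterning aids correspondence detection.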

  11. Re-entrant Projections Modulate Visual Cortex in Affective Perception: Evidence From Granger Causality Analysis

    PubMed Central

    Keil, Andreas; Sabatinelli, Dean; Ding, Mingzhou; Lang, Peter J.; Ihssen, Niklas; Heim, Sabine

    2013-01-01

    Re-entrant modulation of visual cortex has been suggested as a critical process for enhancing perception of emotionally arousing visual stimuli. This study explores how the time information inherent in large-scale electrocortical measures can be used to examine the functional relationships among the structures involved in emotional perception. Granger causality analysis was conducted on steady-state visual evoked potentials elicited by emotionally arousing pictures flickering at a rate of 10 Hz. This procedure allows one to examine the direction of neural connections. Participants viewed pictures that varied in emotional content, depicting people in neutral contexts, erotica, or interpersonal attack scenes. Results demonstrated increased coupling between visual and cortical areas when viewing emotionally arousing content. Specifically, intraparietal to inferotemporal and precuneus to calcarine connections were stronger for emotionally arousing picture content. Thus, we provide evidence for re-entrant signal flow during emotional perception, which originates from higher tiers and enters lower tiers of visual cortex. PMID:18095279
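    Granger causality, the method named in this record, asks whether the past of one signal improves prediction of another beyond that signal's own past. The study applied it to steady-state visual evoked potentials with dedicated multivariate tools; the following is only a minimal lag-1 sketch in pure Python, with invented series and thresholds:

```python
import random

def ols_residual_variance(X, y):
    """Fit y = X b by ordinary least squares (normal equations solved by
    Gaussian elimination) and return the residual variance."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    c = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for j in range(col, k):
                A[r][j] -= f * A[col][j]
            c[r] -= f * c[col]
    b = [0.0] * k
    for i in reversed(range(k)):  # back-substitution
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    resid = [yi - sum(bi * xi for bi, xi in zip(b, row)) for row, yi in zip(X, y)]
    return sum(e * e for e in resid) / len(resid)

def granger_ratio(x, y):
    """Lag-1 sketch: restricted/full residual-variance ratio. Values well
    above 1 suggest x's past helps predict y, i.e. x Granger-causes y."""
    restricted = [[1.0, y[t - 1]] for t in range(1, len(y))]
    full = [[1.0, y[t - 1], x[t - 1]] for t in range(1, len(y))]
    target = y[1:]
    return ols_residual_variance(restricted, target) / ols_residual_variance(full, target)

random.seed(0)
x = [random.gauss(0, 1) for _ in range(500)]
# y is driven by x at lag 1, so x should Granger-cause y but not vice versa.
y = [0.0] + [0.9 * x[t - 1] + 0.1 * random.gauss(0, 1) for t in range(1, 500)]
```

    With this coupling, granger_ratio(x, y) comes out far above 1 while granger_ratio(y, x) stays near 1, capturing the directionality that the paper exploits to infer re-entrant signal flow.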

  12. Implications of differences of echoic and iconic memory for the design of multimodal displays

    NASA Astrophysics Data System (ADS)

    Glaser, Daniel Shields

    It has been well documented that dual-task performance is more accurate when each task is based on a different sensory modality. It is also well documented that the memory for each sense has unequal durations, particularly visual (iconic) and auditory (echoic) sensory memory. In this dissertation I address whether differences in sensory memory (e.g. iconic vs. echoic) duration have implications for the design of a multimodal display. Since echoic memory persists for seconds, in contrast to iconic memory, which persists only for milliseconds, one of my hypotheses was that in a visual-auditory dual task condition, performance will be better if the visual task is completed before the auditory task than vice versa. In Experiment 1 I investigated whether the ability to recall multi-modal stimuli is affected by recall order, with each mode being responded to separately. In Experiment 2, I investigated the effects of stimulus order and recall order on the ability to recall information from a multi-modal presentation. In Experiment 3 I investigated the effect of presentation order using a more realistic task. In Experiment 4 I investigated whether manipulating the presentation order of stimuli of different modalities improves humans' ability to combine the information from the two modalities to make decisions based on pre-learned rules. As hypothesized, accuracy was greater when visual stimuli were responded to first and auditory stimuli second. Also as hypothesized, performance was improved by not presenting both sequences at the same time, limiting the perceptual load. Contrary to my expectations, overall performance was better when a visual sequence was presented before the audio sequence. Though presenting a visual sequence prior to an auditory sequence lengthens the visual retention interval, it also provides time for visual information to be recoded to a more robust form without disruption. 
Experiment 4 demonstrated that decision making requiring the integration of visual and auditory information is enhanced by reducing workload and promoting a strategic use of echoic memory. A framework for predicting Experiment 1-4 results is proposed and evaluated.

  13. The Puzzle of Visual Development: Behavior and Neural Limits.

    PubMed

    Kiorpes, Lynne

    2016-11-09

    The development of visual function takes place over many months or years in primate infants. Visual sensitivity is very poor near birth and improves over different time courses for different visual functions. The neural mechanisms that underlie these processes are not well understood despite many decades of research. The puzzle arises because research into the factors that limit visual function in infants has found surprisingly mature neural organization and adult-like receptive field properties in very young infants. The high degree of visual plasticity that has been documented during the sensitive period in young children and animals leaves the brain vulnerable to abnormal visual experience. Abnormal visual experience during the sensitive period can lead to amblyopia, a developmental disorder of vision affecting ∼3% of children. This review provides a historical perspective on research into visual development and the disorder amblyopia. The mismatch between the status of the primary visual cortex and visual behavior, both during visual development and in amblyopia, is discussed, and several potential resolutions are considered. It seems likely that extrastriate visual areas further along the visual pathways may set important limits on visual function and show greater vulnerability to abnormal visual experience. Analyses based on multiunit, population activity may provide useful representations of the information being fed forward from primary visual cortex to extrastriate processing areas and to the motor output. Copyright © 2016 the authors.

  14. Multimodal cues provide redundant information for bumblebees when the stimulus is visually salient, but facilitate red target detection in a naturalistic background

    PubMed Central

    Corcobado, Guadalupe; Trillo, Alejandro

    2017-01-01

    Our understanding of how floral visitors integrate visual and olfactory cues when seeking food, and how background complexity affects flower detection, is limited. Here, we aimed to understand the use of visual and olfactory information by bumblebees (Bombus terrestris terrestris L.) when seeking flowers in a visually complex background. To explore this issue, we first evaluated the effect of flower colour (red and blue), size (8, 16 and 32 mm), scent (presence or absence) and the amount of training on the foraging strategy of bumblebees (accuracy, search time and flight behaviour) in this visually complex background, and then explored whether experienced bumblebees, previously trained in the presence of scent, can recall and make use of odour information when foraging in the presence of novel visual stimuli carrying a familiar scent. Of all the variables analysed, flower colour had the strongest effect on the foraging strategy. Bumblebees searching for blue flowers were more accurate, flew faster, followed more direct paths between flowers and needed less time to find them, than bumblebees searching for red flowers. In turn, training and the presence of odour helped bees to find inconspicuous (red) flowers. When bees foraged on red flowers, search time increased with flower size; but search time was independent of flower size when bees foraged on blue flowers. Previous experience with floral scent enhances the capacity to detect a novel colour carrying a familiar scent, probably by elemental association influencing attention. PMID:28898287

  15. Spinal cord injury affects the interplay between visual and sensorimotor representations of the body

    PubMed Central

    Ionta, Silvio; Villiger, Michael; Jutzeler, Catherine R; Freund, Patrick; Curt, Armin; Gassert, Roger

    2016-01-01

    The brain integrates multiple sensory inputs, including somatosensory and visual inputs, to produce a representation of the body. Spinal cord injury (SCI) interrupts the communication between brain and body and the effects of this deafferentation on body representation are poorly understood. We investigated whether the relative weight of somatosensory and visual frames of reference for body representation is altered in individuals with incomplete or complete SCI (affecting lower limbs’ somatosensation), with respect to controls. To study the influence of afferent somatosensory information on body representation, participants verbally judged the laterality of rotated images of feet, hands, and whole-bodies (mental rotation task) in two different postures (participants’ body parts were hidden from view). We found that (i) complete SCI disrupts the influence of postural changes on the representation of the deafferented body parts (feet, but not hands) and (ii) regardless of posture, whole-body representation progressively deteriorates proportionally to SCI completeness. These results demonstrate that the cortical representation of the body is dynamic, responsive, and adaptable to contingent conditions, in that the role of somatosensation is altered and partially compensated with a change in the relative weight of somatosensory versus visual bodily representations. PMID:26842303

  16. Transcranial Electrical Stimulation over Dorsolateral Prefrontal Cortex Modulates Processing of Social Cognitive and Affective Information.

    PubMed

    Conson, Massimiliano; Errico, Domenico; Mazzarella, Elisabetta; Giordano, Marianna; Grossi, Dario; Trojano, Luigi

    2015-01-01

    Recent neurofunctional studies suggested that lateral prefrontal cortex is a domain-general cognitive control area modulating computation of social information. Neuropsychological evidence reported dissociations between cognitive and affective components of social cognition. Here, we tested whether performance on social cognitive and affective tasks can be modulated by transcranial direct current stimulation (tDCS) over dorsolateral prefrontal cortex (DLPFC). To this aim, we compared the effects of tDCS on explicit recognition of emotional facial expressions (affective task), and on one cognitive task assessing the ability to adopt another person's visual perspective. In a randomized, cross-over design, male and female healthy participants performed the two experimental tasks after bi-hemispheric tDCS (sham, left anodal/right cathodal, and right anodal/left cathodal) applied over DLPFC. Results showed that only in male participants was explicit recognition of fearful facial expressions significantly faster after anodal right/cathodal left stimulation with respect to anodal left/cathodal right and sham stimulations. In the visual perspective taking task, instead, anodal right/cathodal left stimulation negatively affected both male and female participants' tendency to adopt another's point of view. These findings demonstrated that concurrent facilitation of right and inhibition of left lateral prefrontal cortex can speed up males' responses to threatening faces whereas it interferes with the ability to adopt another's viewpoint independently of gender. Thus, stimulation of cognitive control areas can lead to different effects on social cognitive skills depending on the affective vs. cognitive nature of the task, and on the gender-related differences in neural organization of emotion processing.

  17. Differential effect of visual masking in perceptual categorization.

    PubMed

    Hélie, Sébastien; Cousineau, Denis

    2015-06-01

    This article explores the visual information used to categorize stimuli drawn from a common stimulus space into verbal and nonverbal categories using 2 experiments. Experiment 1 explores the effect of target duration on verbal and nonverbal categorization using backward masking to interrupt visual processing. With categories equated for difficulty for long and short target durations, intermediate target duration shows an advantage for verbal categorization over nonverbal categorization. Experiment 2 tests whether the results of Experiment 1 can be explained by shorter target duration resulting in a smaller signal-to-noise ratio of the categorization stimulus. To test for this possibility, Experiment 2 used integration masking with the same stimuli, categories, and masks as Experiment 1 with a varying level of mask opacity. As predicted, low mask opacity yielded similar results to long target duration while high mask opacity yielded similar results to short target duration. Importantly, intermediate mask opacity produced an advantage for verbal categorization over nonverbal categorization, similar to intermediate target duration. These results suggest that verbal and nonverbal categorization are affected differently by manipulations affecting the signal-to-noise ratio of the stimulus, consistent with multiple-system theories of categorization. The results further suggest that verbal categorization may be more digital (and more robust to low signal-to-noise ratio) while the information used in nonverbal categorization may be more analog (and less robust to lower signal-to-noise ratio). This article concludes with a discussion of how these new results affect the use of masking in perceptual categorization and multiple-system theories of perceptual category learning. (c) 2015 APA, all rights reserved.

  18. Visual Presentation Effects on Identification of Multiple Environmental Sounds

    PubMed Central

    Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio

    2016-01-01

    This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. 
Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478

  19. Words, shape, visual search and visual working memory in 3-year-old children.

    PubMed

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.

  20. Disaster Emergency Rapid Assessment Based on Remote Sensing and Background Data

    NASA Astrophysics Data System (ADS)

    Han, X.; Wu, J.

    2018-04-01

    The period from disaster onset to stable conditions is an important stage of disaster development. In addition to collecting and reporting information on disaster situations, remote sensing images from satellites and drones and monitoring results from disaster-stricken areas should be obtained. Fusing multi-source background data, such as population, geography and topography, with remote sensing monitoring information in geographic information system analysis makes it possible to assess disaster information quickly and objectively. According to the characteristics of different hazards, models and methods driven by the requirements of rapid assessment are tested and screened. Based on remote sensing images, the features of exposed assets are used to quickly determine disaster-affected areas and intensity levels, to extract key information about affected hospitals and schools as well as cultivated land and crops, and to support decision-making with visual assessment results after the emergency response.

  1. Multiplicative and additive modulation of neuronal tuning with population activity affects encoded information

    PubMed Central

    Arandia-Romero, Iñigo; Tanabe, Seiji; Drugowitsch, Jan; Kohn, Adam; Moreno-Bote, Rubén

    2016-01-01

    Numerous studies have shown that neuronal responses are modulated by stimulus properties, and also by the state of the local network. However, little is known about how activity fluctuations of neuronal populations modulate the sensory tuning of cells and affect their encoded information. We found that fluctuations in ongoing and stimulus-evoked population activity in primate visual cortex modulate the tuning of neurons in a multiplicative and additive manner. While distributed on a continuum, neurons with stronger multiplicative effects tended to have less additive modulation, and vice versa. The information encoded by multiplicatively-modulated neurons increased with greater population activity, while that of additively-modulated neurons decreased. These effects offset each other, so that population activity had little effect on total information. Our results thus suggest that intrinsic activity fluctuations may act as a 'traffic light' that determines which subset of neurons is most informative. PMID:26924437
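    The modulation scheme this record describes can be written as r(s, t) = g(t)·f(s) + b(t): a multiplicative gain g rescales the tuning curve f, while an additive offset b shifts it. A small illustrative sketch (the Gaussian tuning curve and all parameter values here are assumptions, not the paper's fits):

```python
import math

def tuning(stim, pref, width=0.5, amp=10.0):
    """Baseline Gaussian tuning curve f(s): firing rate vs. stimulus value."""
    return amp * math.exp(-((stim - pref) ** 2) / (2 * width ** 2))

def modulated_rate(stim, pref, gain, offset):
    """Multiplicative + additive modulation: r(s) = gain * f(s) + offset."""
    return gain * tuning(stim, pref) + offset

stims = [i / 10 for i in range(-20, 21)]
base = [modulated_rate(s, 0.0, 1.0, 0.0) for s in stims]   # unmodulated
mult = [modulated_rate(s, 0.0, 2.0, 0.0) for s in stims]   # gain doubled
add = [modulated_rate(s, 0.0, 1.0, 3.0) for s in stims]    # baseline raised
```

    The gain term doubles the curve's peak (deepening the tuning), while the offset lifts peak and baseline equally, leaving tuning depth unchanged: a toy version of the multiplicative vs. additive distinction the abstract draws.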

  2. Visual affective classification by combining visual and text features.

    PubMed

    Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming

    2017-01-01

    Affective analysis of images in social networks has drawn much attention, and the texts surrounding images are proven to provide valuable semantic meanings about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short text associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task.
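    Dempster-Shafer combination, the fusion rule named here, merges two mass functions by multiplying the masses of intersecting hypotheses and renormalizing by the non-conflicting mass. A minimal sketch with invented classifier outputs (the paper's actual feature-level masses differ):

```python
def combine(m1, m2):
    """Dempster's rule of combination for two mass functions over a frame of
    discernment. Masses are dicts mapping frozenset (hypothesis) -> mass."""
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    # Renormalize by the non-conflicting mass 1 - K.
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Toy fusion of a 'visual' and a 'textual' classifier over two affect classes.
POS, NEG = frozenset({"positive"}), frozenset({"negative"})
BOTH = POS | NEG  # mass on the whole frame expresses ignorance
visual = {POS: 0.6, BOTH: 0.4}
textual = {POS: 0.5, NEG: 0.3, BOTH: 0.2}
fused = combine(visual, textual)
```

    Both sources lean positive, so the rule concentrates mass on POS (0.62/0.82 of it after discounting the 0.18 of conflicting mass), which is the agreement-amplifying behaviour that makes D-S attractive for multi-modal fusion.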

  3. Visual affective classification by combining visual and text features

    PubMed Central

    Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming

    2017-01-01

    Affective analysis of images in social networks has drawn much attention, and the texts surrounding images are proven to provide valuable semantic meanings about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short text associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task. PMID:28850566

  4. Iconic Memories Die a Sudden Death.

    PubMed

    Pratte, Michael S

    2018-06-01

    Iconic memory is characterized by its large storage capacity and brief storage duration, whereas visual working memory is characterized by its small storage capacity. The limited information stored in working memory is often modeled as an all-or-none process in which studied information is either successfully stored or lost completely. This view raises a simple question: If almost all viewed information is stored in iconic memory, yet one second later most of it is completely absent from working memory, what happened to it? Here, I characterized how the precision and capacity of iconic memory changed over time and observed a clear dissociation: Iconic memory suffered from a complete loss of visual items, while the precision of items retained in memory was only marginally affected by the passage of time. These results provide new evidence for the discrete-capacity view of working memory and a new characterization of iconic memory decay.
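    The all-or-none storage view mentioned here is often formalized as a mixture model: with some probability the item is in memory and reported with Gaussian precision, otherwise the response is a uniform guess over the feature space. A simulation sketch under assumed parameters (not the paper's data), where a longer delay lowers the retention probability while precision stays fixed:

```python
import random

def report_error(p_mem, sd=5.0, guess_range=90.0):
    """All-or-none ('discrete capacity') generative model: with probability
    p_mem the item is in memory and the report error is small Gaussian noise;
    otherwise the observer guesses uniformly over the feature space."""
    if random.random() < p_mem:
        return random.gauss(0.0, sd)               # precise, stored item
    return random.uniform(-guess_range, guess_range)  # pure guess

random.seed(1)
# Sudden-death account: delay drops items from memory entirely (p_mem falls)
# rather than blurring them (sd is unchanged across conditions).
short_delay = [abs(report_error(0.9)) for _ in range(2000)]
long_delay = [abs(report_error(0.3)) for _ in range(2000)]
mean_short = sum(short_delay) / len(short_delay)
mean_long = sum(long_delay) / len(long_delay)
```

    Mean absolute error grows with delay purely because guesses become more frequent, which is the dissociation (lost items vs. preserved precision) the abstract reports.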

  5. Effect of study context on item recollection.

    PubMed

    Skinner, Erin I; Fernandes, Myra A

    2010-07-01

    We examined how visual context information provided during encoding, and unrelated to the target word, affected later recollection for words presented alone using a remember-know paradigm. Experiments 1A and 1B showed that participants had better overall memory, specifically recollection, for words studied with pictures of intact faces than for words studied with pictures of scrambled or inverted faces. Experiment 2 replicated these results and showed that recollection was higher for words studied with pictures of faces than when no image accompanied the study word. In Experiment 3 participants showed equivalent memory for words studied with unique faces as for those studied with a repeatedly presented face. Results suggest that recollection benefits when visual context information high in meaningful content accompanies study words and that this benefit is not related to the uniqueness of the context. We suggest that participants use elaborative processes to integrate item and meaningful contexts into ensemble information, improving subsequent item recollection.

  6. Semantic bifurcated importance field visualization

    NASA Astrophysics Data System (ADS)

    Lindahl, Eric; Petrov, Plamen

    2007-04-01

    While there are many good ways to map sensed reality onto two-dimensional displays, mapping non-physical and possibilistic information can be challenging. The advent of faster-than-real-time systems allows predictive, possibilistic exploration of important factors that can affect the decision maker. Visualizing a compressed picture of past and possible factors can assist the decision maker by summarizing information in a cognition-based model, thereby reducing clutter and perhaps decision times. Our proposed semantic bifurcated importance field visualization (SBIFV) uses saccadic eye motion models to partition the display into possibilistic and sensed data vertically, and spatial and semantic data horizontally. Saccadic eye movement precedes and prepares decision makers for nearly every directed action. Cognitive models of saccadic eye movement show that people prefer lateral to vertical saccades, and studies have suggested that saccades may be coupled to momentary problem-solving strategies. Moreover, the central 1.5 degrees of the visual field has roughly 100 times greater resolution than the peripheral field, so concentrating factors there can reduce unnecessary saccades. By packing information according to saccadic models, we can relate important decision factors, reduce factor dimensionality, and present dense summary dimensions of semantics and importance. Inter- and intra-saccade ballistics of the SBIFV provide important clues about how semantic packing assists decision making. Future directions for SBIFV are to make the visualization reactive and conformal to saccades, specializing targets to ballistics, such as dynamically filtering and highlighting verbal targets for left saccades and spatial targets for right saccades.

  7. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. 
European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  8. Are visual peripheries forever young?

    PubMed

    Burnat, Kalina

    2015-01-01

    The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimulus features and redirects foveal attention to new objects, it can also take over functions typical of central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions do not develop simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.

  9. Impact of visual impairment on the lives of young adults in the Netherlands: a concept-mapping approach.

    PubMed

    Elsman, Ellen Bernadette Maria; van Rens, Gerardus Hermanus Maria Bartholomeus; van Nispen, Ruth Marie Antoinette

    2017-12-01

    While the impact of visual impairments on specific aspects of young adults' lives is well recognised, a systematic understanding of its impact on all life aspects is lacking. This study aims to provide an overview of life aspects affected by visual impairment in young adults (aged 18-25 years) using a concept-mapping approach. Visually impaired young adults (n = 22) and rehabilitation professionals (n = 16) participated in online concept-mapping workshops (brainstorm procedure), to explore how having a visual impairment influences the lives of young adults. Statements were categorised based on similarity and importance. Using multidimensional scaling, concept maps were produced and interpreted. A total of 59 and 260 statements were generated by young adults and professionals, respectively, resulting in 99 individual statements after checking and deduplication. The combined concept map revealed 11 clusters: work, study, information and regulations, social skills, living independently, computer, social relationships, sport and activities, mobility, leisure time, and hobby. The concept maps provided useful insight into activities influenced by visual impairments in young adults, which can be used by rehabilitation centres to improve their services. This might help in goal setting, rehabilitation referral and successful transition to adult life, ultimately increasing participation and quality of life. Implications for rehabilitation: Having a visual impairment affects various life aspects related to participation, including activities related to work, study, social skills and relationships, activities of daily living, leisure time and mobility. Concept-mapping helped to identify the life aspects affected by low vision, and quantify these aspects in terms of importance according to young adults and low vision rehabilitation professionals. 
Low vision rehabilitation centres should focus on all life aspects found in this study when identifying the needs of young adults, as this might aid goal setting and rehabilitation referral, ultimately leading to more successful transitions, better participation and quality of life.
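
    The concept maps above were produced with multidimensional scaling over statement-sorting data. As a minimal illustration of that step, the sketch below uses classical (Torgerson) MDS implemented with numpy on a small hypothetical dissimilarity matrix; the study's actual scaling procedure and data may differ.

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) multidimensional scaling: embed items in k
    dimensions so that Euclidean distances approximate the dissimilarities d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (d ** 2) @ j              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]         # top-k components
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Hypothetical symmetric dissimilarities for 4 statements: lower values mean
# participants sorted the statements into the same pile more often.
dissim = np.array([
    [0.0, 0.2, 0.9, 0.8],
    [0.2, 0.0, 0.8, 0.9],
    [0.9, 0.8, 0.0, 0.1],
    [0.8, 0.9, 0.1, 0.0],
])

coords = classical_mds(dissim)  # one 2-D map point per statement
```

Statements that were frequently sorted together (0 and 1, 2 and 3) land close together on the resulting map, which is the basis for drawing cluster boundaries.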

  10. Frontal and parietal theta burst TMS impairs working memory for visual-spatial conjunctions

    PubMed Central

    Morgan, Helen M.; Jackson, Margaret C.; van Koningsbruggen, Martijn G.; Shapiro, Kimron L.; Linden, David E.J.

    2013-01-01

    In tasks that selectively probe visual or spatial working memory (WM), frontal and posterior cortical areas show a segregation, with dorsal areas preferentially involved in spatial (e.g. location) WM and ventral areas in visual (e.g. object identity) WM. In a previous fMRI study [1], we showed that right parietal cortex (PC) was more active during WM for orientation, whereas left inferior frontal gyrus (IFG) was more active during colour WM. During WM for colour-orientation conjunctions, activity in these areas was intermediate to the level of activity for the single task preferred and non-preferred information. To examine whether these specialised areas play a critical role in coordinating visual and spatial WM to perform a conjunction task, we used theta burst transcranial magnetic stimulation (TMS) to induce a functional deficit. Compared to sham stimulation, TMS to right PC or left IFG selectively impaired WM for conjunctions but not single features. This is consistent with findings from visual search paradigms, in which frontal and parietal TMS selectively affects search for conjunctions compared to single features, and with combined TMS and functional imaging work suggesting that parietal and frontal regions are functionally coupled in tasks requiring integration of visual and spatial information. Our results thus elucidate mechanisms by which the brain coordinates spatially segregated processing streams and have implications beyond the field of working memory. PMID:22483548

  11. Frontal and parietal theta burst TMS impairs working memory for visual-spatial conjunctions.

    PubMed

    Morgan, Helen M; Jackson, Margaret C; van Koningsbruggen, Martijn G; Shapiro, Kimron L; Linden, David E J

    2013-03-01

    In tasks that selectively probe visual or spatial working memory (WM), frontal and posterior cortical areas show a segregation, with dorsal areas preferentially involved in spatial (e.g. location) WM and ventral areas in visual (e.g. object identity) WM. In a previous fMRI study [1], we showed that right parietal cortex (PC) was more active during WM for orientation, whereas left inferior frontal gyrus (IFG) was more active during colour WM. During WM for colour-orientation conjunctions, activity in these areas was intermediate to the level of activity for the single task preferred and non-preferred information. To examine whether these specialised areas play a critical role in coordinating visual and spatial WM to perform a conjunction task, we used theta burst transcranial magnetic stimulation (TMS) to induce a functional deficit. Compared to sham stimulation, TMS to right PC or left IFG selectively impaired WM for conjunctions but not single features. This is consistent with findings from visual search paradigms, in which frontal and parietal TMS selectively affects search for conjunctions compared to single features, and with combined TMS and functional imaging work suggesting that parietal and frontal regions are functionally coupled in tasks requiring integration of visual and spatial information. Our results thus elucidate mechanisms by which the brain coordinates spatially segregated processing streams and have implications beyond the field of working memory. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Auditory and visual interhemispheric communication in musicians and non-musicians.

    PubMed

    Woelfle, Rebecca; Grahn, Jessica A

    2013-01-01

    The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than that of visual information, perhaps because subcortical pathways play a greater role in auditory interhemispheric transfer.
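
    The crossed-uncrossed difference described above is simple arithmetic over mean reaction times. A minimal sketch, using hypothetical reaction-time values rather than the study's data:

```python
import statistics

def crossed_uncrossed_difference(crossed_rts, uncrossed_rts):
    """Estimate interhemispheric transmission time (ITT) as the CUD:
    mean crossed RT minus mean uncrossed RT, in the units of the inputs."""
    return statistics.mean(crossed_rts) - statistics.mean(uncrossed_rts)

# Hypothetical reaction times in milliseconds (illustrative only).
uncrossed = [312, 305, 298, 310]   # stimulus and responding hand, same hemisphere
crossed = [316, 309, 301, 315]     # stimulus contralateral to responding hand
cud = crossed_uncrossed_difference(crossed, uncrossed)  # 4.0 ms here
```

A positive CUD of a few milliseconds is the typical pattern, reflecting the extra callosal relay on crossed trials.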

  13. Vision after 53 years of blindness.

    PubMed

    Šikl, Radovan; Šimeček, Michal; Porubanová-Norquist, Michaela; Bezdíček, Ondřej; Kremláček, Jan; Stodůlka, Pavel; Fine, Ione; Ostrovsky, Yuri

    2013-01-01

    Several studies have shown that visual recovery after blindness that occurs early in life is never complete. The current study investigated whether an extremely long period of blindness might also cause a permanent impairment of visual performance, even in a case of adult-onset blindness. We examined KP, a 71-year-old man who underwent a successful sight-restoring operation after 53 years of blindness. A set of psychophysical tests designed to assess KP's face perception, object recognition, and visual space perception abilities was conducted six months and eight months after the surgery. The results demonstrate that despite a lengthy period of normal vision and rich pre-accident perceptual experience, KP did not fully integrate this experience, and his visual performance remained greatly compromised. This was particularly evident when the tasks targeted finer levels of perceptual processing. In addition to the decreased robustness of his memory representations, which was hypothesized as the main factor determining visual impairment, other factors that may have affected KP's performance were considered, including compromised visual functions, problems with perceptual organization, deficits in the simultaneous processing of visual information, and reduced cognitive abilities.

  14. Conscious visual memory with minimal attention.

    PubMed

    Pinto, Yair; Vandenbroucke, Annelinde R; Otten, Marte; Sligte, Ilja G; Seth, Anil K; Lamme, Victor A F

    2017-02-01

    Is conscious visual perception limited to the locations that a person attends? The remarkable phenomenon of change blindness, which shows that people miss nearly all unattended changes in a visual scene, suggests the answer is yes. However, change blindness is found after visual interference (a mask or a new scene), so that subjects have to rely on working memory (WM), which has limited capacity, to detect the change. Before such interference, however, a much larger capacity store, called fragile memory (FM), which is easily overwritten by newly presented visual information, is present. Whether these different stores depend equally on spatial attention is central to the debate on the role of attention in conscious vision. In 2 experiments, we found that minimizing spatial attention almost entirely erases visual WM, as expected. Critically, FM remains largely intact. Moreover, minimally attended FM responses yield accurate metacognition, suggesting that conscious memory persists with limited spatial attention. Together, our findings help resolve the fundamental issue of how attention affects perception: Both visual consciousness and memory can be supported by only minimal attention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Visual cortical activity reflects faster accumulation of information from cortically blind fields

    PubMed Central

    Martin, Tim; Das, Anasuya; Huxlin, Krystel R.

    2012-01-01

    Brain responses (from functional magnetic resonance imaging) and components of information processing were investigated in nine cortically blind observers performing a global direction discrimination task. Three of these subjects showed responses in perilesional cortex to blind field stimulation, whereas the others did not. We used the EZ-diffusion model of decision making to understand how cortically blind subjects make a perceptual decision on stimuli presented within their blind field. We found that these subjects had slower accumulation of information in their blind fields as compared with their good fields and with intact controls. Within cortically blind subjects, activity in perilesional tissue, V3A and hMT+ was associated with a faster accumulation of information for deciding direction of motion of stimuli presented in the blind field. This result suggests that the rate of information accumulation is a critical factor in the degree of impairment in cortical blindness and varies greatly among affected individuals. Retraining paradigms that seek to restore visual functions might benefit from focusing on increasing the rate of information accumulation. PMID:23169923
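
    The EZ-diffusion model used above has published closed-form estimators (Wagenmakers and colleagues, 2007) that map proportion correct, RT variance, and mean RT onto drift rate (the information accumulation rate), boundary separation, and non-decision time. A minimal sketch, assuming the conventional scaling parameter s = 0.1; the input values below are illustrative, not the patients' data:

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Closed-form EZ-diffusion estimates.
    pc:  proportion of correct responses (an edge correction is needed
         when pc is exactly 0, 0.5, or 1)
    vrt: variance of correct response times (s^2)
    mrt: mean of correct response times (s)
    Returns (drift rate v, boundary separation a, non-decision time Ter)."""
    if pc in (0.0, 0.5, 1.0):
        raise ValueError("apply an edge correction for pc in {0, 0.5, 1}")
    s2 = s ** 2
    logit = math.log(pc / (1.0 - pc))
    x = logit * (logit * pc ** 2 - logit * pc + pc - 0.5) / vrt
    v = math.copysign(1.0, pc - 0.5) * s * x ** 0.25     # drift rate
    a = s2 * logit / v                                   # boundary separation
    y = -v * a / s2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    return v, a, mrt - mdt                               # Ter = MRT - mean decision time

# Illustrative inputs: 80.2% correct, RT variance 0.112 s^2, mean RT 0.723 s.
v, a, ter = ez_diffusion(pc=0.802, vrt=0.112, mrt=0.723)
```

Slower accumulation in a blind field would show up as a smaller drift rate v for blind-field stimuli than for good-field stimuli.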

  16. Visual imagery in autobiographical memory: The role of repeated retrieval in shifting perspective

    PubMed Central

    Butler, Andrew C.; Rice, Heather J.; Wooldridge, Cynthia L.; Rubin, David C.

    2016-01-01

    Recent memories are generally recalled from a first-person perspective whereas older memories are often recalled from a third-person perspective. We investigated how repeated retrieval affects the availability of visual information, and whether it could explain the observed shift in perspective with time. In Experiment 1, participants performed mini-events and nominated memories of recent autobiographical events in response to cue words. Next, they described their memory for each event and rated its phenomenological characteristics. Over the following three weeks, they repeatedly retrieved half of the mini-event and cue-word memories. No instructions were given about how to retrieve the memories. In Experiment 2, participants were asked to adopt either a first- or third-person perspective during retrieval. One month later, participants retrieved all of the memories and again provided phenomenology ratings. When first-person visual details from the event were repeatedly retrieved, this information was retained better and the shift in perspective was slowed. PMID:27064539

  17. Emotion, Affect, and Risk Communication with Older Adults: Challenges and Opportunities

    PubMed Central

    Finucane, Melissa L.

    2008-01-01

    Recent research suggests that emotion, affect, and cognition play important roles in risk perception and that their roles in judgment and decision-making processes may change over the lifespan. This paper discusses how emotion and affect might help or hinder risk communication with older adults. Currently, there are few guidelines for developing effective risk messages for the world’s aging population, despite the array of complex risk decisions that come with increasing age and the importance of maintaining good decision making in later life. Age-related declines in cognitive abilities such as memory and processing speed, increased reliance on automatic processes, and adaptive motivational shifts toward focusing more on affective (especially positive) information mean that older and younger adults may respond differently to risk messages. Implications for specific risk information formats (probabilities, frequencies, visual displays, and narratives) are discussed and directions for future research are highlighted. PMID:19169420

  18. Visual Complexity and Affect: Ratings Reflect More Than Meets the Eye.

    PubMed

    Madan, Christopher R; Bayer, Janine; Gamer, Matthias; Lonsdorf, Tina B; Sommer, Tobias

    2017-01-01

    Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term 'visual complexity.' Visual complexity can be described as follows: "a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components." Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an 'arousal-complexity bias' to be a robust phenomenon. Moreover, we found this bias could be attenuated when explicitly indicated but did not correlate with inter-individual difference measures of affective processing, and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing as it has been described for the greater vividness of arousing pictures. The described arousal-complexity bias is also of relevance from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli.

  19. Visual Complexity and Affect: Ratings Reflect More Than Meets the Eye

    PubMed Central

    Madan, Christopher R.; Bayer, Janine; Gamer, Matthias; Lonsdorf, Tina B.; Sommer, Tobias

    2018-01-01

    Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term ‘visual complexity.’ Visual complexity can be described as follows: “a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components.” Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an ‘arousal-complexity bias’ to be a robust phenomenon. Moreover, we found this bias could be attenuated when explicitly indicated but did not correlate with inter-individual difference measures of affective processing, and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing as it has been described for the greater vividness of arousing pictures. The described arousal-complexity bias is also of relevance from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli. PMID:29403412

  20. Visual search performance among persons with schizophrenia as a function of target eccentricity.

    PubMed

    Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M

    2010-03-01

    The current study investigated one possible mechanism of impaired visual attention among patients with schizophrenia: a reduced visual span. Visual span is the region of the visual field from which one can extract information during a single eye fixation. This study hypothesized that schizophrenia-related visual search impairment is mediated, in part, by a smaller visual span. To test this hypothesis, 23 patients with schizophrenia and 22 healthy controls completed a visual search task where the target was pseudorandomly presented at different distances from the center of the display. Response times were analyzed as a function of search condition (feature vs. conjunctive), display size, and target eccentricity. Consistent with previous reports, patient search times were more adversely affected as the number of search items increased in the conjunctive search condition. Importantly, however, patients' conjunctive search times were also impacted to a greater degree by target eccentricity. Moreover, a significant impairment in patients' visual search performance was only evident when targets were more eccentric, and their performance was more similar to that of healthy controls when the target was located closer to the center of the search display. These results support the hypothesis that a narrower visual span may underlie impaired visual search performance among patients with schizophrenia. Copyright 2010 APA, all rights reserved

  1. Temporal Processing in the Visual Cortex of the Awake and Anesthetized Rat.

    PubMed

    Aasebø, Ida E J; Lepperød, Mikkel E; Stavrinou, Maria; Nøkkevangen, Sandra; Einevoll, Gaute; Hafting, Torkel; Fyhn, Marianne

    2017-01-01

    The activity pattern and temporal dynamics within and between neuron ensembles are essential features of information processing and are believed to be profoundly affected by anesthesia. Much of our general understanding of sensory information processing, including computational models aimed at mathematically simulating sensory information processing, relies on parameters derived from recordings conducted on animals under anesthesia. Due to the high variety of neuronal subtypes in the brain, population-based estimates of the impact of anesthesia may conceal unit- or ensemble-specific effects of the transition between states. Using tetrodes chronically implanted in the primary visual cortex (V1) of rats, we conducted extracellular recordings of single units and followed the same cell ensembles in the awake and anesthetized states. We found that the transition from wakefulness to anesthesia involves unpredictable changes in temporal response characteristics. The latency of single-unit responses to visual stimulation was delayed in anesthesia, with large individual variations between units. Pair-wise correlations between units increased under anesthesia, indicating more synchronized activity. Further, the units within an ensemble show reproducible temporal activity patterns in response to visual stimuli that change between states, suggesting state-dependent sequences of activity. The current dataset, with recordings from the same neural ensembles across states, is well suited for validating and testing computational network models. This can lead to testable predictions, bring a deeper understanding of the experimental findings, and improve models of neural information processing. Here, we exemplify such a workflow using a Brunel network model.

  2. Temporal Processing in the Visual Cortex of the Awake and Anesthetized Rat

    PubMed Central

    Aasebø, Ida E. J.; Stavrinou, Maria; Nøkkevangen, Sandra; Einevoll, Gaute

    2017-01-01

    The activity pattern and temporal dynamics within and between neuron ensembles are essential features of information processing and are believed to be profoundly affected by anesthesia. Much of our general understanding of sensory information processing, including computational models aimed at mathematically simulating sensory information processing, relies on parameters derived from recordings conducted on animals under anesthesia. Due to the high variety of neuronal subtypes in the brain, population-based estimates of the impact of anesthesia may conceal unit- or ensemble-specific effects of the transition between states. Using tetrodes chronically implanted in the primary visual cortex (V1) of rats, we conducted extracellular recordings of single units and followed the same cell ensembles in the awake and anesthetized states. We found that the transition from wakefulness to anesthesia involves unpredictable changes in temporal response characteristics. The latency of single-unit responses to visual stimulation was delayed in anesthesia, with large individual variations between units. Pair-wise correlations between units increased under anesthesia, indicating more synchronized activity. Further, the units within an ensemble show reproducible temporal activity patterns in response to visual stimuli that change between states, suggesting state-dependent sequences of activity. The current dataset, with recordings from the same neural ensembles across states, is well suited for validating and testing computational network models. This can lead to testable predictions, bring a deeper understanding of the experimental findings, and improve models of neural information processing. Here, we exemplify such a workflow using a Brunel network model. PMID:28791331

  3. [Allocation of attentional resource and monitoring processes under rapid serial visual presentation].

    PubMed

    Nishiura, K

    1998-08-01

    With the use of rapid serial visual presentation (RSVP), the present study investigated the cause of target intrusion errors and functioning of monitoring processes. Eighteen students participated in Experiment 1, and 24 in Experiment 2. In Experiment 1, different target intrusion errors were found depending on the kind of letters: romaji, hiragana, and kanji. In Experiment 2, stimulus set size and context information were manipulated in an attempt to explore the cause of post-target intrusion errors. Results showed that as stimulus set size increased, the post-target intrusion errors also increased, but contextual information did not affect the errors. Results concerning mean report probability indicated that increased allocation of attentional resource to the response-defining dimension was the cause of the errors. In addition, results concerning confidence rating showed that monitoring of temporal and contextual information was extremely accurate, but it was not so for stimulus information. These results suggest that attentional resource is different from monitoring resource.

  4. Naturalistic distraction and driving safety in older drivers

    PubMed Central

    Aksan, Nazan; Dawson, Jeffrey D.; Emerson, Jamie L.; Yu, Lixi; Uc, Ergun Y.; Anderson, Steven W.; Rizzo, Matthew

    2013-01-01

    Objective This study aimed to quantify and compare performance of middle-aged and older drivers during a naturalistic distraction paradigm (visual search for roadside targets) and predict older driver performance given functioning in visual, motor, and cognitive domains. Background Distracted driving can imperil healthy adults and may disproportionally affect the safety of older drivers with visual, motor, and cognitive decline. Methods Two hundred and three drivers, 120 healthy older (61 men and 59 women, ages 65 years or greater) and 83 middle-aged drivers (38 men and 45 women, ages 40–64 years), participated in an on-road test in an instrumented vehicle. Outcome measures included performance in roadside target identification (traffic signs and restaurants) and concurrent driver safety. Differences in visual, motor, and cognitive functioning served as predictors. Results Older drivers identified fewer landmarks and drove slower but committed more safety errors than middle-aged drivers. Greater familiarity with local roads benefited performance of middle-aged but not older drivers. Visual cognition predicted both traffic sign identification and safety errors while executive function predicted traffic sign identification over and above vision. Conclusion Older adults are susceptible to driving safety errors while distracted by common secondary visual search tasks that are inherent to driving. The findings underscore that age-related cognitive decline affects older driver management of driving tasks at multiple levels, and can help inform the design of on-road tests and interventions for older drivers. PMID:23964422

  5. This person is saying bad things about you: The influence of physically and socially threatening context information on the processing of inherently neutral faces.

    PubMed

    Klein, Fabian; Iffland, Benjamin; Schindler, Sebastian; Wabnitz, Pascal; Neuner, Frank

    2015-12-01

    Recent studies have shown that the perceptual processing of human faces is affected by context information, such as previous experiences and information about the person represented by the face. The present study investigated the impact of verbally presented information about the person that varied with respect to affect (neutral, physically threatening, socially threatening) and reference (self-referred, other-referred) on the processing of faces with an inherently neutral expression. Stimuli were presented in a randomized presentation paradigm. Event-related potential (ERP) analysis demonstrated a modulation of the evoked potentials by reference at the EPN (early posterior negativity) and LPP (late positive potential) stage and an enhancing effect of affective valence on the LPP (700-1000 ms), with socially threatening context information leading to the most pronounced LPP amplitudes. We also found an interaction between reference and valence, with self-referred neutral context information leading to a more pronounced LPP than other-referred neutral context information. Our results indicate an impact of self-reference on early, presumably automatic processing stages and also a strong impact of valence on later stages. Using a randomized presentation paradigm, this study confirms that context information affects the visual processing of faces, ruling out possible confounding factors such as facial configuration or conditional learning effects.

  6. Bringing color to emotion: The influence of color on attentional bias to briefly presented emotional images.

    PubMed

    Bekhtereva, Valeria; Müller, Matthias M

    2017-10-01

    Is color a critical feature in emotional content extraction and involuntary attentional orienting toward affective stimuli? Here we used briefly presented emotional distractors to investigate the extent to which color information can influence the time course of attentional bias in early visual cortex. While participants performed a demanding visual foreground task, complex unpleasant and neutral background images were displayed in color or grayscale format for a short period of 133 ms and were immediately masked. Such a short presentation poses a challenge for visual processing. In the visual detection task, participants attended to flickering squares that elicited the steady-state visual evoked potential (SSVEP), allowing us to analyze the temporal dynamics of the competition for processing resources in early visual cortex. Concurrently we measured the visual event-related potentials (ERPs) evoked by the unpleasant and neutral background scenes. The results showed (a) that the distraction effect was greater with color than with grayscale images and (b) that it lasted longer with colored unpleasant distractor images. Furthermore, classical and mass-univariate ERP analyses indicated that, when presented in color, emotional scenes elicited more pronounced early negativities (N1-EPN) relative to neutral scenes, than when the scenes were presented in grayscale. Consistent with neural data, unpleasant scenes were rated as being more emotionally negative and received slightly higher arousal values when they were shown in color than when they were presented in grayscale. Taken together, these findings provide evidence for the modulatory role of picture color on a cascade of coordinated perceptual processes: by facilitating the higher-level extraction of emotional content, color influences the duration of the attentional bias to briefly presented affective scenes in lower-tier visual areas.
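
    The SSVEP used above as an index of resource competition is conventionally quantified as the amplitude of the EEG spectrum at the flicker (tagging) frequency. A minimal numpy sketch on synthetic data; the sampling rate, tag frequency, and amplitudes below are illustrative, not the study's parameters:

```python
import numpy as np

fs = 500.0                          # sampling rate in Hz (illustrative)
t = np.arange(0, 4.0, 1.0 / fs)     # 4 s of signal
f_tag = 15.0                        # flicker (tagging) frequency, illustrative

# Synthetic EEG: an SSVEP at the tagged frequency plus broadband noise.
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * f_tag * t) + rng.normal(0.0, 1.0, t.size)

# Amplitude spectrum via FFT; the SSVEP appears as a peak at f_tag.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
ssvep_amp = spectrum[np.argmin(np.abs(freqs - f_tag))]
```

Tracking this amplitude over time (e.g. in a sliding window) is what allows the temporal dynamics of the attentional bias toward the distractor images to be resolved.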

  7. Formulating qualitative features using interactive visualization for analysis of multivariate spatiotemporal data

    NASA Astrophysics Data System (ADS)

    Porter, M.; Hill, M. C.; Pierce, S. A.; Gil, Y.; Pennington, D. D.

    2017-12-01

    DiscoverWater is a web-based visualization tool developed to enable the visual representation of data and thus aid scientific and societal understanding of hydrologic systems. Open data sources are coalesced to, for example, illustrate the impacts on streamflow of irrigation withdrawals. Scientists and stakeholders are informed through synchronized time-series data plots that correlate multiple spatiotemporal datasets and an interactive time-evolving map that provides a spatial analytical context. Together, these components elucidate trends so that the user can envision the relations between groundwater-surface water interactions, the impacts of pumping on these interactions, and the interplay of climate. Aligning data in this manner supports interdisciplinary knowledge discovery and motivates dialogue about system processes, which we seek to enhance through qualitative features informed by quantitative models. DiscoverWater is demonstrated using two field cases. First, it is used to visualize data sets from the High Plains aquifer, where reservoir- and groundwater-supported irrigation has affected the Arkansas River in western Kansas. Second, data and model results from the Barton Springs segment of the Edwards aquifer in Texas reveal the effects of regional pumping on this important urbanizing aquifer system. Identifying what is interesting about the data and the modeled system in the two different case studies is a step towards moving typically static visualization capabilities to an adaptive framework. Additionally, the dashboard interface incorporates both quantitative and qualitative information about distinctive case studies in a machine-readable form, such that a catalog of qualitative models can capture subject matter expertise alongside associated datasets. 
As the catalog is expanded to include other case studies, the collection has potential to establish a standard framework able to inform intelligent system reasoning.

  8. The anatomy of object recognition--visual form agnosia caused by medial occipitotemporal stroke.

    PubMed

    Karnath, Hans-Otto; Rüter, Johannes; Mandler, André; Himmelbach, Marc

    2009-05-06

    The influential model of visual information processing by Milner and Goodale (1995) has suggested a dissociation between action- and perception-related processing in a dorsal versus ventral stream projection. It was inspired substantially by the observation of a double dissociation of disturbed visual action versus perception in patients with optic ataxia on the one hand and patients with visual form agnosia (VFA) on the other. Unfortunately, almost all cases with VFA reported so far suffered from inhalational intoxication, the majority with carbon monoxide (CO). Since CO induces a diffuse and widespread pattern of neuronal and white matter damage throughout the whole brain, it was difficult to draw precise conclusions from these patients with VFA about the selective role of ventral stream structures in shape and orientation perception. Here, we report patient J.S., who demonstrated VFA after a well circumscribed brain lesion of stroke etiology. Like the famous patient D.F. with VFA after CO intoxication studied by Milner, Goodale, and coworkers (Goodale et al., 1991, 1994; Milner et al., 1991; Servos et al., 1995; Mon-Williams et al., 2001a,b; Wann et al., 2001; Westwood et al., 2002; McIntosh et al., 2004; Schenk and Milner, 2006), J.S. showed an obvious dissociation between disturbed visual perception of shape and orientation information on the one side and preserved visuomotor abilities based on the same information on the other. In both hemispheres, damage primarily affected the fusiform and the lingual gyri as well as the adjacent posterior cingulate gyrus. We conclude that these medial structures of the ventral occipitotemporal cortex are integral to the normal flow of shape and contour information into the ventral stream system that allows objects to be recognized.

  9. Acting to gain information

    NASA Technical Reports Server (NTRS)

    Rosenschein, Stanley J.; Burns, J. Brian; Chapman, David; Kaelbling, Leslie P.; Kahn, Philip; Nishihara, H. Keith; Turk, Matthew

    1993-01-01

    This report is concerned with agents that act to gain information. In previous work, we developed agent models combining qualitative modeling with real-time control. That work, however, focused primarily on actions that affect physical states of the environment. The current study extends that work by explicitly considering problems of active information-gathering and by exploring specialized aspects of information-gathering in computational perception, learning, and language. In our theoretical investigations, we analyzed agents into their perceptual and action components and identified these with elements of a state-machine model of control. The mathematical properties of each were developed in isolation, and interactions were then studied. We considered the complexity dimension and the uncertainty dimension and related these to intelligent-agent design issues. We also explored active information gathering in visual processing. Working within the active vision paradigm, we developed a concept of 'minimal meaningful measurements' suitable for demand-driven vision. We then developed and tested an architecture for ongoing recognition and interpretation of visual information. In the area of information gathering through learning, we explored techniques for coping with combinatorial complexity. We also explored information gathering through explicit linguistic action by considering the nature of conversational rules, coordination, and situated communication behavior.

  10. Age, cognitive style, and traffic signs.

    PubMed

    Lambert, L D; Fleury, M

    1994-04-01

    This study assessed the efficiency with which young and older adults of varying field dependence extract information from traffic signs. It also identified some visual attributes of signs which affect recognition time. Two experiments were conducted. In Exp. 1, digitized signs, embedded in rural and urban backgrounds, were presented on a computer monitor. Subjects indicated on which side a target sign had appeared. Analysis showed that recognition times were dependent on age and field-dependence scores. Also, visual backgrounds and spatial frequency of pictographs affected RTs. In Exp. 2, recognition RT for two signs with redesigned pictographs was measured, as well as the time taken to detect the signs. The signs showing reduced spatial frequency were the fastest to recognize, although no effect was noticed during detection. The subjects who showed the worst performance when facing the original signs benefited the most from the modifications.

  11. The effects of 3D interactive animated graphics on student learning and attitudes in computer-based instruction

    NASA Astrophysics Data System (ADS)

    Moon, Hye Sun

    Visuals are extensively used as instructional tools in education to present spatially-based information. Recent computer technology allows the generation of 3D animated visuals to extend the presentation in computer-based instruction. Animated visuals in 3D representation not only possess motivational value that promotes positive attitudes toward instruction but also facilitate learning when the subject matter requires dynamic motion and 3D visual cues. In this study, three questions are explored: (1) how 3D graphics affect student learning and attitude, in comparison with 2D graphics; (2) how animated graphics affect student learning and attitude, in comparison with static graphics; and (3) whether 3D graphics, when supported by interactive animation, provide the most effective visual cues for improving learning and developing positive attitudes. A total of 145 eighth-grade students participated in a 2 x 2 factorial design study. The subjects were randomly assigned to one of four computer-based instructions: 2D static; 2D animated; 3D static; and 3D animated. The results indicated that: (1) Students in the 3D graphic condition exhibited more positive attitudes toward instruction than those in the 2D graphic condition. No group differences were found between the posttest score of the 3D graphic condition and that of the 2D graphic condition. However, students in the 3D graphic condition took less time for information retrieval on the posttest than those in the 2D graphic condition. (2) Students in the animated graphic condition exhibited slightly more positive attitudes toward instruction than those in the static graphic condition. No group differences were found between the posttest score of the animated graphic condition and that of the static graphic condition. However, students in the animated graphic condition took less time for information retrieval on the posttest than those in the static graphic condition. 
(3) Students in the 3D animated graphic condition exhibited more positive attitudes toward instruction than those in other treatment conditions (2D static, 2D animated, and 3D static conditions). No group differences were found in the posttest scores among four treatment conditions. However, students in the 3D animated condition took less time for information retrieval on posttest than those in other treatment conditions.

  12. Using GIS in ecological management: green assessment of the impacts of petroleum activities in the state of Texas.

    PubMed

    Merem, Edmund; Robinson, Bennetta; Wesley, Joan M; Yerramilli, Sudha; Twumasi, Yaw A

    2010-05-01

    Geo-information technologies are valuable tools for ecological assessment in stressed environments. Visualizing natural features prone to disasters from the oil sector spatially not only helps in focusing the scope of environmental management with records of changes in affected areas, but it also furnishes information on the pace at which resource extraction affects nature. Notwithstanding the recourse to ecosystem protection, geo-spatial analysis of the impacts remains sketchy. This paper uses GIS and descriptive statistics to assess the ecological impacts of petroleum extraction activities in Texas. While the focus ranges from issues to mitigation strategies, the results point to growth in indicators of ecosystem decline.

  13. Using GIS in Ecological Management: Green Assessment of the Impacts of Petroleum Activities in the State of Texas

    PubMed Central

    Merem, Edmund; Robinson, Bennetta; Wesley, Joan M.; Yerramilli, Sudha; Twumasi, Yaw A.

    2010-01-01

    Geo-information technologies are valuable tools for ecological assessment in stressed environments. Visualizing natural features prone to disasters from the oil sector spatially not only helps in focusing the scope of environmental management with records of changes in affected areas, but it also furnishes information on the pace at which resource extraction affects nature. Notwithstanding the recourse to ecosystem protection, geo-spatial analysis of the impacts remains sketchy. This paper uses GIS and descriptive statistics to assess the ecological impacts of petroleum extraction activities in Texas. While the focus ranges from issues to mitigation strategies, the results point to growth in indicators of ecosystem decline. PMID:20623014

  14. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    PubMed Central

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  15. Distortions of Subjective Time Perception Within and Across Senses

    PubMed Central

    van Wassenhove, Virginie; Buonomano, Dean V.; Shimojo, Shinsuke; Shams, Ladan

    2008-01-01

    Background The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information, and visual events were never perceived as shorter than their actual durations. Conclusions/Significance These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions. PMID:18197248

  16. Manipulation of the extrastriate frontal loop can resolve visual disability in blindsight patients.

    PubMed

    Badgaiyan, Rajendra D

    2012-12-01

    Patients with blindsight are not consciously aware of visual stimuli in the affected field of vision but retain nonconscious perception. This disability can be resolved if nonconsciously perceived information can be brought to their conscious awareness. It can be accomplished by manipulating the neural network of visual awareness. To understand this network, we studied the pattern of cortical activity elicited during processing of visual stimuli with or without conscious awareness. The analysis indicated that a re-entrant signaling loop between the area V3A (located in the extrastriate cortex) and the frontal cortex is critical for processing conscious awareness. The loop is activated by visual signals relayed in the primary visual cortex, which is damaged in blindsight patients. Because of the damage, the V3A-frontal loop is not activated and the signals are not processed for conscious awareness. These patients however continue to receive visual signals through the lateral geniculate nucleus. Since these signals do not activate the V3A-frontal loop, the stimuli are not consciously perceived. If visual input from the lateral geniculate nucleus is appropriately manipulated and made to activate the V3A-frontal loop, blindsight patients can regain conscious vision.

  17. Reference frames for spatial frequency in face representation differ in the temporal visual cortex and amygdala.

    PubMed

    Inagaki, Mikio; Fujita, Ichiro

    2011-07-13

    Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.

  18. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.
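The fusion of conventional 2D features with depth-derived features described above can be illustrated with a toy linear combination. This is a minimal sketch, not the paper's learned model: the fusion weights, the single intensity feature, and the "closer is more salient" inversion of the depth map are all illustrative assumptions.

```python
# Minimal sketch of fusing 2-D feature maps with a depth map into a saliency
# estimate. The linear-fusion weights are illustrative assumptions; the study
# learns its model from eye-fixation data rather than fixing weights by hand.
import numpy as np

def normalize(m):
    """Scale a feature map to [0, 1]."""
    m = m.astype(float)
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m)

def fuse_saliency(feature_maps, depth_map, w_feat=0.7, w_depth=0.3):
    """Weighted fusion of averaged 2-D features with inverted depth
    (closer pixels contribute higher saliency)."""
    feat = normalize(np.mean([normalize(f) for f in feature_maps], axis=0))
    closeness = 1.0 - normalize(depth_map)   # near = salient
    return w_feat * feat + w_depth * closeness

# Toy 4x4 scene: one bright, close-up pixel should score highest.
intensity = np.zeros((4, 4)); intensity[1, 1] = 1.0
depth = np.full((4, 4), 10.0); depth[1, 1] = 1.0    # nearest point
sal = fuse_saliency([intensity], depth)              # peak at (1, 1)
```

The study's finding that depth helps most for close-up objects in cluttered backgrounds corresponds, in this sketch, to the `closeness` term boosting regions the 2D features alone would miss.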

  19. Sequential vs simultaneous encoding of spatial information: a comparison between the blind and the sighted.

    PubMed

    Ruotolo, Francesco; Ruggiero, Gennaro; Vinciguerra, Michela; Iachini, Tina

    2012-02-01

    The aim of this research is to assess whether the crucial factor in determining the characteristics of blind people's spatial mental images is the visual impairment per se or the processing style imposed by the dominant perceptual modality used to acquire spatial information, i.e. simultaneous (vision) vs sequential (kinaesthesis). Participants were asked to learn six positions in a large parking area via movement alone (congenitally blind, adventitiously blind, blindfolded sighted) or with vision plus movement (simultaneous sighted, sequential sighted), and then to mentally scan between positions in the path. The crucial manipulation concerned the sequential sighted group. Their visual exploration was made sequential by placing visual obstacles within the pathway in such a way that they could not simultaneously see the positions along the pathway. The results revealed a significant time/distance linear relation in all tested groups. However, the linear component was lower in sequential sighted and blind participants, especially congenitally blind ones. Sequential sighted and congenitally blind participants showed an almost overlapping performance. Differences between groups became evident when mentally scanning farther distances (more than 5 m). This threshold effect may reveal processing limitations due to the need to integrate and update spatial information. Overall, the results suggest that the characteristics of the processing style, rather than the visual impairment per se, affect blind people's spatial mental images.
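The time/distance linear relation reported above is typically quantified by regressing mental scanning time on the distance between landmark pairs: a strong positive slope is taken as evidence that the mental image preserves metric properties. A minimal sketch of that analysis with wholly hypothetical data points:

```python
# Sketch of the time/distance analysis used in mental-scanning studies:
# regress scanning time on inter-landmark distance and inspect the linear
# component. The distances and times below are hypothetical, not study data.
import numpy as np

distances = np.array([1.0, 2.5, 4.0, 5.5, 7.0])    # metres (hypothetical)
scan_times = np.array([0.8, 1.1, 1.4, 1.7, 2.0])   # seconds (hypothetical)

# Least-squares fit: time = slope * distance + intercept
slope, intercept = np.polyfit(distances, scan_times, 1)

# The squared Pearson correlation quantifies the linear component that the
# study reports as weaker in sequential sighted and congenitally blind groups.
r = np.corrcoef(distances, scan_times)[0, 1]
r_squared = r ** 2
```

A group whose images are built from sequential input would, on this analysis, show a lower `r_squared` (and a noisier slope) than a group with simultaneous visual access.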

  20. Interactive visualization for scar transmurality in cardiac resynchronization therapy

    NASA Astrophysics Data System (ADS)

    Reiml, Sabrina; Toth, Daniel; Panayiotou, Maria; Fahn, Bernhard; Karim, Rashed; Behar, Jonathan M.; Rinaldi, Christopher A.; Razavi, Reza; Rhode, Kawal S.; Brost, Alexander; Mountney, Peter

    2016-03-01

    Heart failure is a serious disease affecting about 23 million people worldwide. Cardiac resynchronization therapy is used to treat patients suffering from symptomatic heart failure. However, 30% to 50% of patients have limited clinical benefit. One of the main causes is suboptimal placement of the left ventricular lead. Pacing in areas of myocardial scar correlates with poor clinical outcomes. Therefore precise knowledge of the individual patient's scar characteristics is critical for delivering tailored treatments capable of improving response rates. Current research methods for scar assessment either map information to an alternative non-anatomical coordinate system or they use the image coordinate system but lose critical information about scar extent and scar distribution. This paper proposes two interactive methods for visualizing relevant scar information. A 2-D slice based approach with a scar mask overlaid on a 16 segment heart model and a 3-D layered mesh visualization which allows physicians to scroll through layers of scar from endocardium to epicardium. These complementary methods enable physicians to evaluate scar location and transmurality during planning and guidance. Six physicians evaluated the proposed system by identifying target regions for lead placement. With the proposed method more target regions could be identified.

  1. Using Tests Designed to Measure Individual Sensorimotor Subsystem Perfomance to Predict Locomotor Adaptability

    NASA Technical Reports Server (NTRS)

    Peters, B. T.; Caldwell, E. E.; Batson, C. D.; Guined, J. R.; DeDios, Y. E.; Stepanyan, V.; Gadd, N. E.; Szecsy, D. L.; Mulavara, A. P.; Seidler, R. D.

    2014-01-01

    Astronauts experience sensorimotor disturbances during the initial exposure to microgravity and during the readaptation phase following a return to a gravitational environment. These alterations may disrupt the ability to perform mission-critical functions during and after these gravitational transitions. Astronauts show significant inter-subject variation in adaptive capability following gravitational transitions. The way each individual's brain synthesizes the available visual, vestibular and somatosensory information is likely the basis for much of the variation. Identifying the presence of biases in each person's use of information available from these sensorimotor subsystems, and relating it to their ability to adapt to a novel locomotor task, will allow us to customize a training program designed to enhance sensorimotor adaptability. Eight tests are being used to measure sensorimotor subsystem performance. Three of these use measures of body sway to characterize balance during varying sensorimotor challenges. The effect of vision is assessed by repeating conditions with eyes open and eyes closed. Standing on foam, or on a support surface that pitches to maintain a constant ankle angle, provides somatosensory challenges. Information from the vestibular system is isolated when vision is removed and the support surface is compromised, and it is challenged when the tasks are done while the head is in motion. The integration and dominance of visual information is assessed in three additional tests. The Rod & Frame Test measures the degree to which a subject's perception of the visual vertical is affected by the orientation of a tilted frame in the periphery. Locomotor visual dependence is determined by assessing how much an oscillating virtual visual world affects a treadmill-walking subject. In the third of the visual manipulation tests, subjects walk an obstacle course while wearing up-down reversing prisms. 
The two remaining tests include direct measures of knee and ankle proprioception and a functional movement assessment that screens for movement restrictions and asymmetries. To assess locomotor adaptability, subjects walk for twenty minutes on a treadmill that oscillates laterally at 0.3 Hz. Throughout the test, metabolic cost provides a measure of exertion and step frequency provides a measure of stability. Additionally, at four points during the perturbation period, reaction time tests are used to probe changes in the amount of mental effort being used to perform the task. As with the adaptive capability observed in astronauts during gravitational transitions, our data show significant variability between subjects. To aid in the analysis of the results, custom software tools have been developed to enhance the visualization of the large number of output variables. Preliminary analyses of the data collected to date do not show a strong relationship between adaptability and any single predictor variable. Analysis continues, to identify a multifactorial predictor outcome "signature" that will inform us of locomotor adaptability.

  2. Cal-Adapt: California's Climate Data Resource and Interactive Toolkit

    NASA Astrophysics Data System (ADS)

    Thomas, N.; Mukhtyar, S.; Wilhelm, S.; Galey, B.; Lehmer, E.

    2016-12-01

    Cal-Adapt is a web-based application that provides an interactive toolkit and information clearinghouse to help agencies, communities, local planners, resource managers, and the public understand climate change risks and impacts at the local level. The website offers interactive, visually compelling, and useful data visualization tools that show how climate change might affect California using downscaled continental climate data. Cal-Adapt is supporting California's Fourth Climate Change Assessment by providing access to the wealth of modeled and observed data and adaptation-related information produced by California's scientific community. The site has been developed by UC Berkeley's Geospatial Innovation Facility (GIF) in collaboration with the California Energy Commission's (CEC) Research Program. The Cal-Adapt website allows decision makers, scientists and residents of California to turn research results and climate projections into effective adaptation decisions and policies. Since its release to the public in June 2011, Cal-Adapt has been visited by more than 94,000 unique visitors from over 180 countries, all 50 U.S. states, and 689 California localities. We will present several key visualizations that have been employed by Cal-Adapt's users to support their efforts to understand local impacts of climate change, indicate the breadth of data available, and delineate specific use cases. Recently, CEC and GIF have been developing and releasing Cal-Adapt 2.0, which includes updates and enhancements that increase its ease of use, information value, visualization tools, and data accessibility. We showcase how Cal-Adapt is evolving in response to feedback from a variety of sources to present finer-resolution downscaled data, and offer an open API that allows other organizations to access Cal-Adapt climate data and build domain-specific visualization and planning tools. 
Through a combination of locally relevant information, visualization tools, and access to primary data, Cal-Adapt allows users to investigate how the climate is projected to change in their areas of interest.

  3. Contrast sensitivity test and conventional and high frequency audiometry: information beyond that required to prescribe lenses and headsets

    NASA Astrophysics Data System (ADS)

    Comastri, S. A.; Martin, G.; Simon, J. M.; Angarano, C.; Dominguez, S.; Luzzi, F.; Lanusse, M.; Ranieri, M. V.; Boccio, C. M.

    2008-04-01

    In Optometry and in Audiology, the routine tests to prescribe correction lenses and headsets are respectively the visual acuity test (the first chart with letters was developed by Snellen in 1862) and conventional pure tone audiometry (the first audiometer with electrical current was devised by Hartmann in 1878). At present there are psychophysical non invasive tests that, besides evaluating visual and auditory performance globally and even in cases catalogued as normal according to routine tests, supply early information regarding diseases such as diabetes, hypertension, renal failure, cardiovascular problems, etc. Concerning Optometry, one of these tests is the achromatic luminance contrast sensitivity test (introduced by Schade in 1956). Concerning Audiology, one of these tests is high frequency pure tone audiometry (introduced a few decades ago) which yields information relative to pathologies affecting the basal cochlea and complements data resulting from conventional audiometry. These utilities of the contrast sensitivity test and of pure tone audiometry derive from the facts that Fourier components constitute the basis to synthesize stimuli present at the entrance of the visual and auditory systems; that these systems responses depend on frequencies and that the patient's psychophysical state affects frequency processing. The frequency of interest in the former test is the effective spatial frequency (inverse of the angle subtended at the eye by a cycle of a sinusoidal grating and measured in cycles/degree) and, in the latter, the temporal frequency (measured in cycles/sec). Both tests have similar duration and consist in determining the patient's threshold (corresponding to the inverse multiplicative of the contrast or to the inverse additive of the sound intensity level) for each harmonic stimulus present at the system entrance (sinusoidal grating or pure tone sound). 
In this article the frequencies, standard normality curves and abnormal threshold shifts inherent to the contrast sensitivity test (which for simplicity could be termed "visionmetry") and to pure tone audiometry (also termed the auditory sensitivity test) are analyzed, with the aim of disseminating their ability to supply early information associated with pathologies not solely related to the visual and auditory systems, respectively.
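The spatial frequency unit central to the contrast sensitivity test, cycles/degree, follows directly from the geometry stated above: one cycle of the grating subtends a visual angle determined by its physical width and the viewing distance. A small sketch of that conversion, with illustrative (hypothetical) values:

```python
# Sketch: spatial frequency in cycles/degree from the physical width of one
# grating cycle and the viewing distance. The 0.5 cm cycle and 57.3 cm
# viewing distance below are illustrative assumptions.
import math

def cycles_per_degree(cycle_width_cm, viewing_distance_cm):
    """One cycle subtends 2*atan(width / (2*distance)) degrees at the eye;
    spatial frequency is the reciprocal of that angle."""
    angle_deg = math.degrees(
        2 * math.atan(cycle_width_cm / (2 * viewing_distance_cm)))
    return 1.0 / angle_deg

# At ~57.3 cm (where 1 cm subtends ~1 degree), a 0.5 cm cycle subtends
# ~0.5 degrees, i.e. a spatial frequency of ~2 cycles/degree.
cpd = cycles_per_degree(0.5, 57.3)
```

This is why the same printed grating presents a higher spatial frequency when viewed from farther away: the subtended angle shrinks with distance, so cycles/degree grows.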

  4. Context Effects on Facial Affect Recognition in Schizophrenia and Autism: Behavioral and Eye-Tracking Evidence.

    PubMed

    Sasson, Noah J; Pinkham, Amy E; Weittenhiller, Lauren P; Faso, Daniel J; Simpson, Claire

    2016-05-01

    Although Schizophrenia (SCZ) and Autism Spectrum Disorder (ASD) share impairments in emotion recognition, the mechanisms underlying these impairments may differ. The current study used the novel "Emotions in Context" task to examine how the interpretation and visual inspection of facial affect is modulated by congruent and incongruent emotional contexts in SCZ and ASD. Both adults with SCZ (n = 44) and those with ASD (n = 21) exhibited reduced affect recognition relative to typically-developing (TD) controls (n = 39) when faces were integrated within broader emotional scenes but not when they were presented in isolation, underscoring the importance of using stimuli that better approximate real-world contexts. Additionally, viewing faces within congruent emotional scenes improved accuracy and visual attention to the face for controls more so than the clinical groups, suggesting that individuals with SCZ and ASD may not benefit from the presence of complementary emotional information as readily as controls. Despite these similarities, important distinctions between SCZ and ASD were found. In every condition, IQ was related to emotion-recognition accuracy for the SCZ group but not for the ASD or TD groups. Further, only the ASD group failed to increase their visual attention to faces in incongruent emotional scenes, suggesting a lower reliance on facial information within ambiguous emotional contexts relative to congruent ones. Collectively, these findings highlight both shared and distinct social cognitive processes in SCZ and ASD that may contribute to their characteristic social disabilities. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  5. GeoCrystal: graphic-interactive access to geodata archives

    NASA Astrophysics Data System (ADS)

    Goebel, Stefan; Haist, Joerg; Jasnoch, Uwe

    2002-03-01

    Recently, a great deal of effort has been spent establishing information systems and global infrastructures that enable both data suppliers and users to describe (e.g., for eCommerce, via metadata) as well as to find appropriate data. Examples are metadata information systems, online shops and portals for geodata. The main disadvantage of existing approaches is the insufficiency of the methods and mechanisms that lead users to (e.g. spatial) data archives. This affects usability and personalization in general, as well as visual feedback techniques in the different steps of the information retrieval process. Several approaches aim at improving graphical user interfaces by using intuitive metaphors, but only some of them offer 3D interfaces in the form of information landscapes or geographic result scenes in the context of information systems for geodata. This paper presents GeoCrystal, whose basic idea is to adopt Venn diagrams to compose complex queries and to visualize search results in a 3D information and navigation space for geodata. These concepts are enhanced with spatial metaphors and 3D information landscapes (a library for geodata) in which users can specify searches for appropriate geodata and can interact graphically with the search results (book metaphor).
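
    The Venn-diagram query composition described above can be sketched with plain set operations; the record names, tags and `query` helper below are hypothetical, not GeoCrystal's actual data model or API:

```python
# Hypothetical geodata metadata records tagged with keywords.
records = {
    "landuse_2001": {"raster", "germany", "landuse"},
    "rivers_de":    {"vector", "germany", "hydrology"},
    "elevation_eu": {"raster", "europe", "terrain"},
}

def query(tags_all=frozenset(), tags_any=frozenset()):
    """Venn-diagram-style composition: AND as a subset test on required
    tags, OR as a non-empty overlap with the alternative tags."""
    return {
        name for name, tags in records.items()
        if set(tags_all) <= tags and (not tags_any or tags & set(tags_any))
    }

# Raster datasets covering either Germany or Europe:
hits = query(tags_all={"raster"}, tags_any={"germany", "europe"})
```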

  6. Is the Recall of Verbal-Spatial Information from Working Memory Affected by Symptoms of ADHD?

    ERIC Educational Resources Information Center

    Caterino, Linda C.; Verdi, Michael P.

    2012-01-01

    Objective: The Kulhavy model for text learning using organized spatial displays proposes that learning will be increased when participants view visual images prior to related text. In contrast to previous studies, this study also included students who exhibited symptoms of ADHD. Method: Participants were presented with either a map-text or…

  7. Abstract Conceptual Feature Ratings Predict Gaze within Written Word Arrays: Evidence from a Visual Wor(l)d Paradigm

    ERIC Educational Resources Information Center

    Primativo, Silvia; Reilly, Jamie; Crutch, Sebastian J

    2017-01-01

    The Abstract Conceptual Feature (ACF) framework predicts that word meaning is represented within a high-dimensional semantic space bounded by weighted contributions of perceptual, affective, and encyclopedic information. The ACF, like latent semantic analysis, is amenable to distance metrics between any two words. We applied predictions of the ACF…
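
    The distance metrics the ACF framework supports can be illustrated with a small sketch; the three feature dimensions and the ratings below are invented for illustration and are not actual ACF norms:

```python
import math

# Hypothetical ratings on (perceptual, affective, encyclopedic) dimensions;
# real ACF norms use many more weighted features.
truth   = (2.1, 4.0, 5.5)
justice = (1.8, 4.2, 5.9)
chair   = (6.5, 1.2, 2.0)

def acf_distance(word_a, word_b):
    """Euclidean distance between two words' feature-rating vectors."""
    return math.dist(word_a, word_b)

# Two abstract words sit closer together in the space than an abstract
# word and a concrete one.
```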

  8. Search space mapping: getting a picture of coherent laser control.

    PubMed

    Shane, Janelle C; Lozovoy, Vadim V; Dantus, Marcos

    2006-10-12

    Search space mapping is a method for quickly visualizing the experimental parameters that can affect the outcome of a coherent control experiment. We demonstrate experimental search space mapping for the selective fragmentation and ionization of para-nitrotoluene and show how this method allows us to gather information about the dominant trends behind our achieved control.

  9. Optimizing text for an individual's visual system: The contribution of visual crowding to reading difficulties.

    PubMed

    Joo, Sung Jun; White, Alex L; Strodtman, Douglas J; Yeatman, Jason D

    2018-06-01

    Reading is a complex process that involves low-level visual processing, phonological processing, and higher-level semantic processing. Given that skilled reading requires integrating information among these different systems, it is likely that reading difficulty, known as dyslexia, can emerge from impairments at any stage of the reading circuitry. To understand contributing factors to reading difficulties within individuals, it is necessary to diagnose the function of each component of the reading circuitry. Here, we investigated whether adults with dyslexia who have impairments in visual processing respond to a visual manipulation specifically targeting their impairment. We collected psychophysical measures of visual crowding and tested how each individual's reading performance was affected by increased text-spacing, a manipulation designed to alleviate severe crowding. Critically, we identified a sub-group of individuals with dyslexia showing elevated crowding and found that these individuals read faster when text was rendered with increased letter-, word- and line-spacing. Our findings point to a subtype of dyslexia involving elevated crowding and demonstrate that individuals benefit from interventions personalized to their specific impairments. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. The integration of temporally shifted visual feedback in a synchronization task: The role of perceptual stability in a visuo-proprioceptive conflict situation.

    PubMed

    Ceux, Tanja; Montagne, Gilles; Buekers, Martinus J

    2010-12-01

    The present study examined whether the beneficial role of coherently grouped visual motion structures for performing complex (interlimb) coordination patterns can be generalized to synchronization behavior in a visuo-proprioceptive conflict situation. To achieve this goal, 17 participants had to synchronize a self-moved circle, representing the arm movement, with a visual target signal corresponding to five temporally shifted visual feedback conditions (0%, 25%, 50%, 75%, and 100% of the target cycle duration) in three synchronization modes (in-phase, anti-phase, and intermediate). The results showed that the perception of a newly generated perceptual Gestalt between the visual feedback of the arm and the target signal facilitated the synchronization performance in the preferred in-phase synchronization mode in contrast to the less stable anti-phase and intermediate mode. Our findings suggest that the complexity of the synchronization mode defines to what extent the visual and/or proprioceptive information source affects the synchronization performance in the present unimanual synchronization task. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. Path integration: effect of curved path complexity and sensory system on blindfolded walking.

    PubMed

    Koutakis, Panagiotis; Mukherjee, Mukul; Vallabhajosula, Srikant; Blanke, Daniel J; Stergiou, Nicholas

    2013-02-01

    Path integration refers to the ability to integrate continuous information of the direction and distance traveled by the system relative to the origin. Previous studies have investigated path integration through blindfolded walking along simple paths such as straight lines and triangles. However, limited knowledge exists regarding the role of path complexity in path integration. Moreover, little is known about how information from different sensory input systems (like vision and proprioception) contributes to accurate path integration. The purpose of the current study was to investigate how sensory information and curved path complexity affect path integration. Forty blindfolded participants had to accurately reproduce a curved path and return to the origin. They were divided into four groups that differed in the curved path, circle (simple) or figure-eight (complex), and received either visual (previously seen) or proprioceptive (previously guided) information about the path before they reproduced it. The dependent variables used were average trajectory error, walking speed, and distance traveled. The results indicated that both the groups that walked on a circular path and the groups that received visual information reproduced the path more accurately. Moreover, the performance of the group that received proprioceptive information and later walked on a figure-eight path was less accurate than that of the corresponding circular group. The groups that received visual information also walked faster than those that received proprioceptive information. The results highlight the roles of the different sensory inputs in blindfolded walking for path integration. Copyright © 2012 Elsevier B.V. All rights reserved.
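
    One plausible reading of the "average trajectory error" measure above is the mean point-to-point distance between the reproduced and target paths; whether the study computed it exactly this way is an assumption, and the sample points below are invented:

```python
import math

def average_trajectory_error(reproduced, target):
    """Mean point-to-point Euclidean distance between two equal-length
    (x, y) paths, one plausible definition of average trajectory error."""
    assert len(reproduced) == len(target)
    return sum(math.dist(p, q) for p, q in zip(reproduced, target)) / len(target)

# Illustrative paths in metres.
target_path = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
walked_path = [(0.0, 0.2), (1.1, 0.0), (0.9, 1.1)]
err = average_trajectory_error(walked_path, target_path)
```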

  12. A comparison of visuomotor cue integration strategies for object placement and prehension.

    PubMed

    Greenwald, Hal S; Knill, David C

    2009-01-01

    Visual cue integration strategies are known to depend on cue reliability and how rapidly the visual system processes incoming information. We investigated whether these strategies also depend on differences in the information demands for different natural tasks. Using two common goal-oriented tasks, prehension and object placement, we determined whether monocular and binocular information influence estimates of three-dimensional (3D) orientation differently depending on task demands. Both tasks rely on accurate 3D orientation estimates, but 3D position is potentially more important for grasping. Subjects placed an object on or picked up a disc in a virtual environment. On some trials, the monocular cues (aspect ratio and texture compression) and binocular cues (e.g., binocular disparity) suggested slightly different 3D orientations for the disc; these conflicts either were present upon initial stimulus presentation or were introduced after movement initiation, which allowed us to quantify how information from the cues accumulated over time. We analyzed the time-varying orientations of subjects' fingers in the grasping task and those of the object in the object placement task to quantify how different visual cues influenced motor control. In the first experiment, different subjects performed each task, and those performing the grasping task relied on binocular information more when orienting their hands than those performing the object placement task. When subjects in the second experiment performed both tasks in interleaved sessions, binocular cues were still more influential during grasping than object placement, and the different cue integration strategies observed for each task in isolation were maintained. In both experiments, the temporal analyses showed that subjects processed binocular information faster than monocular information, but task demands did not affect the time course of cue processing. 
How one uses visual cues for motor control depends on the task being performed, although how quickly the information is processed appears to be task invariant.
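
    The reliability-dependent cue integration described above is commonly modeled as inverse-variance weighting of the cue estimates. The sketch below shows that standard ideal-observer model, not necessarily the analysis used in this study, and the slant values and variances are invented:

```python
def combine_cues(estimates, variances):
    """Inverse-variance (reliability) weighted average of cue estimates:
    the more reliable a cue, the more it pulls the fused estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# A binocular slant estimate of 30 deg (variance 4) fused with a monocular
# estimate of 38 deg (variance 16): the result lies closer to the reliable cue.
fused = combine_cues([30.0, 38.0], [4.0, 16.0])
```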

  13. The Affective Bases of Risk Perception: Negative Feelings and Stress Mediate the Relationship between Mental Imagery and Risk Perception.

    PubMed

    Sobkow, Agata; Traczyk, Jakub; Zaleskiewicz, Tomasz

    2016-01-01

    Recent research has documented that affect plays a crucial role in risk perception. When no information about numerical risk estimates is available (e.g., probability of loss or magnitude of consequences), people may rely on positive and negative affect toward perceived risk. However, determinants of affective reactions to risks are poorly understood. In a series of three experiments, we addressed the question of whether and to what degree mental imagery eliciting negative affect and stress influences risk perception. In each experiment, participants were instructed to visualize consequences of risk taking and to rate riskiness. In Experiment 1, participants who imagined negative risk consequences reported more negative affect and perceived risk as higher compared to the control condition. In Experiment 2, we found that this effect was driven by affect elicited by mental imagery rather than its vividness and intensity. In this study, imagining positive risk consequences led to lower perceived risk than visualizing negative risk consequences. Finally, we tested the hypothesis that negative affect related to higher perceived risk was caused by negative feelings of stress. In Experiment 3, we introduced risk-irrelevant stress to show that participants in the stress condition rated perceived risk as higher in comparison to the control condition. This experiment showed that higher ratings of perceived risk were influenced by psychological stress. Taken together, our results demonstrate that affect-laden mental imagery dramatically changes risk perception through negative affect (i.e., psychological stress).

  14. The Affective Bases of Risk Perception: Negative Feelings and Stress Mediate the Relationship between Mental Imagery and Risk Perception

    PubMed Central

    Sobkow, Agata; Traczyk, Jakub; Zaleskiewicz, Tomasz

    2016-01-01

    Recent research has documented that affect plays a crucial role in risk perception. When no information about numerical risk estimates is available (e.g., probability of loss or magnitude of consequences), people may rely on positive and negative affect toward perceived risk. However, determinants of affective reactions to risks are poorly understood. In a series of three experiments, we addressed the question of whether and to what degree mental imagery eliciting negative affect and stress influences risk perception. In each experiment, participants were instructed to visualize consequences of risk taking and to rate riskiness. In Experiment 1, participants who imagined negative risk consequences reported more negative affect and perceived risk as higher compared to the control condition. In Experiment 2, we found that this effect was driven by affect elicited by mental imagery rather than its vividness and intensity. In this study, imagining positive risk consequences led to lower perceived risk than visualizing negative risk consequences. Finally, we tested the hypothesis that negative affect related to higher perceived risk was caused by negative feelings of stress. In Experiment 3, we introduced risk-irrelevant stress to show that participants in the stress condition rated perceived risk as higher in comparison to the control condition. This experiment showed that higher ratings of perceived risk were influenced by psychological stress. Taken together, our results demonstrate that affect-laden mental imagery dramatically changes risk perception through negative affect (i.e., psychological stress). PMID:27445901

  15. Object-based Encoding in Visual Working Memory: Evidence from Memory-driven Attentional Capture.

    PubMed

    Gao, Zaifeng; Yu, Shixian; Zhu, Chengfeng; Shui, Rende; Weng, Xuchu; Li, Peng; Shen, Mowei

    2016-03-09

    Visual working memory (VWM) adopts a specific manner of object-based encoding (OBE) to extract perceptual information: Whenever one feature-dimension is selected for entry into VWM, the others are also extracted. Currently, most studies revealing OBE have probed an 'irrelevant-change distracting effect', where changes of irrelevant-features dramatically affected the performance of the target feature. However, the existence of irrelevant-feature change may affect participants' processing manner, leading to a false-positive result. The current study conducted a strict examination of OBE in VWM, by probing whether irrelevant-features guided the deployment of attention in visual search. The participants memorized an object's colour yet ignored shape and concurrently performed a visual-search task. They searched for a target line among distractor lines, each embedded within a different object. One object in the search display could match the shape, colour, or both dimensions of the memory item, but this object never contained the target line. Relative to a neutral baseline, where there was no match between the memory and search displays, search time was significantly prolonged in all match conditions, regardless of whether the memory item was displayed for 100 or 1000 ms. These results suggest that task-irrelevant shape was extracted into VWM, supporting OBE in VWM.

  16. Visual Information Processing Based on Spatial Filters Constrained by Biological Data.

    DTIC Science & Technology

    1978-12-01

    was provided by Pantle and Sekuler (1968). They found that the detection of gratings was affected most by adapting (see Section 6.1.1) to square...evidence for certain eye scans being directed by spatial information in filtered images is given. Eye scan paths of a portrait of a young girl (Figure 08)...multistable objects to more complex objects such as the man-girl figure of Fisher (1968), decision boundaries that are a natural concomitant to any pattern

  17. Causal evidence for frontal involvement in memory target maintenance by posterior brain areas during distracter interference of visual working memory

    PubMed Central

    Feredoes, Eva; Heinen, Klaartje; Weiskopf, Nikolaus; Ruff, Christian; Driver, Jon

    2011-01-01

    Dorsolateral prefrontal cortex (DLPFC) is recruited during visual working memory (WM) when relevant information must be maintained in the presence of distracting information. The mechanism by which DLPFC might ensure successful maintenance of the contents of WM is, however, unclear; it might enhance neural maintenance of memory targets or suppress processing of distracters. To adjudicate between these possibilities, we applied time-locked transcranial magnetic stimulation (TMS) during functional MRI, an approach that permits causal assessment of a stimulated brain region's influence on connected brain regions, and evaluated how this influence may change under different task conditions. Participants performed a visual WM task requiring retention of visual stimuli (faces or houses) across a delay during which visual distracters could be present or absent. When distracters were present, they were always from the opposite stimulus category, so that targets and distracters were represented in distinct posterior cortical areas. We then measured whether DLPFC-TMS, administered in the delay at the time point when distracters could appear, would modulate posterior regions representing memory targets or distracters. We found that DLPFC-TMS influenced posterior areas only when distracters were present and, critically, that this influence consisted of increased activity in regions representing the current memory targets. DLPFC-TMS did not affect regions representing current distracters. These results provide a new line of causal evidence for a top-down DLPFC-based control mechanism that promotes successful maintenance of relevant information in WM in the presence of distraction. PMID:21987824

  18. Haltere mechanosensory influence on tethered flight behavior in Drosophila.

    PubMed

    Mureli, Shwetha; Fox, Jessica L

    2015-08-01

    In flies, mechanosensory information from modified hindwings known as halteres is combined with visual information for wing-steering behavior. Haltere input is necessary for free flight, making it difficult to study the effects of haltere ablation under natural flight conditions. We thus used tethered Drosophila melanogaster flies to examine the relationship between halteres and the visual system, using wide-field motion or moving figures as visual stimuli. Haltere input was altered by surgically decreasing its mass, or by removing it entirely. Haltere removal does not affect the flies' ability to flap or steer their wings, but it does increase the temporal frequency at which they modify their wingbeat amplitude. Reducing the haltere mass decreases the optomotor reflex response to wide-field motion, and removing the haltere entirely does not further decrease the response. Decreasing the mass does not attenuate the response to figure motion, but removing the entire haltere does attenuate the response. When flies are allowed to control a visual stimulus in closed-loop conditions, haltereless flies fixate figures with the same acuity as intact flies, but cannot stabilize a wide-field stimulus as accurately as intact flies can. These manipulations suggest that the haltere mass is influential in wide-field stabilization, but less so in figure tracking. In both figure and wide-field experiments, we observe responses to visual motion with and without halteres, indicating that during tethered flight, intact halteres are not strictly necessary for visually guided wing-steering responses. However, the haltere feedback loop may operate in a context-dependent way to modulate responses to visual motion. © 2015. Published by The Company of Biologists Ltd.

  19. Eye Movements Affect Postural Control in Young and Older Females

    PubMed Central

    Thomas, Neil M.; Bampouras, Theodoros M.; Donovan, Tim; Dewhurst, Susan

    2016-01-01

    Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli shows how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions. PMID:27695412

  20. Eye Movements Affect Postural Control in Young and Older Females.

    PubMed

    Thomas, Neil M; Bampouras, Theodoros M; Donovan, Tim; Dewhurst, Susan

    2016-01-01

    Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli shows how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions.
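
    Force-platform postural sway, as assessed above, is often summarized as the total centre-of-pressure (CoP) path length. The sketch below shows that common measure; whether this study used this exact metric is an assumption, and the samples are invented:

```python
import math

def sway_path_length(cop_samples):
    """Total length of the centre-of-pressure trajectory: the sum of
    distances between consecutive force-platform samples."""
    return sum(math.dist(a, b) for a, b in zip(cop_samples, cop_samples[1:]))

# Three invented CoP samples in cm: segment lengths 0.5 and 1.0.
cop = [(0.0, 0.0), (0.3, 0.4), (0.3, 1.4)]
total = sway_path_length(cop)
```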

  1. Auditory and Visual Interhemispheric Communication in Musicians and Non-Musicians

    PubMed Central

    Woelfle, Rebecca; Grahn, Jessica A.

    2013-01-01

    The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer. PMID:24386382

  2. [Conception and Content Validation of a Questionnaire Relating to the Potential Need for Information of Visually Impaired Persons with Regard to Services and Contact Persons].

    PubMed

    Hahn, U; Hechler, T; Witt, U; Krummenauer, F

    2015-12-01

    A questionnaire was drafted to identify the needs of visually impaired persons and to optimize their access to non-medical support and services. Subjects had to rate a list of 15 everyday activities that are typically affected by visual impairment (for example, being able to orient themselves in the home environment), by indicating the degree to which they perceive each activity to be affected, using a four-stage scale. They had to evaluate these aspects by means of a relevance assessment. The needs profile derived from this is then correlated with individualized information for assistance and support. The questionnaire shall be made available for use by subjects through advisers in some ophthalmic practices and via the internet. The validity of the content of the proposed tool was evaluated on the basis of a survey of 59 experts in the fields of medical, optical and psychological care and of persons involved in training initiatives. The experts were asked to rate the activities by relevance and clarity of the wording and to propose methods to further develop and optimize the content. The validity of the content was quantified according to a process adopted in the literature, based on the parameters Interrater Agreement (IRA) and Content Validity Index (CVI). The results of all responses (n = 19) and the sub-group analysis suggest that the questionnaire adequately reflects the potential needs profile of visually impaired persons. Overall, there was at least 80% agreement among the 19 experts for 93% of the proposed parameterisation of the activities relating to the relevance and clarity of the wording. Individual proposals for optimization of the design of the questionnaire were adopted. Georg Thieme Verlag KG Stuttgart · New York.

  3. Concept mapping One-Carbon Metabolism to model future ontologies for nutrient-gene-phenotype interactions.

    PubMed

    Joslin, A C; Green, R; German, J B; Lange, M C

    2014-09-01

    Advances in the development of bioinformatic tools continue to improve investigators' ability to interrogate, organize, and derive knowledge from large amounts of heterogeneous information. These tools often require advanced technical skills not possessed by life scientists. User-friendly, low-barrier-to-entry methods of visualizing nutrigenomics information are yet to be developed. We utilized concept mapping software from the Institute for Human and Machine Cognition to create a conceptual model of diet and health-related data that provides a foundation for future nutrigenomics ontologies describing published nutrient-gene/polymorphism-phenotype data. In this model, maps containing phenotype, nutrient, gene product, and genetic polymorphism interactions are visualized as triples of two concepts linked together by a linking phrase. These triples, or "knowledge propositions," contextualize aggregated data and information into easy-to-read knowledge maps. Maps of these triples enable visualization of genes spanning the One-Carbon Metabolism (OCM) pathway, their sequence variants, and multiple literature-mined associations including concepts relevant to nutrition, phenotypes, and health. The concept map development process documents the incongruity of information derived from pathway databases versus literature resources. This conceptual model highlights the importance of incorporating information about genes in upstream pathways that provide substrates, as well as downstream pathways that utilize products of the pathway under investigation, in this case OCM. Other genes and their polymorphisms, such as TCN2 and FUT2, although not directly involved in OCM, potentially alter OCM pathway functionality. These upstream gene products regulate substrates such as B12. Constellations of polymorphisms affecting the functionality of genes along OCM, together with substrate and cofactor availability, may impact resultant phenotypes. 
These conceptual maps provide a foundational framework for development of nutrient-gene/polymorphism-phenotype ontologies and systems visualization.
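    As a rough illustration (our own sketch, not code or data from the study), a "knowledge proposition" of the kind described above — two concepts joined by a linking phrase — could be represented as a simple triple, and a knowledge map as a grouping of triples by subject concept. The example propositions below are hypothetical:

```python
from collections import defaultdict
from dataclasses import dataclass

# Sketch of a concept-map "knowledge proposition": two concepts
# joined by a linking phrase, as described in the abstract above.
@dataclass(frozen=True)
class Proposition:
    subject: str  # e.g. a gene product or polymorphism
    link: str     # the linking phrase
    obj: str      # e.g. a nutrient or phenotype

# Illustrative triples only (hypothetical, not mined from the literature)
triples = [
    Proposition("MTHFR C677T", "reduces activity of", "MTHFR enzyme"),
    Proposition("TCN2", "transports", "vitamin B12"),
]

# A minimal "knowledge map" view: group propositions by subject concept
knowledge_map = defaultdict(list)
for t in triples:
    knowledge_map[t.subject].append((t.link, t.obj))

print(knowledge_map["TCN2"])  # [('transports', 'vitamin B12')]
```

    Aggregating many such triples per concept is what lets a map show, for example, every phenotype association mined for a single polymorphism.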

  4. The case for visual analytics of arsenic concentrations in foods.

    PubMed

    Johnson, Matilda O; Cohly, Hari H P; Isokpehi, Raphael D; Awofolu, Omotayo R

    2010-05-01

    Arsenic is a naturally occurring toxic metal and its presence in food could be a potential risk to the health of both humans and animals. Prolonged ingestion of arsenic-contaminated water may result in manifestations of toxicity in all systems of the body. Visual Analytics is a multidisciplinary field defined as the science of analytical reasoning facilitated by interactive visual interfaces. Arsenic concentrations vary across foods, making it impractical to set a regulatory limit for each food. This review article presents a case for the use of visual analytics approaches to provide comparative assessment of arsenic in various foods. The topics covered include (i) metabolism of arsenic in the human body; (ii) arsenic concentrations in various foods; (iii) factors affecting arsenic uptake in plants; (iv) introduction to visual analytics; and (v) benefits of visual analytics for comparative assessment of arsenic concentration in foods. Visual analytics can provide an information superstructure of arsenic in various foods to permit insightful comparative risk assessment of the diverse and continually expanding data on arsenic in food groups in the context of country of study or origin, year of study, method of analysis and arsenic species.

  5. The Case for Visual Analytics of Arsenic Concentrations in Foods

    PubMed Central

    Johnson, Matilda O.; Cohly, Hari H.P.; Isokpehi, Raphael D.; Awofolu, Omotayo R.

    2010-01-01

    Arsenic is a naturally occurring toxic metal and its presence in food could be a potential risk to the health of both humans and animals. Prolonged ingestion of arsenic-contaminated water may result in manifestations of toxicity in all systems of the body. Visual Analytics is a multidisciplinary field defined as the science of analytical reasoning facilitated by interactive visual interfaces. Arsenic concentrations vary across foods, making it impractical to set a regulatory limit for each food. This review article presents a case for the use of visual analytics approaches to provide comparative assessment of arsenic in various foods. The topics covered include (i) metabolism of arsenic in the human body; (ii) arsenic concentrations in various foods; (iii) factors affecting arsenic uptake in plants; (iv) introduction to visual analytics; and (v) benefits of visual analytics for comparative assessment of arsenic concentration in foods. Visual analytics can provide an information superstructure of arsenic in various foods to permit insightful comparative risk assessment of the diverse and continually expanding data on arsenic in food groups in the context of country of study or origin, year of study, method of analysis and arsenic species. PMID:20623005

  6. A visual metaphor describing neural dynamics in schizophrenia.

    PubMed

    van Beveren, Nico J M; de Haan, Lieuwe

    2008-07-09

    In many scientific disciplines the use of a metaphor as a heuristic aid is not uncommon. A well-known example in somatic medicine is the 'defense army metaphor' used to characterize the immune system. In fact, probably a large part of the everyday work of doctors consists of 'translating' scientific and clinical information (i.e. causes of disease, percentage of success versus risk of side-effects) into information tailored to the needs and capacities of the individual patient. The ability to do so effectively is at least partly what makes a clinician a good communicator. Schizophrenia is a severe psychiatric disorder which affects approximately 1% of the population. Over the last two decades a large amount of molecular-biological, imaging and genetic data has been accumulated regarding the biological underpinnings of schizophrenia. However, it remains difficult to understand how the characteristic symptoms of schizophrenia such as hallucinations and delusions are related to disturbances at the molecular-biological level. In general, psychiatry seems to lack a conceptual framework with sufficient explanatory power to link the mental and molecular-biological domains. Here, we present an essay-like study in which we propose to use visualized concepts stemming from the theory of dynamical complex systems as a 'visual metaphor' to bridge the mental and molecular-biological domains in schizophrenia. We first describe a computer model of neural information processing and show how the information processing in this model can be visualized using concepts from the theory of complex systems. We then describe two computer models which have been used to investigate the primary theory on schizophrenia, the neurodevelopmental model, and show how disturbed information processing in these two computer models can be presented in terms of the visual metaphor previously described. Finally, we describe the effects of dopamine neuromodulation, disturbances of which have been frequently described in schizophrenia, in terms of the same visual metaphor. The conceptual framework and metaphor described offer a heuristic tool for understanding the relationship between the mental and molecular-biological domains in an intuitive way. The concepts we present may serve to facilitate communication between researchers, clinicians and patients.

  7. Breaking the cycle: extending the persistent pain cycle diagram using an affective pictorial metaphor.

    PubMed

    Stones, Catherine; Cole, Frances

    2014-01-01

    The persistent pain cycle diagram is a common feature of pain management literature, but how is it designed, and is it fulfilling its potential in terms of providing information to motivate behavioral change? This article examines online persistent pain diagrams and critically discusses their purpose and design approach. Drawing on broad information design theories by Karabeg and on particular approaches to dialogic visual communication in business, this article argues the need for motivational as well as cognitive diagrams. It also outlines the design of a new persistent pain cycle that is currently being used with chronic pain patients in NHS Bradford, UK. This new cycle adopts and then visually extends an established verbal metaphor within acceptance and commitment therapy (ACT) in an attempt to increase the motivational aspects of the vicious circle diagram format.

  8. Alterations in visual cortical activation and connectivity with prefrontal cortex during working memory updating in major depressive disorder.

    PubMed

    Le, Thang M; Borghi, John A; Kujawa, Autumn J; Klein, Daniel N; Leung, Hoi-Chung

    2017-01-01

    The present study examined the impacts of major depressive disorder (MDD) on visual and prefrontal cortical activity as well as their connectivity during visual working memory updating and related them to the core clinical features of the disorder. Impairment in working memory updating is typically associated with the retention of irrelevant negative information which can lead to persistent depressive mood and abnormal affect. However, performance deficits have been observed in MDD on tasks involving little or no demand on emotion processing, suggesting dysfunctions may also occur at the more basic level of information processing. Yet, it is unclear how various regions in the visual working memory circuit contribute to behavioral changes in MDD. We acquired functional magnetic resonance imaging data from 18 unmedicated participants with MDD and 21 age-matched healthy controls (CTL) while they performed a visual delayed recognition task with neutral faces and scenes as task stimuli. Selective working memory updating was manipulated by inserting a cue in the delay period to indicate which one or both of the two memorized stimuli (a face and a scene) would remain relevant for the recognition test. Our results revealed several key findings. Relative to the CTL group, the MDD group showed weaker postcue activations in visual association areas during selective maintenance of face and scene working memory. Across the MDD subjects, greater rumination and depressive symptoms were associated with more persistent activation and connectivity related to no-longer-relevant task information. Classification of postcue spatial activation patterns of the scene-related areas was also less consistent in the MDD subjects compared to the healthy controls. Such abnormalities appeared to result from a lack of updating effects in postcue functional connectivity between prefrontal and scene-related areas in the MDD group. 
In sum, disrupted working memory updating in MDD was revealed by alterations in activity patterns of the visual association areas, their connectivity with the prefrontal cortex, and their relationship with core clinical characteristics. These results highlight the role of information updating deficits in the cognitive control and symptomatology of depression.

  9. Steady-state visual evoked potentials as a research tool in social affective neuroscience

    PubMed Central

    Wieser, Matthias J.; Miskovic, Vladimir; Keil, Andreas

    2017-01-01

    Like many other primates, humans place a high premium on social information transmission and processing. One important aspect of this information concerns the emotional state of other individuals, conveyed by distinct visual cues such as facial expressions, overt actions, or by cues extracted from the situational context. A rich body of theoretical and empirical work has demonstrated that these socio-emotional cues are processed by the human visual system in a prioritized fashion, in the service of optimizing social behavior. Furthermore, socio-emotional perception is highly dependent on situational contexts and previous experience. Here, we review current issues in this area of research and discuss the utility of the steady-state visual evoked potential (ssVEP) technique for addressing key empirical questions. Methodological advantages and caveats are discussed with particular regard to quantifying time-varying competition among multiple perceptual objects, trial-by-trial analysis of visual cortical activation, functional connectivity, and the control of low-level stimulus features. Studies on facial expression and emotional scene processing are summarized, with an emphasis on viewing faces and other social cues in emotional contexts, or when competing with each other. Further, because the ssVEP technique can be readily accommodated to studying the viewing of complex scenes with multiple elements, it enables researchers to advance theoretical models of socio-emotional perception, based on complex, quasi-naturalistic viewing situations. PMID:27699794

  10. Perceptual learning modifies untrained pursuit eye movements.

    PubMed

    Szpiro, Sarit F A; Spering, Miriam; Carrasco, Marisa

    2014-07-07

    Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response. © 2014 ARVO.

  11. Perceptual learning modifies untrained pursuit eye movements

    PubMed Central

    Szpiro, Sarit F. A.; Spering, Miriam; Carrasco, Marisa

    2014-01-01

    Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response. PMID:25002412

  12. Visualizing disaster attitudes resulting from terrorist activities.

    PubMed

    Khalid, Halimahtun M; Helander, Martin G; Hood, Nilwan A

    2013-09-01

    The purpose of this study was to analyze people's attitudes to disasters by investigating how people feel, behave and think during disasters. We focused on disasters induced by humans, such as terrorist attacks. Two types of textual information were collected: from Internet blogs and from research papers. The analysis enabled forecasting of attitudes for the design of a proactive disaster advisory scheme. Text was analyzed using a text mining tool, Leximancer. The outcome of this analysis revealed core themes and concepts in the text concerning people's attitudes. The themes and concepts were sorted into three broad categories: Affect, Behaviour, and Cognition (ABC), and the data were visualized in semantic maps. The maps reveal several knowledge pathways of ABC for developing attitudinal ontologies, which describe the relations between affect, behaviour and cognition, and the sequence in which they develop. Clearly, terrorist attacks induced trauma and people became highly vulnerable. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
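    As a toy sketch of the sorting step described above (our own illustration, not the authors' Leximancer pipeline; the concept lists are hypothetical), assigning mined concepts to the Affect/Behaviour/Cognition categories might look like:

```python
# Illustrative mapping of text-mined concepts into the
# Affect/Behaviour/Cognition (ABC) categories described above.
# The lexicon below is hypothetical, not Leximancer output.
ABC_LEXICON = {
    "affect":    {"fear", "trauma", "anger", "grief"},
    "behaviour": {"evacuate", "flee", "help", "donate"},
    "cognition": {"blame", "risk", "threat", "memory"},
}

def categorize(concepts):
    """Sort mined concepts into ABC categories; unknowns go to 'other'."""
    result = {"affect": [], "behaviour": [], "cognition": [], "other": []}
    for concept in concepts:
        for category, lexicon in ABC_LEXICON.items():
            if concept in lexicon:
                result[category].append(concept)
                break
        else:
            result["other"].append(concept)
    return result

print(categorize(["fear", "evacuate", "risk", "rope"]))
# {'affect': ['fear'], 'behaviour': ['evacuate'], 'cognition': ['risk'], 'other': ['rope']}
```

    Real concept categorization would of course come from the text-mining tool's thematic clustering rather than a fixed lexicon; the sketch only shows the shape of the ABC sorting.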

  13. When size matters: attention affects performance by contrast or response gain.

    PubMed

    Herrmann, Katrin; Montaser-Kouhsari, Leila; Carrasco, Marisa; Heeger, David J

    2010-12-01

    Covert attention, the selective processing of visual information in the absence of eye movements, improves behavioral performance. We found that attention, both exogenous (involuntary) and endogenous (voluntary), can affect performance by contrast or response gain changes, depending on the stimulus size and the relative size of the attention field. These two variables were manipulated in a cueing task while stimulus contrast was varied. We observed a change in behavioral performance consonant with a change in contrast gain for small stimuli paired with spatial uncertainty and a change in response gain for large stimuli presented at one location (no uncertainty) and surrounded by irrelevant flanking distracters. A complementary neuroimaging experiment revealed that observers' attention fields were wider with than without spatial uncertainty. Our results support important predictions of the normalization model of attention and reconcile previous, seemingly contradictory findings on the effects of visual attention.

  14. Colour-induced relationship between affect and reaching kinematics during a goal-directed aiming task.

    PubMed

    Williams, Camille K; Grierson, Lawrence E M; Carnahan, Heather

    2011-08-01

    A link between affect and action has been supported by the discovery that threat information is prioritized through an action-centred pathway--the dorsal visual stream. Magnocellular afferents, which originate from the retina and project to dorsal stream structures, are suppressed by exposure to diffuse red light, which diminishes humans' perception of threat-based images. In order to explore the role of colour in the relationship between affect and action, participants donned different pairs of coloured glasses (red, yellow, green, blue and clear) and completed Positive and Negative Affect Scale questionnaires as well as a series of target-directed aiming movements. Analyses of affect scores revealed a significant main effect for affect valence and a significant interaction between colour and valence: perceived positive affect was significantly smaller for the red condition. Kinematic analyses of variable error in the primary movement direction and Pearson correlation analyses between the displacements travelled prior to and following peak velocity indicated reduced accuracy and application of online control processes while wearing red glasses. Variable error of aiming was also positively and significantly correlated with negative affect scores under the red condition. These results suggest that only red light modulates the affect-action link by suppressing magnocellular activity, which disrupts visual processing for movement control. Furthermore, previous research examining the effect of the colour red on psychomotor tasks and perceptual acceleration of threat-based imagery suggest that stimulus-driven motor performance tasks requiring online control may be particularly susceptible to this effect.

  15. The roles of vocal and visual interactions in social learning in zebra finches: A video playback experiment.

    PubMed

    Guillette, Lauren M; Healy, Susan D

    2017-06-01

    The transmission of information from an experienced demonstrator to a naïve observer often depends on characteristics of the demonstrator, such as familiarity, success or dominance status. Whether or not the demonstrator pays attention to and/or interacts with the observer may also affect social information acquisition or use by the observer. Here we used a video-demonstrator paradigm first to test whether video demonstrators have the same effect as using live demonstrators in zebra finches, and second, to test the importance of visual and vocal interactions between the demonstrator and observer on social information use by the observer. We found that female zebra finches copied novel food choices of male demonstrators they saw via live-streaming video while they did not consistently copy from the demonstrators when they were seen in playbacks of the same videos. Although naive observers copied in the absence of vocalizations by the demonstrator, as they copied from playback of videos with the sound off, females did not copy where there was a mis-match between the visual information provided by the video and vocal information from a live male that was out of sight. Taken together these results suggest that video demonstration is a useful methodology for testing social information transfer, at least in a foraging context, but more importantly, that social information use varies according to the vocal interactions, or lack thereof, between the observer and the demonstrator. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  16. Opportunity for information search and the effect of false heart rate feedback.

    PubMed

    Barefoot, John C; Straub, Ronald B

    2005-01-01

    The role of information search in the attribution of physiological states was investigated by manipulating the subject's opportunity for information search following the presentation of false information about his heart-rate reactions to photographs of female nudes. Consistent with the self-persuasion hypothesis proposed by Valins, the rated attractiveness of the slides was not affected by the false heart-rate feedback for those subjects who were prevented from visually searching the slides. Those subjects who had ample opportunity to view the slides rated those slides accompanied by false information of a heart-rate change as more attractive than those slides which were not paired with a change in heart rate.

  17. People-oriented Information Visualization Design

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyong; Zhang, Bolun

    2018-04-01

    In the rapidly developing 21st century, continuing advances in science and technology have brought human society into the era of information and big data, and lifestyles and aesthetic systems have changed accordingly, so the emerging field of information visualization is increasingly popular. Information visualization design is the process of visualizing complex information and data so that viewers can absorb information quickly and save time. As information visualization has developed, information design has attracted growing attention, and emotional, people-oriented design has become an indispensable part of it. This paper probes information visualization design through an emotional analysis of information design, grounded in the social context of people-oriented experience and approached from the perspective of art and design. It explores information visualization design through the three levels of emotional design: the instinct (visceral) level, the behavior level, and the reflective level.

  18. Taking Word Clouds Apart: An Empirical Investigation of the Design Space for Keyword Summaries.

    PubMed

    Felix, Cristian; Franconeri, Steven; Bertini, Enrico

    2018-01-01

    In this paper we present a set of four user studies aimed at exploring the visual design space of what we call keyword summaries: lists of words with associated quantitative values used to help people derive an intuition of what information a given document collection (or part of it) may contain. We seek to systematically study how different visual representations may affect people's performance in extracting information out of keyword summaries. To this purpose, we first create a design space of possible visual representations and compare the possible solutions in this design space through a variety of representative tasks and performance metrics. Other researchers have, in the past, studied some aspects of effectiveness with word clouds; however, the existing literature is somewhat scattered and does not seem to address the problem in a sufficiently systematic and holistic manner. The results of our studies showed a strong dependency on the tasks users are performing. In this paper we present details of our methodology and results, as well as guidelines on how to design effective keyword summaries based on our findings.
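    A minimal sketch of a keyword summary in the sense used above — a list of words with associated quantitative values — assuming, for illustration only, that the quantitative value is a simple frequency count over the collection:

```python
import re
from collections import Counter

def keyword_summary(documents, top_n=5,
                    stopwords=frozenset({"the", "a", "an", "of", "and", "on"})):
    """Build a keyword summary: (word, count) pairs over a document collection."""
    counts = Counter()
    for doc in documents:
        for word in re.findall(r"[a-z]+", doc.lower()):
            if word not in stopwords:
                counts[word] += 1
    return counts.most_common(top_n)

docs = ["the cat sat on the mat", "the cat ate the fish"]
print(keyword_summary(docs, top_n=2))  # [('cat', 2), ...]
```

    Whether such a list is then rendered as a word cloud, a bar chart, or a plain table is exactly the design-space question the studies above investigate.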

  19. First- and second-order contrast sensitivity functions reveal disrupted visual processing following mild traumatic brain injury.

    PubMed

    Spiegel, Daniel P; Reynaud, Alexandre; Ruiz, Tatiana; Laguë-Beauvais, Maude; Hess, Robert; Farivar, Reza

    2016-05-01

    Vision is disrupted by traumatic brain injury (TBI), with vision-related complaints being amongst the most common in this population. Based on the neural responses of early visual cortical areas, injury to the visual cortex would be predicted to affect both first-order and second-order contrast sensitivity functions (CSFs): the height and/or the cut-off of the CSF are expected to be affected by TBI. Previous studies have reported disruptions only in second-order contrast sensitivity, but using a narrow range of parameters and divergent methodologies; no study has characterized the effect of TBI on the full CSF for both first- and second-order stimuli. Such information is needed to properly understand the effect of TBI on contrast perception, which underlies all visual processing. Using a unified framework based on the quick contrast sensitivity function, we measured full CSFs for static and dynamic first- and second-order stimuli. Our results provide a unique dataset showing alterations in sensitivity for both first- and second-order visual stimuli. In particular, we show that TBI patients have increased sensitivity for first-order motion stimuli and decreased sensitivity to orientation-defined and contrast-defined second-order stimuli. In addition, our data suggest that TBI patients' sensitivity for both first-order stimuli and second-order contrast-defined stimuli is shifted towards higher spatial frequencies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
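    For orientation (our gloss, not the paper's code): quick-CSF methods typically model sensitivity as a truncated log-parabola over spatial frequency. One common parameterization is sketched below; the exact functional form is reproduced from memory and should be treated as an assumption, but the key feature — a peak at f_max with a plateaued low-frequency limb — is standard:

```python
import math

def log_parabola_csf(f, gamma_max, f_max, beta, delta):
    """Truncated log-parabola CSF (qCSF-style functional form; sketch only).

    f          spatial frequency (cycles/deg)
    gamma_max  peak sensitivity
    f_max      spatial frequency of peak sensitivity
    beta       bandwidth parameter (octaves)
    delta      low-frequency truncation (log10 units below the peak)
    """
    kappa = math.log10(2.0)
    beta_prime = math.log10(2.0 * beta)
    log_s = math.log10(gamma_max) - kappa * (
        (math.log10(f) - math.log10(f_max)) / (beta_prime / 2.0)
    ) ** 2
    # Truncate the low-frequency limb so sensitivity plateaus below f_max
    if f < f_max:
        log_s = max(log_s, math.log10(gamma_max) - delta)
    return 10.0 ** log_s

# By construction, sensitivity equals gamma_max at f = f_max
print(round(log_parabola_csf(3.0, gamma_max=100, f_max=3.0, beta=3.0, delta=0.5), 1))  # 100.0
```

    Fitting the four parameters to trial-by-trial responses (as the quick CSF procedure does adaptively) is what yields the "full CSF" measurements the abstract refers to.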

  20. Technical parameters for specifying imagery requirements

    NASA Technical Reports Server (NTRS)

    Coan, Paul P.; Dunnette, Sheri J.

    1994-01-01

    Providing visual information acquired from remote events to various operators, researchers, and practitioners has become progressively more important as the application of special skills in alien or hazardous situations increases. To provide an understanding of the technical parameters required to specify imagery, we have identified, defined, and discussed seven salient characteristics of images: spatial resolution, linearity, luminance resolution, spectral discrimination, temporal discrimination, edge definition, and signal-to-noise ratio. We then describe a generalized imaging system and identify how various parts of the system affect the image data. To emphasize the different applications of imagery, we have contrasted the common television system with the significant parameters of a televisual imaging system for technical applications. Finally, we have established a method by which the required visual information can be specified by describing certain technical parameters which are directly related to the information content of the imagery. This method requires the user to complete a form listing all pertinent data requirements for the imagery.
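    One of the seven characteristics listed above, signal-to-noise ratio, has a standard definition that is easy to make concrete (a generic sketch, not the authors' specification form):

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels, from power measurements."""
    return 10.0 * math.log10(signal_power / noise_power)

# A signal with 100x the noise power has an SNR of 20 dB
print(snr_db(100.0, 1.0))  # 20.0
```

    Specifying a minimum acceptable SNR (alongside resolution and the other parameters) is one concrete way an imagery requirement of the kind described could be written down.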

  1. The Perception of Cooperativeness Without Any Visual or Auditory Communication.

    PubMed

    Chang, Dong-Seon; Burger, Franziska; Bülthoff, Heinrich H; de la Rosa, Stephan

    2015-12-01

    Perceiving social information such as the cooperativeness of another person is an important part of human interaction. But can people perceive the cooperativeness of others even without any visual or auditory information? In a novel experimental setup, we connected two people with a rope and made them accomplish a point-collecting task together while they could not see or hear each other. We observed a consistently emerging turn-taking behavior in the interactions and installed a confederate in a subsequent experiment who either minimized or maximized this behavior. Participants experienced this only through the haptic force-feedback of the rope and made evaluations about the confederate after each interaction. We found that perception of cooperativeness was significantly affected only by the manipulation of this turn-taking behavior. Gender- and size-related judgments also significantly differed. Our results suggest that people can perceive social information such as the cooperativeness of other people even in situations where possibilities for communication are minimal.

  2. The Perception of Cooperativeness Without Any Visual or Auditory Communication

    PubMed Central

    Chang, Dong-Seon; Burger, Franziska; de la Rosa, Stephan

    2015-01-01

    Perceiving social information such as the cooperativeness of another person is an important part of human interaction. But can people perceive the cooperativeness of others even without any visual or auditory information? In a novel experimental setup, we connected two people with a rope and made them accomplish a point-collecting task together while they could not see or hear each other. We observed a consistently emerging turn-taking behavior in the interactions and installed a confederate in a subsequent experiment who either minimized or maximized this behavior. Participants experienced this only through the haptic force-feedback of the rope and made evaluations about the confederate after each interaction. We found that perception of cooperativeness was significantly affected only by the manipulation of this turn-taking behavior. Gender- and size-related judgments also significantly differed. Our results suggest that people can perceive social information such as the cooperativeness of other people even in situations where possibilities for communication are minimal. PMID:27551362

  3. Handbook for Parents of Preschool Blind Children (Manuel A L'Intention Des Parents D'Enfants Aveugles D'Age Prescolaire).

    ERIC Educational Resources Information Center

    Davidson, Iain; And Others

    Available in English and French and intended for parents and professional workers such as nursery school teachers and day care workers, the handbook provides information on the way visual impairment affects the child's development and instructions for guidance in the early years. Sections cover the following topics: reactions to blindness by the…

  4. The Bicycle Illusion: Sidewalk Science Informs the Integration of Motion and Shape Perception

    ERIC Educational Resources Information Center

    Masson, Michael E. J.; Dodd, Michael D.; Enns, James T.

    2009-01-01

    The authors describe a new visual illusion first discovered in a natural setting. A cyclist riding beside a pair of sagging chains that connect fence posts appears to move up and down with the chains. In this illusion, a static shape (the chains) affects the perception of a moving shape (the bicycle), and this influence involves assimilation…

  5. Introducing 3D Visualization of Statistical Data in Education Using the i-Use Platform: Examples from Greece

    ERIC Educational Resources Information Center

    Rizou, Ourania; Klonari, Aikaterini

    2016-01-01

    In the 21st century, the age of information and technology, there is an increasing importance to statistical literacy for everyday life. In addition, education innovation and globalisation in the past decade in Europe has resulted in a new perceived complexity of reality that affected the curriculum and statistics education, with a shift from…

  6. Assessment of Attentional Workload while Driving by Eye-fixation-related Potentials

    NASA Astrophysics Data System (ADS)

    Takeda, Yuji; Yoshitsugu, Noritoshi; Itoh, Kazuya; Kanamori, Nobuhiro

    How do drivers cope with the attentional workload of in-vehicle information technology? In the present study, we propose a new psychophysiological measure for assessing drivers' attention: the eye-fixation-related potential (EFRP). The EFRP is an event-related brain potential that can be measured during eye movements and reflects how closely observers examine visual information at the fixated position. In the experiment, the effects of verbal and spatial working memory load during simulated driving were examined by measuring the number of saccadic eye movements and the EFRP as indices of drivers' attention. The results showed that the spatial working memory load affected both the number of saccadic eye movements and the amplitude of the P100 component of the EFRP, whereas the verbal working memory load affected only the number of saccadic eye movements. This implies that drivers can time-share between driving and a verbal working memory task, but that a decline in the accuracy of visual processing during driving is inescapable under spatial working memory load. The present study suggests that the EFRP can provide an index of drivers' attention beyond saccadic eye movements alone.

  7. The contribution of visual and vestibular information to spatial orientation by 6- to 14-month-old infants and adults.

    PubMed

    Bremner, J Gavin; Hatton, Fran; Foster, Kirsty A; Mason, Uschi

    2011-09-01

    Although there is much research on infants' ability to orient in space, little is known regarding the information they use to do so. This research uses a rotating room to evaluate the relative contribution of visual and vestibular information to location of a target following bodily rotation. Adults responded precisely on the basis of visual flow information. Seven-month-olds responded mostly on the basis of visual flow, whereas 9-month-olds responded mostly on the basis of vestibular information, and 12-month-olds responded mostly on the basis of visual information. Unlike adults, infants of all ages showed partial influence by both modalities. Additionally, 7-month-olds were capable of using vestibular information when there was no visual information for movement or stability, and 9-month-olds still relied on vestibular information when visual information was enhanced. These results are discussed in the context of neuroscientific evidence regarding visual-vestibular interaction, and in relation to possible changes in reliance on visual and vestibular information following acquisition of locomotion. © 2011 Blackwell Publishing Ltd.

  8. Increased discriminability of authenticity from multimodal laughter is driven by auditory information.

    PubMed

    Lavan, Nadine; McGettigan, Carolyn

    2017-10-01

    We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, visual-only) and multimodal contexts (audiovisual). In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signalling through voices and faces, in the context of spontaneous and volitional behaviour, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.

  9. Hearing Feelings: Affective Categorization of Music and Speech in Alexithymia, an ERP Study

    PubMed Central

    Goerlich, Katharina Sophia; Witteman, Jurriaan; Aleman, André; Martens, Sander

    2011-01-01

    Background Alexithymia, a condition characterized by deficits in interpreting and regulating feelings, is a risk factor for a variety of psychiatric conditions. Little is known about how alexithymia influences the processing of emotions in music and speech. Appreciation of such emotional qualities in auditory material is fundamental to human experience and has profound consequences for functioning in daily life. We investigated the neural signature of such emotional processing in alexithymia by means of event-related potentials. Methodology Affective music and speech prosody were presented as targets following affectively congruent or incongruent visual word primes in two conditions. In two further conditions, affective music and speech prosody served as primes and visually presented words with affective connotations were presented as targets. Thirty-two participants (16 male) judged the affective valence of the targets. We tested the influence of alexithymia on cross-modal affective priming and on N400 amplitudes, indicative of individual sensitivity to an affective mismatch between words, prosody, and music. Our results indicate that the affective priming effect for prosody targets tended to be reduced with increasing scores on alexithymia, while no behavioral differences were observed for music and word targets. At the electrophysiological level, alexithymia was associated with significantly smaller N400 amplitudes in response to affectively incongruent music and speech targets, but not to incongruent word targets. Conclusions Our results suggest a reduced sensitivity for the emotional qualities of speech and music in alexithymia during affective categorization. This deficit becomes evident primarily in situations in which a verbalization of emotional information is required. PMID:21573026

  10. Cued Speech for Enhancing Speech Perception and First Language Development of Children With Cochlear Implants

    PubMed Central

    Leybaert, Jacqueline; LaSasso, Carol J.

    2010-01-01

    Nearly 300 million people worldwide have moderate to profound hearing loss. Hearing impairment, if not adequately managed, has strong socioeconomic and affective impact on individuals. Cochlear implants have become the most effective vehicle for helping profoundly deaf children and adults to understand spoken language, to be sensitive to environmental sounds, and, to some extent, to listen to music. The auditory information delivered by the cochlear implant remains non-optimal for speech perception because it delivers a spectrally degraded signal and lacks some of the fine temporal acoustic structure. In this article, we discuss research revealing the multimodal nature of speech perception in normally-hearing individuals, with important inter-subject variability in the weighting of auditory or visual information. We also discuss how audio-visual training, via Cued Speech, can improve speech perception in cochlear implantees, particularly in noisy contexts. Cued Speech is a system that makes use of visual information from speechreading combined with hand shapes positioned in different places around the face in order to deliver completely unambiguous information about the syllables and the phonemes of spoken language. We support our view that exposure to Cued Speech before or after the implantation could be important in the aural rehabilitation process of cochlear implantees. We describe five lines of research that are converging to support the view that Cued Speech can enhance speech perception in individuals with cochlear implants. PMID:20724357

  11. Fuzzy-based simulation of real color blindness.

    PubMed

    Lee, Jinmi; dos Santos, Wellington P

    2010-01-01

    About 8% of men are affected by color blindness. That population is at a disadvantage because they cannot perceive a substantial amount of visual information. This work presents two computational tools developed to assist color blind people. The first tests for color blindness and assesses its severity. The second is based on fuzzy logic and implements a proposed method to simulate real red and green color blindness, in order to generate synthetic cases of color vision disturbance in statistically significant numbers. Our purpose is to develop correction tools and to obtain a deeper understanding of the accessibility problems faced by people with chromatic visual impairment.
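
    The paper's fuzzy-logic simulation is not reproduced here, but the conventional crisp baseline it builds on is to project colors in LMS cone space, replacing the response of the missing cone class with a combination of the remaining two. A sketch for protanopia (the matrices follow commonly used Viénot-style coefficients and should be treated as approximate, not as the paper's model):

```python
import numpy as np

# Linear-RGB -> LMS cone responses (Hunt-Pointer-Estevez-based matrix as
# used in common daltonization code; coefficients are approximate).
RGB2LMS = np.array([[0.31399022, 0.63951294, 0.04649755],
                    [0.15537241, 0.75789446, 0.08670142],
                    [0.01775239, 0.10944209, 0.87256922]])
LMS2RGB = np.linalg.inv(RGB2LMS)

# Protanopia: the L cone is absent, so L is reconstructed from M and S.
PROTAN = np.array([[0.0, 1.05118294, -0.05116099],
                   [0.0, 1.0,         0.0       ],
                   [0.0, 0.0,         1.0       ]])

def simulate_protanopia(rgb):
    """Simulate protanopia for linear-RGB colors in [0, 1] (shape (..., 3))."""
    lms = rgb @ RGB2LMS.T
    return np.clip(lms @ PROTAN.T @ LMS2RGB.T, 0.0, 1.0)

# Pure red collapses toward a dark yellow for a protanope, while
# neutral grays are (approximately) preserved.
red = simulate_protanopia(np.array([1.0, 0.0, 0.0]))
gray = simulate_protanopia(np.array([0.5, 0.5, 0.5]))
```

    Preservation of neutral grays is a standard sanity check for dichromacy simulators, since dichromats perceive achromatic colors normally.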

  12. Does the Visual Appeal of Instructional Media Affect Learners' Motivation toward Learning?

    ERIC Educational Resources Information Center

    Tomita, Kei

    2018-01-01

    While authors like Mayer (2009) suggest that designers should avoid using visuals for the purpose of attracting learners' interests, some scholars suggest that visuals could influence learners' emotions. In this study the author investigated whether the perception of the visual appeal of instructional handouts affects learners' self-reported…

  13. Culture Wires the Brain: A Cognitive Neuroscience Perspective.

    PubMed

    Park, Denise C; Huang, Chih-Mao

    2010-07-01

    There is clear evidence that sustained experiences may affect both brain structure and function. Thus, it is quite reasonable to posit that sustained exposure to a set of cultural experiences and behavioral practices will affect neural structure and function. The burgeoning field of cultural psychology has often demonstrated the subtle differences in the way individuals process information, differences that appear to be a product of cultural experiences. We review evidence that the collectivistic and individualistic biases of East Asian and Western cultures, respectively, affect neural structure and function. We conclude that there is limited evidence that cultural experiences affect brain structure and considerably more evidence that neural function is affected by culture, particularly activations in ventral visual cortex, areas associated with perceptual processing. © The Author(s) 2010.

  14. Visualizing blood vessel trees in three dimensions: clinical applications

    NASA Astrophysics Data System (ADS)

    Bullitt, Elizabeth; Aylward, Stephen

    2005-04-01

    A connected network of blood vessels surrounds and permeates almost every organ of the human body. The ability to define detailed blood vessel trees enables a variety of clinical applications. This paper discusses four such applications and some of the visualization challenges inherent to each. Guidance of endovascular surgery: 3D vessel trees offer important information unavailable by traditional x-ray projection views. How best to combine the 2- and 3D image information is unknown. Planning/guidance of tumor surgery: During tumor resection it is critical to know which blood vessels can be interrupted safely and which cannot. Providing efficient, clear information to the surgeon together with measures of uncertainty in both segmentation and registration can be a complex problem. Vessel-based registration: Vessel-based registration allows pre-and intraoperative images to be registered rapidly. The approach both provides a potential solution to a difficult clinical dilemma and offers a variety of visualization opportunities. Diagnosis/staging of disease: Almost every disease affects blood vessel morphology. The statistical analysis of vessel shape may thus prove to be an important tool in the noninvasive analysis of disease. A plethora of information is available that must be presented meaningfully to the clinician. As medical image analysis methods increase in sophistication, an increasing amount of useful information of varying types will become available to the clinician. New methods must be developed to present a potentially bewildering amount of complex data to individuals who are often accustomed to viewing only tissue slices or flat projection views.

  15. Visual control of foot placement when walking over complex terrain.

    PubMed

    Matthis, Jonathan S; Fajen, Brett R

    2014-02-01

    The aim of this study was to investigate the role of visual information in the control of walking over complex terrain with irregularly spaced obstacles. We developed an experimental paradigm to measure how far along the future path people need to see in order to maintain forward progress and avoid stepping on obstacles. Participants walked over an array of randomly distributed virtual obstacles that were projected onto the floor by an LCD projector while their movements were tracked by a full-body motion capture system. Walking behavior in a full-vision control condition was compared with behavior in a number of other visibility conditions in which obstacles did not appear until they fell within a window of visibility centered on the moving observer. Collisions with obstacles were more frequent and, for some participants, walking speed was slower when the visibility window constrained vision to less than two step lengths ahead. When window sizes were greater than two step lengths, the frequency of collisions and walking speed were weakly affected or unaffected. We conclude that visual information from at least two step lengths ahead is needed to guide foot placement when walking over complex terrain. When placed in the context of recent research on the biomechanics of walking, the findings suggest that two step lengths of visual information may be needed because it allows walkers to exploit the passive mechanical forces inherent to bipedal locomotion, thereby avoiding obstacles while maximizing energetic efficiency. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  16. Fusiform Gyrus Dysfunction is Associated with Perceptual Processing Efficiency to Emotional Faces in Adolescent Depression: A Model-Based Approach.

    PubMed

    Ho, Tiffany C; Zhang, Shunan; Sacchet, Matthew D; Weng, Helen; Connolly, Colm G; Henje Blom, Eva; Han, Laura K M; Mobayed, Nisreen O; Yang, Tony T

    2016-01-01

    While the extant literature has focused on major depressive disorder (MDD) as being characterized by abnormalities in processing affective stimuli (e.g., facial expressions), little is known regarding which specific aspects of cognition influence the evaluation of affective stimuli, and what are the underlying neural correlates. To investigate these issues, we assessed 26 adolescents diagnosed with MDD and 37 well-matched healthy controls (HCL) who completed an emotion identification task of dynamically morphing faces during functional magnetic resonance imaging (fMRI). We analyzed the behavioral data using a sequential sampling model of response time (RT) commonly used to elucidate aspects of cognition in binary perceptual decision making tasks: the Linear Ballistic Accumulator (LBA) model. Using a hierarchical Bayesian estimation method, we obtained group-level and individual-level estimates of LBA parameters on the facial emotion identification task. While the MDD and HCL groups did not differ in mean RT, accuracy, or group-level estimates of perceptual processing efficiency (i.e., drift rate parameter of the LBA), the MDD group showed significantly reduced responses in left fusiform gyrus compared to the HCL group during the facial emotion identification task. Furthermore, within the MDD group, fMRI signal in the left fusiform gyrus during affective face processing was significantly associated with greater individual-level estimates of perceptual processing efficiency. Our results therefore suggest that affective processing biases in adolescents with MDD are characterized by greater perceptual processing efficiency of affective visual information in sensory brain regions responsible for the early processing of visual information. The theoretical, methodological, and clinical implications of our results are discussed.
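
    The Linear Ballistic Accumulator is straightforward to simulate: each response alternative is an accumulator with a uniform start point in [0, A] and a trial-wise drift rate drawn from a normal distribution; the first accumulator to reach threshold b determines the choice, and a non-decision time t0 is added to the RT. A self-contained sketch (all parameter values are illustrative, not the fitted values from the study):

```python
import numpy as np

def simulate_lba(v, A=0.5, b=1.0, s=0.3, t0=0.2, n_trials=10000, seed=1):
    """Simulate choices and RTs from a Linear Ballistic Accumulator.

    v : sequence of mean drift rates, one per response alternative.
    """
    rng = np.random.default_rng(seed)
    v = np.asarray(v, dtype=float)
    starts = rng.uniform(0, A, size=(n_trials, len(v)))
    drifts = rng.normal(v, s, size=(n_trials, len(v)))
    drifts[drifts <= 0] = np.nan           # non-positive drifts never finish
    times = (b - starts) / drifts          # time for each accumulator to hit b
    valid = ~np.isnan(times).all(axis=1)   # drop trials where nothing finishes
    times = times[valid]
    choice = np.nanargmin(times, axis=1)   # fastest accumulator wins
    rt = t0 + np.nanmin(times, axis=1)
    return choice, rt

# The "correct" accumulator has the higher mean drift rate (more efficient
# perceptual processing in the terminology above).
choice, rt = simulate_lba(v=[1.0, 0.4])
accuracy = (choice == 0).mean()
mean_rt = rt.mean()
```

    With the higher mean drift on the first accumulator, simulated accuracy sits well above chance while mean RT stays in a plausible sub-second range, which is the pattern the drift-rate parameter is meant to capture.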

  18. Selective weighting of action-related feature dimensions in visual working memory.

    PubMed

    Heuer, Anna; Schubö, Anna

    2017-08-01

    Planning an action primes feature dimensions that are relevant for that particular action, increasing the impact of these dimensions on perceptual processing. Here, we investigated whether action planning also affects the short-term maintenance of visual information. In a combined memory and movement task, participants were to memorize items defined by size or color while preparing either a grasping or a pointing movement. Whereas size is a relevant feature dimension for grasping, color can be used to localize the goal object and guide a pointing movement. The results showed that memory for items defined by size was better during the preparation of a grasping movement than during the preparation of a pointing movement. Conversely, memory for color tended to be better when a pointing movement rather than a grasping movement was being planned. This pattern was not only observed when the memory task was embedded within the preparation period of the movement, but also when the movement to be performed was only indicated during the retention interval of the memory task. These findings reveal that a weighting of information in visual working memory according to action relevance can even be implemented at the representational level during maintenance, demonstrating that our actions continue to influence visual processing beyond the perceptual stage.

  19. Stimulus similarity determines the prevalence of behavioral laterality in a visual discrimination task for mice

    PubMed Central

    Treviño, Mario

    2014-01-01

    Animal choices depend on direct sensory information, but also on dynamic changes in the magnitude of reward. In visual discrimination tasks, the emergence of lateral biases in the choice record from animals is often described as a behavioral artifact, because these are highly correlated with error rates affecting psychophysical measurements. Here, we hypothesized that biased choices could constitute a robust behavioral strategy to solve discrimination tasks of graded difficulty. We trained mice to swim in a two-alternative visual discrimination task with escape from water as the reward. Their prevalence of making lateral choices increased with stimulus similarity and was present in conditions of high discriminability. While lateralization occurred at the individual level, it was absent, on average, at the population level. Biased choice sequences obeyed the generalized matching law and increased task efficiency when stimulus similarity was high. A mathematical analysis revealed that strongly biased mice used information from past rewards, but not past choices, to make their current choices. We also found that the amount of lateralized choices made during the first day of training predicted individual differences in the average learning behavior. This framework provides useful analysis tools to study individualized visual-learning trajectories in mice. PMID:25524257
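
    The generalized matching law referred to above states that log(B1/B2) = a·log(R1/R2) + log c, where B1, B2 are responses allocated to each side, R1, R2 are obtained rewards, a is sensitivity, and c is side bias; a and c fall out of a straight-line fit in log-log space. A minimal sketch on synthetic choice data (the generating parameter values are illustrative):

```python
import numpy as np

def fit_matching_law(b1, b2, r1, r2):
    """Fit log(B1/B2) = a*log(R1/R2) + log(c); return (sensitivity a, bias c)."""
    x = np.log(np.asarray(r1) / np.asarray(r2))
    y = np.log(np.asarray(b1) / np.asarray(b2))
    a, log_c = np.polyfit(x, y, 1)   # straight-line fit in log-log space
    return a, np.exp(log_c)

# Synthetic sessions generated with sensitivity a = 0.8 and bias c = 1.2.
rng = np.random.default_rng(0)
r1 = rng.uniform(5, 50, 20)          # rewards obtained on side 1
r2 = rng.uniform(5, 50, 20)          # rewards obtained on side 2
b2 = np.full(20, 100.0)              # responses on side 2
b1 = b2 * 1.2 * (r1 / r2) ** 0.8     # responses on side 1, per the law
a, c = fit_matching_law(b1, b2, r1, r2)
```

    On noiseless synthetic data the fit recovers the generating parameters exactly; real choice records show undermatching (a < 1) and scatter around the line.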

  20. Digital imaging information technology for biospeckle activity assessment relative to bacteria and parasites.

    PubMed

    Ramírez-Miquet, Evelio E; Cabrera, Humberto; Grassi, Hilda C; de J Andrades, Efrén; Otero, Isabel; Rodríguez, Dania; Darias, Juan G

    2017-08-01

    This paper reports on the biospeckle processing of biological activity using a visualization scheme based on digital imaging information technology. Activity related to bacterial growth in agar plates and to parasites affected by a drug is monitored via the speckle patterns generated by a coherent source incident on the microorganisms. We present experimental results to demonstrate the potential of this methodology for following activity over time. Digital imaging information technology is an alternative visualization approach enabling the study of speckle dynamics, which is correlated with the activity of bacteria and parasites. In this method, changes in Red-Green-Blue (RGB) color component density are taken as markers of bacterial growth and of parasite motility in the presence of a drug. The RGB data were used to generate a two-dimensional surface plot allowing an analysis of color distribution on the speckle images. The proposed visualization is compared to the outcomes of the generalized differences and the temporal difference methods. A quantification of the activity is performed using a parameterization of the temporal difference method. The adopted digital image processing technique has been found suitable for monitoring motility and morphological changes in the bacterial population over time, and for detecting and distinguishing short-term drug action on parasites.
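
    The temporal difference used above for quantification is simply the per-pixel sum of absolute frame-to-frame intensity changes across the speckle image stack: active regions (living, moving material) decorrelate between frames, static regions do not. A minimal sketch on a synthetic stack in which only one region is 'active' (frame counts and image sizes are illustrative):

```python
import numpy as np

def temporal_difference(stack):
    """Per-pixel biospeckle activity: sum of |I_(k+1) - I_k| over the stack.

    stack : array of shape (n_frames, height, width), grayscale intensities.
    """
    stack = stack.astype(float)
    return np.abs(np.diff(stack, axis=0)).sum(axis=0)

# Synthetic stack: the left half flickers frame to frame (an active medium,
# e.g. growing bacteria), the right half is a frozen static background.
rng = np.random.default_rng(0)
frames = np.tile(rng.uniform(0, 255, (32, 32)), (50, 1, 1))
frames[:, :, :16] = rng.uniform(0, 255, (50, 32, 16))   # "active" region
activity = temporal_difference(frames)
```

    The resulting activity map is bright over the active half and exactly zero over the static half, which is what makes the measure usable for segmenting growth.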

  1. Metabolic rate and body size are linked with perception of temporal information

    PubMed Central

    Healy, Kevin; McNally, Luke; Ruxton, Graeme D.; Cooper, Natalie; Jackson, Andrew L.

    2013-01-01

    Body size and metabolic rate both fundamentally constrain how species interact with their environment, and hence ultimately affect their niche. While many mechanisms leading to these constraints have been explored, their effects on the resolution at which temporal information is perceived have been largely overlooked. The visual system acts as a gateway to the dynamic environment and the relative resolution at which organisms are able to acquire and process visual information is likely to restrict their ability to interact with events around them. As both smaller size and higher metabolic rates should facilitate rapid behavioural responses, we hypothesized that these traits would favour perception of temporal change over finer timescales. Using critical flicker fusion frequency, the lowest frequency of flashing at which a flickering light source is perceived as constant, as a measure of the maximum rate of temporal information processing in the visual system, we carried out a phylogenetic comparative analysis of a wide range of vertebrates that supported this hypothesis. Our results have implications for the evolution of signalling systems and predator–prey interactions, and, combined with the strong influence that both body mass and metabolism have on a species' ecological niche, suggest that time perception may constitute an important and overlooked dimension of niche differentiation. PMID:24109147
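
    The study's actual analysis is phylogenetic, but the core hypothesis can be sketched, ignoring phylogeny, as a log-log regression of critical flicker fusion frequency on body mass and mass-specific metabolic rate. The data below are synthetic and the coefficients illustrative; this is not the paper's dataset or method:

```python
import numpy as np

# Synthetic comparative data: CFF declines with body mass and rises with
# mass-specific metabolic rate (all generating coefficients illustrative).
rng = np.random.default_rng(0)
n = 60
log_mass = rng.uniform(-3, 6, n)                      # log body mass
log_bmr = 0.75 * log_mass + rng.normal(0, 0.2, n)     # Kleiber-like scaling
log_cff = (3.0 - 0.15 * log_mass
           + 0.30 * (log_bmr - log_mass)              # mass-specific rate
           + rng.normal(0, 0.05, n))

# Ordinary least squares: log CFF ~ 1 + log mass + log mass-specific rate
X = np.column_stack([np.ones(n), log_mass, log_bmr - log_mass])
beta, *_ = np.linalg.lstsq(X, log_cff, rcond=None)
```

    The fitted signs (negative for mass, positive for mass-specific metabolic rate) mirror the hypothesis that small, metabolically fast animals perceive temporal change at finer resolution.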

  2. Visual, Motor, and Visual-Motor Integration Difficulties in Students with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Oliver, Kimberly

    2013-01-01

    Autism spectrum disorders (ASDs) affect 1 in every 88 U.S. children. ASDs have been described as neurological and developmental disorders impacting visual, motor, and visual-motor integration (VMI) abilities that affect academic achievement (CDC, 2010). Forty-five participants (22 ASD and 23 Typically Developing [TD]) 8 to 14 years old completed…

  3. Sleep deprivation affects sensorimotor coupling in postural control of young adults.

    PubMed

    Aguiar, Stefane A; Barela, José A

    2014-06-27

    Although impairments in postural control have been reported due to sleep deprivation, the mechanisms underlying such performance decrements still need to be uncovered. The purpose of this study was to investigate the effects of sleep deprivation on the relationship between visual information and body sway in young adults' postural control. Thirty adults who remained awake during one night and 30 adults who slept normally the night before the experiment participated in this study. The moving room paradigm was utilized, manipulating visual information through the movement of a room while the floor remained motionless. Subjects stood upright inside the moving room during four 60-s trials. In the first trial the room was kept stationary, and in the following trials the room moved with a frequency of 0.2 Hz, a peak velocity of 0.6 cm/s and a 0.9 cm peak-to-peak amplitude. Body sway and room displacement were measured with infrared markers. Results showed larger and faster body sway in sleep-deprived subjects with and without visual manipulation. The magnitude with which the visual stimulus influenced body sway and its temporal relationship were unaltered in sleep-deprived individuals, but they became less coherent and more variable as subjects had to maintain upright stance during trials. These results indicate that after sleep deprivation adults become less stable and less accurate in relating visual information to motor action, and this effect is observed after only a brief period performing postural tasks. The low cognitive load employed in this task suggests that attentional difficulties are not the only factor leading to the sensorimotor coupling impairments observed following sleep deprivation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
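
    Coupling between room motion and sway in the moving-room paradigm is commonly summarized by the gain and phase of the sway response at the stimulus frequency (0.2 Hz here), taken from the Fourier components of the two signals. A minimal sketch on synthetic signals (the sway amplitude, lag, and noise level are illustrative assumptions; the 60-s trial length and 0.2 Hz drive follow the abstract):

```python
import numpy as np

def gain_phase(stimulus, response, fs, freq):
    """Gain and phase (rad) of response relative to stimulus at one frequency."""
    n = len(stimulus)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    k = np.argmin(np.abs(freqs - freq))   # FFT bin nearest the drive frequency
    s = np.fft.rfft(stimulus)[k]
    r = np.fft.rfft(response)[k]
    return np.abs(r) / np.abs(s), np.angle(r / s)

fs, f = 100, 0.2
t = np.arange(60 * fs) / fs               # one 60-s trial
room = 0.45 * np.sin(2 * np.pi * f * t)   # room motion, 0.9 cm peak-to-peak
rng = np.random.default_rng(0)
sway = 0.3 * np.sin(2 * np.pi * f * t - 0.5) + rng.normal(0, 0.05, len(t))
g, phi = gain_phase(room, sway, fs, f)
```

    With 12 full stimulus cycles in the trial, the drive frequency falls exactly on an FFT bin, so the noisy sway signal still yields clean gain and phase (lag) estimates.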

  4. Driver Distraction Using Visual-Based Sensors and Algorithms.

    PubMed

    Fernández, Alberto; Usamentiaga, Rubén; Carús, Juan Luis; Casado, Rubén

    2016-10-28

    Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive to both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems use only a single visual cue and may therefore be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by face monitoring systems, but they should be complemented with additional visual cues (e.g., hand or body information) or even with distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should run on an embedded device or system inside the car. This is not a trivial task, and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. This paper presents a review of the role of computer vision technology in the development of monitoring systems to detect distraction, including the key points for the development and implementation of the sensors involved; key open challenges and directions for future work are also addressed.

  6. Assessing older adults' perceptions of sensor data and designing visual displays for ambient environments. An exploratory study.

    PubMed

    Reeder, B; Chung, J; Le, T; Thompson, H; Demiris, G

    2014-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Using Data from Ambient Assisted Living and Smart Homes in Electronic Health Records". Our objectives were to: 1) characterize older adult participants' perceived usefulness of in-home sensor data and 2) develop novel visual displays for sensor data from Ambient Assisted Living environments that can become part of electronic health records. Semi-structured interviews were conducted with community-dwelling older adult participants during three- and six-month visits. We engaged participants in two design iterations by soliciting feedback about display types and visual displays of simulated data related to a fall scenario. Interview transcripts were analyzed to identify themes related to perceived usefulness of sensor data. Thematic analysis identified three themes: perceived usefulness of sensor data for managing health; factors that affect perceived usefulness of sensor data; and perceived usefulness of visual displays. Visual displays were cited as potentially useful for family members and health care providers. Three novel visual displays were created based on interview results, design guidelines derived from prior AAL research, and principles of graphic design theory. Participants identified potential uses of personal activity data for monitoring health status and capturing early signs of illness. One area for future research is to determine how visual displays of AAL data might be utilized to connect family members and health care providers through shared understanding of activity levels, versus a more simplified view of self-management. Connecting informal and formal caregiving networks may facilitate better communication between older adults, family members and health care providers for shared decision-making.

  7. Eccentricity effects in vision and attention.

    PubMed

    Staugaard, Camilla Funch; Petersen, Anders; Vangkilde, Signe

    2016-11-01

    Stimulus eccentricity affects visual processing in multiple ways. Performance on a visual task is often better when target stimuli are presented near or at the fovea compared to the retinal periphery. For instance, reaction times and error rates are often reported to increase with increasing eccentricity. Such findings have been interpreted as purely visual, reflecting neurophysiological differences in central and peripheral vision, as well as attentional, reflecting a central bias in the allocation of attentional resources. Other findings indicate that in some cases, information from the periphery is preferentially processed. Specifically, it has been suggested that visual processing speed increases with increasing stimulus eccentricity, and that this positive correlation is reduced, but not eliminated, when the amount of cortex activated by a stimulus is kept constant by magnifying peripheral stimuli (Carrasco et al., 2003). In this study, we investigated effects of eccentricity on visual attentional capacity with and without magnification, using computational modeling based on Bundesen's (1990) theory of visual attention. Our results suggest a general decrease in attentional capacity with increasing stimulus eccentricity, irrespective of magnification. We discuss these results in relation to the physiology of the visual system, the use of different paradigms for investigating visual perception across the visual field, and the use of different stimulus materials (e.g. Gabor patches vs. letters). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Spatial attention enhances the selective integration of activity from area MT.

    PubMed

    Masse, Nicolas Y; Herrington, Todd M; Cook, Erik P

    2012-09-01

    Distinguishing which of the many proposed neural mechanisms of spatial attention actually underlies behavioral improvements in visually guided tasks has been difficult. One attractive hypothesis is that attention allows downstream neural circuits to selectively integrate responses from the most informative sensory neurons. This would allow behavioral performance to be based on the highest-quality signals available in visual cortex. We examined this hypothesis by asking how spatial attention affects both the stimulus sensitivity of middle temporal (MT) neurons and their corresponding correlation with behavior. Analyzing a data set pooled from two experiments involving four monkeys, we found that spatial attention did not appreciably affect either the stimulus sensitivity of the neurons or the correlation between their activity and behavior. However, for those sessions in which there was a robust behavioral effect of attention, focusing attention inside the neuron's receptive field significantly increased the correlation between these two metrics, an indication of selective integration. These results suggest that, similar to mechanisms proposed for the neural basis of perceptual learning, the behavioral benefits of focusing spatial attention are attributable to selective integration of neural activity from visual cortical areas by their downstream targets.

  9. Spatial gradient for unique-feature detection in patients with unilateral neglect: evidence from auditory and visual search.

    PubMed

    Eramudugolla, Ranmalee; Mattingley, Jason B

    2008-01-01

    Patients with unilateral spatial neglect following right hemisphere damage are impaired in detecting contralesional targets in both visual and haptic search tasks, and often show a graded improvement in detection performance for more ipsilesional spatial locations. In audition, multiple simultaneous sounds are most effectively perceived if they are distributed along the frequency dimension. Thus, attention to spectro-temporal features alone can allow detection of a target sound amongst multiple simultaneous distracter sounds, regardless of whether these sounds are spatially separated. Spatial bias in attention associated with neglect should not affect auditory search based on spectro-temporal features of a sound target. We report that a right brain damaged patient with neglect demonstrated a significant gradient favouring the ipsilesional side on a visual search task as well as an auditory search task in which the target was a frequency modulated tone amongst steady distractor tones. No such asymmetry was apparent in the auditory search performance of a control patient with a right hemisphere lesion but no neglect. The results suggest that the spatial bias in attention exhibited by neglect patients affects stimulus processing even when spatial information is irrelevant to the task.

  10. Effects of lifetime occupational pesticide exposure on postural control among farmworkers and non-farmworkers

    PubMed Central

    Sunwook, Kim; Nussbaum, Maury A.; Quandt, Sara A.; Laurienti, Paul J.; Arcury, Thomas A.

    2015-01-01

    Objective: To assess potential chronic effects of pesticide exposure on postural control by examining the postural balance of farmworkers and non-farmworkers with diverse self-reported lifetime exposures. Methods: Balance was assessed during quiet upright stance under four experimental conditions (2 visual × 2 cognitive difficulty). Results: Significant differences in baseline balance performance (eyes open without cognitive task) between occupational groups were apparent in postural sway complexity. When adding a cognitive task to the eyes-open condition, the influence of lifetime exposure on complexity ratios appeared different between occupational groups. Removing visual information revealed a negative association of lifetime exposure with complexity ratios. Conclusions: Farmworkers and non-farmworkers may use different postural control strategies even when controlling for the level of lifetime pesticide exposure. Long-term exposure can affect somatosensory/vestibular sensory systems and the central processing of sensory information for postural control. PMID:26849257

  11. Memory for Details with Self-Referencing

    PubMed Central

    Serbun, Sarah J.; Shih, Joanne Y.; Gutchess, Angela H.

    2011-01-01

    Self-referencing benefits item memory, but little is known about the ways in which referencing the self affects memory for details. Experiment 1 assessed whether the effects of self-referencing operate only at the item, or general, level or also enhance memory for specific visual details of objects. Participants incidentally encoded objects by making judgments in reference to the self, a close other (one’s mother), or a familiar other (Bill Clinton). Results indicate that referencing the self or a close other enhances both specific and general memory. Experiments 2 and 3 assessed verbal memory for source in a task that relied on distinguishing between different mental operations (internal sources). Results indicate that self-referencing disproportionately enhances source memory, relative to conditions referencing other people, semantic, or perceptual information. We conclude that self-referencing not only enhances specific memory for both visual and verbal information, but can disproportionately improve memory for specific internal source details as well. PMID:22092106

  12. Maintaining the ties that bind: the role of an intermediate visual memory store in the persistence of awareness.

    PubMed

    Ferber, Susanne; Emrich, Stephen M

    2007-03-01

    Segregation and feature binding are essential to the perception and awareness of objects in a visual scene. When a fragmented line-drawing of an object moves relative to a background of randomly oriented lines, the previously hidden object is segregated from the background and consequently enters awareness. Interestingly, in such shape-from-motion displays, the percept of the object persists briefly when the motion stops, suggesting that the segregated and bound representation of the object is maintained in awareness. Here, we tested whether this persistence effect is mediated by capacity-limited working-memory processes, or by the amount of object-related information available. The experiments demonstrate that persistence is affected mainly by the proportion of object information available and is independent of working-memory limits. We suggest that this persistence effect can be seen as evidence for an intermediate, form-based memory store mediating between sensory and working memory.

  13. Memory for details with self-referencing.

    PubMed

    Serbun, Sarah J; Shih, Joanne Y; Gutchess, Angela H

    2011-11-01

    Self-referencing benefits item memory, but little is known about the ways in which referencing the self affects memory for details. Experiment 1 assessed whether the effects of self-referencing operate only at the item, or general, level or whether they also enhance memory for specific visual details of objects. Participants incidentally encoded objects by making judgements in reference to the self, a close other (one's mother), or a familiar other (Bill Clinton). Results indicate that referencing the self or a close other enhances both specific and general memory. Experiments 2 and 3 assessed verbal memory for source in a task that relied on distinguishing between different mental operations (internal sources). The results indicate that self-referencing disproportionately enhances source memory, relative to conditions referencing other people, semantic, or perceptual information. We conclude that self-referencing not only enhances specific memory for both visual and verbal information, but can also disproportionately improve memory for specific internal source details.

  14. Optic Flow Dominates Visual Scene Polarity in Causing Adaptive Modification of Locomotor Trajectory

    NASA Technical Reports Server (NTRS)

    Nomura, Y.; Mulavara, A. P.; Richards, J. T.; Brady, R.; Bloomberg, Jacob J.

    2005-01-01

    Locomotion and posture are influenced and controlled by vestibular, visual and somatosensory information. Optic flow and scene polarity are two characteristics of a visual scene that have been identified as being critical in how they affect perceived body orientation and self-motion. The goal of this study was to determine the role of optic flow and visual scene polarity in adaptive modification of locomotor trajectory. Two computer-generated virtual reality scenes were shown to subjects during 20 minutes of treadmill walking. One scene was a highly polarized scene while the other was composed of objects displayed in a non-polarized fashion. Both virtual scenes depicted constant rate self-motion equivalent to walking counterclockwise around the perimeter of a room. Subjects performed stepping tests blindfolded before and after scene exposure to assess adaptive changes in locomotor trajectory. Subjects showed a significant difference in heading direction between pre- and post-adaptation stepping tests when exposed to either scene during treadmill walking. However, there was no significant difference in the subjects' heading direction between the two visual scene polarity conditions. Therefore, it was inferred from these data that optic flow has a greater role than visual polarity in influencing adaptive locomotor function.

  15. Dietary Restriction Affects Neuronal Response Property and GABA Synthesis in the Primary Visual Cortex.

    PubMed

    Yang, Jinfang; Wang, Qian; He, Fenfen; Ding, Yanxia; Sun, Qingyan; Hua, Tianmiao; Xi, Minmin

    2016-01-01

    Previous studies have reported inconsistent effects of dietary restriction (DR) on cortical inhibition. To clarify this issue, we examined the response properties of neurons in the primary visual cortex (V1) of DR and control groups of cats using in vivo extracellular single-unit recording techniques, and assessed the synthesis of the inhibitory neurotransmitter GABA in the V1 of cats from both groups using immunohistochemical and Western blot techniques. Our results showed that the response of V1 neurons to visual stimuli was significantly modified by DR, as indicated by an enhanced selectivity for stimulus orientations and motion directions, decreased visually-evoked response, lowered spontaneous activity and increased signal-to-noise ratio in DR cats relative to control cats. Further, it was shown that, accompanied by these changes in neuronal responsiveness, GABA immunoreactivity and the expression of a key GABA-synthesizing enzyme, GAD67, in the V1 were significantly increased by DR. These results demonstrate that DR may retard brain aging by increasing intracortical inhibition and improving the function of visual cortical neurons in visual information processing. This DR-induced elevation of cortical inhibition may favor the brain in modulating energy expenditure based on food availability.

  16. Dietary Restriction Affects Neuronal Response Property and GABA Synthesis in the Primary Visual Cortex

    PubMed Central

    Sun, Qingyan; Hua, Tianmiao; Xi, Minmin

    2016-01-01

    Previous studies have reported inconsistent effects of dietary restriction (DR) on cortical inhibition. To clarify this issue, we examined the response properties of neurons in the primary visual cortex (V1) of DR and control groups of cats using in vivo extracellular single-unit recording techniques, and assessed the synthesis of the inhibitory neurotransmitter GABA in the V1 of cats from both groups using immunohistochemical and Western blot techniques. Our results showed that the response of V1 neurons to visual stimuli was significantly modified by DR, as indicated by an enhanced selectivity for stimulus orientations and motion directions, decreased visually-evoked response, lowered spontaneous activity and increased signal-to-noise ratio in DR cats relative to control cats. Further, it was shown that, accompanied by these changes in neuronal responsiveness, GABA immunoreactivity and the expression of a key GABA-synthesizing enzyme, GAD67, in the V1 were significantly increased by DR. These results demonstrate that DR may retard brain aging by increasing intracortical inhibition and improving the function of visual cortical neurons in visual information processing. This DR-induced elevation of cortical inhibition may favor the brain in modulating energy expenditure based on food availability. PMID:26863207

  17. Parallel neural pathways in higher visual centers of the Drosophila brain that mediate wavelength-specific behavior

    PubMed Central

    Otsuna, Hideo; Shinomiya, Kazunori; Ito, Kei

    2014-01-01

    Compared with connections between the retinae and primary visual centers, relatively less is known in both mammals and insects about the functional segregation of neural pathways connecting primary and higher centers of the visual processing cascade. Here, using the Drosophila visual system as a model, we demonstrate two levels of parallel computation in the pathways that connect primary visual centers of the optic lobe to computational circuits embedded within deeper centers in the central brain. We show that a seemingly simple achromatic behavior, namely phototaxis, is under the control of several independent pathways, each of which is responsible for navigation towards unique wavelengths. Silencing just one pathway is enough to disturb phototaxis towards one characteristic monochromatic source, whereas phototactic behavior towards white light is not affected. The response spectrum of each demonstrable pathway is different from that of individual photoreceptors, suggesting subtractive computations. A choice assay between two colors showed that these pathways are responsible for navigation towards, but not for the detection itself of, the monochromatic light. The present study provides novel insights about how visual information is separated and processed in parallel to achieve robust control of an innate behavior. PMID:24574974

  18. Does constraining memory maintenance reduce visual search efficiency?

    PubMed

    Buttaccio, Daniel R; Lange, Nicholas D; Thomas, Rick P; Dougherty, Michael R

    2018-03-01

    We examine whether constraining memory retrieval processes affects performance in a cued recall visual search task. In the visual search task, participants are first presented with a memory prompt followed by a search array. The memory prompt provides diagnostic information regarding a critical aspect of the target (its colour). We assume that upon the presentation of the memory prompt, participants retrieve and maintain hypotheses (i.e., potential target characteristics) in working memory in order to improve their search efficiency. By constraining retrieval through the manipulation of time pressure (Experiments 1A and 1B) or a concurrent working memory task (Experiments 2A, 2B, and 2C), we directly test the involvement of working memory in visual search. We find some evidence that visual search is less efficient under conditions in which participants were likely to be maintaining fewer hypotheses in working memory (Experiments 1A, 2A, and 2C), suggesting that the retrieval of representations from long-term memory into working memory can improve visual search. However, these results should be interpreted with caution, as the data from two experiments (Experiments 1B and 2B) did not lend support for this conclusion.

  19. Dual-task interference in visual working memory: A limitation in storage capacity but not in encoding or retrieval

    PubMed Central

    Fougnie, Daryl; Marois, René

    2009-01-01

    The concurrent maintenance of two visual working memory (VWM) arrays can lead to profound interference. It is unclear, however, whether these costs arise from limitations in VWM storage capacity (Fougnie & Marois, 2006), or from interference between the storage of one visual array and encoding or retrieval of another visual array (Cowan & Morey, 2007). Here, we show that encoding a VWM array does not interfere with maintenance of another VWM array unless the two displays exceed maintenance capacity (Experiments 1 and 2). Moreover, manipulating the extent to which encoding and maintenance can interfere with one another had no discernable effect on dual-task performance (Experiment 2). Finally, maintenance of a VWM array was not affected by retrieval of information from another VWM array (Experiment 3). Taken together, these findings demonstrate that dual-task interference between two concurrent VWM tasks is due to a capacity-limited store that is independent from encoding and retrieval processes. PMID:19933566

  20. Stable statistical representations facilitate visual search.

    PubMed

    Corbett, Jennifer E; Melcher, David

    2014-10-01

    Observers represent the average properties of object ensembles even when they cannot identify individual elements. To investigate the functional role of ensemble statistics, we examined how modulating statistical stability affects visual search. We varied the mean and/or individual sizes of an array of Gabor patches while observers searched for a tilted target. In "stable" blocks, the mean and/or local sizes of the Gabors were constant over successive displays, whereas in "unstable" baseline blocks they changed from trial to trial. Although there was no relationship between the context and the spatial location of the target, observers found targets faster (as indexed by faster correct responses and fewer saccades) as the global mean size became stable over several displays. Building statistical stability also facilitated scanning the scene, as measured by larger saccadic amplitudes, faster saccadic reaction times, and shorter fixation durations. These findings suggest a central role for peripheral visual information, creating context to free resources for detailed processing of salient targets and maintaining the illusion of visual stability.

  1. The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure

    PubMed Central

    Stacey, Paula C.; Kitterick, Pádraig T.; Morris, Saffron D.; Sumner, Christian J.

    2017-01-01

    Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797

  2. Changes in visual perspective influence brain activity patterns during cognitive perspective-taking of other people's pain.

    PubMed

    Vistoli, Damien; Achim, Amélie M; Lavoie, Marie-Audrey; Jackson, Philip L

    2016-05-01

    Empathy refers to our capacity to share and understand the emotional states of others. It relies on two main processes according to existing models: an effortless affective sharing process based on neural resonance and a more effortful cognitive perspective-taking process enabling the ability to imagine and understand how others feel in specific situations. Until now, studies have focused on factors influencing the affective sharing process but little is known about those influencing the cognitive perspective-taking process and the related brain activations during vicarious pain. In the present fMRI study, we used the well-known physical pain observation task to examine whether the visual perspective can influence, in a bottom-up way, the brain regions involved in taking others' cognitive perspective to attribute their level of pain. We used a pseudo-dynamic version of this classic task which features hands in painful or neutral daily life situations while orthogonally manipulating: (1) the visual perspective with which hands were presented (first-person versus third-person conditions) and (2) the explicit instructions to imagine oneself or an unknown person in those situations (Self versus Other conditions). The cognitive perspective-taking process was investigated by comparing Other and Self conditions. When examined across both visual perspectives, this comparison showed no supra-threshold activation. Instead, the Other versus Self comparison led to a specific recruitment of the bilateral temporo-parietal junction when hands were presented according to a first-person (but not third-person) visual perspective. The present findings identify the visual perspective as a factor that modulates the neural activations related to cognitive perspective-taking during vicarious pain and show that this complex cognitive process can be influenced by perceptual stages of information processing. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Profiling Oman education data using data visualization technique

    NASA Astrophysics Data System (ADS)

    Alalawi, Sultan Juma Sultan; Shaharanee, Izwan Nizal Mohd; Jamil, Jastini Mohd

    2016-10-01

    This research presents an innovative data visualization technique for understanding and visualizing Oman's education data generated from the Ministry of Education Oman "Educational Portal". The Ministry of Education in the Sultanate of Oman maintains huge databases containing massive amounts of information. The volume of data in these databases increases yearly as many students, teachers and employees enter the database. The task of discovering and analyzing these vast volumes of data becomes increasingly difficult. Information visualization and data mining offer better ways of dealing with large volumes of information. In this paper, an innovative information visualization technique is developed to visualize the complex multidimensional educational data. Microsoft Excel dashboards, Visual Basic for Applications (VBA) and pivot tables are utilized to visualize the data. Findings from the summarization of the data are presented, and it is argued that information visualization can help related stakeholders become aware of hidden and interesting information in the large amounts of data in their educational portal.

  4. Conservation implications of anthropogenic impacts on visual communication and camouflage.

    PubMed

    Delhey, Kaspar; Peters, Anne

    2017-02-01

    Anthropogenic environmental impacts can disrupt the sensory environment of animals and affect important processes from mate choice to predator avoidance. Currently, these effects are best understood for auditory and chemosensory modalities, and recent reviews highlight their importance for conservation. We examined how anthropogenic changes to the visual environment (ambient light, transmission, and backgrounds) affect visual communication and camouflage and considered the implications of these effects for conservation. Human changes to the visual environment can increase predation risk by affecting camouflage effectiveness, lead to maladaptive patterns of mate choice, and disrupt mutualistic interactions between pollinators and plants. Implications for conservation are particularly evident for disrupted camouflage due to its tight links with survival. The conservation importance of impaired visual communication is less documented. The effects of anthropogenic changes on visual communication and camouflage may be severe when they affect critical processes such as pollination or species recognition. However, when impaired mate choice does not lead to hybridization, the conservation consequences are less clear. We suggest that the demographic effects of human impacts on visual communication and camouflage will be particularly strong when human-induced modifications to the visual environment are evolutionarily novel (i.e., very different from natural variation); affected species and populations have low levels of intraspecific (genotypic and phenotypic) variation and behavioral, sensory, or physiological plasticity; and the processes affected are directly related to survival (camouflage), species recognition, or number of offspring produced, rather than offspring quality or attractiveness. Our findings suggest that anthropogenic effects on the visual environment may be of similar conservation importance as anthropogenic effects on other sensory modalities. © 2016 Society for Conservation Biology.

  5. Cognitive aging on latent constructs for visual processing capacity: a novel structural equation modeling framework with causal assumptions based on a theory of visual attention.

    PubMed

    Nielsen, Simon; Wilms, L Inge

    2014-01-01

    We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed rather than visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method for detecting cognitive aging effects even within a narrow age range, and a useful approach to structuring the relationships between measured variables and the cognitive functional foundation they supposedly represent.

  6. Interactive 3D visualization of structural changes in the brain of a person with corticobasal syndrome

    PubMed Central

    Hänel, Claudia; Pieperhoff, Peter; Hentschel, Bernd; Amunts, Katrin; Kuhlen, Torsten

    2014-01-01

    The visualization of the progression of brain tissue loss in neurodegenerative diseases like corticobasal syndrome (CBS) not only provides information about the localization and distribution of the volume loss, but also helps to understand the course and the causes of this neurodegenerative disorder. The visualization of such medical imaging data is often based on 2D sections, because they show both internal and external structures in one image. Spatial information, however, is lost. 3D visualization of imaging data can solve this problem, but it faces the difficulty that more internally located structures may be occluded by structures near the surface. Here, we present an application with two designs for the 3D visualization of the human brain to address these challenges. In the first design, brain anatomy is displayed semi-transparently; it is supplemented by an anatomical section and cortical areas for spatial orientation, and the volumetric data of volume loss. The second design is guided by the principle of importance-driven volume rendering: a direct line-of-sight to the relevant structures in the deeper parts of the brain is provided by cutting out a frustum-like piece of brain tissue. The application was developed to run both in standard desktop environments and in immersive virtual reality environments with stereoscopic viewing to improve depth perception. We conclude that the presented application facilitates the perception of the extent of brain degeneration with respect to its localization and affected regions. PMID:24847243

  7. Task alters category representations in prefrontal but not high-level visual cortex.

    PubMed

    Bugatus, Lior; Weiner, Kevin S; Grill-Spector, Kalanit

    2017-07-15

    A central question in neuroscience is how cognitive tasks affect category representations across the human brain. Regions in lateral occipito-temporal cortex (LOTC), ventral temporal cortex (VTC), and ventro-lateral prefrontal cortex (VLPFC) constitute the extended "what" pathway, which is considered instrumental for visual category processing. However, it is unknown (1) whether distributed responses across LOTC, VTC, and VLPFC explicitly represent category, task, or some combination of both, and (2) in what way representations across these subdivisions of the extended "what" pathway may differ. To fill these gaps in knowledge, we scanned 12 participants using fMRI to test the effect of category and task on distributed responses across LOTC, VTC, and VLPFC. Results reveal that task and category modulate responses in both high-level visual regions and prefrontal cortex. However, we found fundamentally different types of representations across the brain. Distributed responses in high-level visual regions are more strongly driven by category than task, and exhibit task-independent category representations. In contrast, distributed responses in prefrontal cortex are more strongly driven by task than category, and contain task-dependent category representations. Together, these findings of differential representations across the brain support a new idea that LOTC and VTC maintain stable category representations allowing efficient processing of visual information, while prefrontal cortex contains flexible representations in which category information may emerge only when relevant to the task. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Flow, affect and visual creativity.

    PubMed

    Cseh, Genevieve M; Phillips, Louise H; Pearson, David G

    2015-01-01

    Flow (being in the zone) is purported to have positive consequences in terms of affect and performance; however, there is no empirical evidence about these links in visual creativity. Positive affect often, but inconsistently, facilitates creativity, and both may be linked to experiencing flow. This study aimed to determine relationships between these variables within visual creativity. Participants performed the creative mental synthesis task to simulate the creative process. Affect change (pre- vs. post-task) and flow were measured via questionnaires. The creativity of synthesis drawings was rated objectively and subjectively by judges. Findings empirically demonstrate that flow is related to affect improvement during visual creativity. Affect change was linked to productivity and self-rated creativity, but no other objective or subjective performance measures. Flow was unrelated to all external performance measures but was highly correlated with self-rated creativity; flow may therefore motivate perseverance towards eventual excellence rather than provide direct cognitive enhancement.

  9. Contribution of Visual Information about Ball Trajectory to Baseball Hitting Accuracy

    PubMed Central

    Higuchi, Takatoshi; Nagami, Tomoyuki; Nakata, Hiroki; Watanabe, Masakazu; Isaka, Tadao; Kanosue, Kazuyuki

    2016-01-01

    The contribution of visual information about a pitched ball to the accuracy of baseball-bat contact may vary depending on the part of trajectory seen. The purpose of the present study was to examine the relationship between hitting accuracy and the segment of the trajectory of the flying ball that can be seen by the batter. Ten college baseball field players participated in the study. The systematic error and standardized variability of ball-bat contact on the bat coordinate system and pitcher-to-catcher direction when hitting a ball launched from a pitching machine were measured with or without visual occlusion and analyzed using analysis of variance. The visual occlusion timing included occlusion from 150 milliseconds (ms) after the ball release (R+150), occlusion from 150 ms before the expected arrival of the launched ball at the home plate (A-150), and a condition with no occlusion (NO). Twelve trials in each condition were performed using two ball speeds (31.9 m·s-1 and 40.3 m·s-1). Visual occlusion did not affect the mean location of ball-bat contact in the bat’s long axis, short axis, and pitcher-to-catcher directions. Although the magnitude of standardized variability was significantly smaller in the bat’s short axis direction than in the bat’s long axis and pitcher-to-catcher directions (p < 0.001), additional visible time from the R+150 condition to the A-150 and NO conditions resulted in a further decrease in standardized variability only in the bat’s short axis direction (p < 0.05). The results suggest that there is directional specificity in the magnitude of standardized variability under different visible times. The present study also confirmed that visual information from the later part of the ball trajectory is of limited use for improving hitting accuracy, which is likely due to visuo-motor delay. PMID:26848742

  10. Contribution of Visual Information about Ball Trajectory to Baseball Hitting Accuracy.

    PubMed

    Higuchi, Takatoshi; Nagami, Tomoyuki; Nakata, Hiroki; Watanabe, Masakazu; Isaka, Tadao; Kanosue, Kazuyuki

    2016-01-01

    The contribution of visual information about a pitched ball to the accuracy of baseball-bat contact may vary depending on the part of trajectory seen. The purpose of the present study was to examine the relationship between hitting accuracy and the segment of the trajectory of the flying ball that can be seen by the batter. Ten college baseball field players participated in the study. The systematic error and standardized variability of ball-bat contact on the bat coordinate system and pitcher-to-catcher direction when hitting a ball launched from a pitching machine were measured with or without visual occlusion and analyzed using analysis of variance. The visual occlusion timing included occlusion from 150 milliseconds (ms) after the ball release (R+150), occlusion from 150 ms before the expected arrival of the launched ball at the home plate (A-150), and a condition with no occlusion (NO). Twelve trials in each condition were performed using two ball speeds (31.9 m·s-1 and 40.3 m·s-1). Visual occlusion did not affect the mean location of ball-bat contact in the bat's long axis, short axis, and pitcher-to-catcher directions. Although the magnitude of standardized variability was significantly smaller in the bat's short axis direction than in the bat's long axis and pitcher-to-catcher directions (p < 0.001), additional visible time from the R+150 condition to the A-150 and NO conditions resulted in a further decrease in standardized variability only in the bat's short axis direction (p < 0.05). The results suggest that there is directional specificity in the magnitude of standardized variability under different visible times. The present study also confirmed that visual information from the later part of the ball trajectory is of limited use for improving hitting accuracy, which is likely due to visuo-motor delay.

  11. Affective Education for Visually Impaired Children.

    ERIC Educational Resources Information Center

    Locke, Don C.; Gerler, Edwin R., Jr.

    1981-01-01

    Evaluated the effectiveness of the Human Development Program (HDP) and the Developing Understanding of Self and Others (DUSO) program used with visually impaired children. Although HDP and DUSO affected the behavior of visually impaired children, they did not have any effect on children's attitudes toward school. (RC)

  12. Qualitative assessment of a Context of Consumption Framework to inform regulation of cigarette pack design in the U.S.

    PubMed

    Lee, Joseph G L; Averett, Paige E; Blanchflower, Tiffany; Gregory, Kyle R

    2018-02-01

    Researchers and regulators need to know how changes to cigarette packages can influence population health. We sought to advance research on the role of cigarette packaging by assessing a theory-informed framework from the fields of design and consumer research. The selected Context of Consumption Framework posits cognitive, affective, and behavioral responses to visual design. To assess the Framework's potential for guiding research on the visual design of cigarette packaging in the U.S., this study seeks to understand to what extent the Context of Consumption Framework converges with how adult smokers think and talk about cigarette pack designs. Data for this qualitative study came from six telephone-based focus groups conducted in March 2017. Two groups consisted of lesbian, gay, and bisexual (LGB) participants; two of participants with less than four years of college education; one of participants of both LGB and straight identities; and one of the general population. All groups were selected for regional, gender, and racial/ethnic diversity. Participants (n=33) represented all nine U.S. Census divisions. We conducted a deductive qualitative analysis. Cigarette package designs captured the participants' attention, suggested the characteristics of the product, and reflected (or could be leveraged to convey) multiple dimensions of consumer identity. Particular to the affective responses to design, our participants shared that cigarette packaging conveyed how the pack could be used to particular ends, created an emotional response to the designs, complied with normative expectations of a cigarette, elicited interest when designs change, and prompted fascination when unique design characteristics are used. Use of the Context of Consumption Framework for cigarette product packaging design can inform regulatory research on tobacco product packaging. Researchers and regulators should consider multiple cognitive, affective, and behavioral responses to cigarette pack design.

  13. The frequency and severity of extinction after stroke affecting different vascular territories.

    PubMed

    Chechlacz, Magdalena; Rotshtein, Pia; Demeyere, Nele; Bickerton, Wai-Ling; Humphreys, Glyn W

    2014-02-01

    We examined the frequency and severity of visual versus tactile extinction based on data from a large group of sub-acute patients (n=454) with strokes affecting different vascular territories. After right hemisphere damage visual and tactile extinction were equally common. However, after left hemisphere damage tactile extinction was more common than visual. The frequency of extinction was significantly higher in patients with right compared to left hemisphere damage in both visual and tactile modalities but this held only for strokes affecting the MCA and PCA territories and not for strokes affecting other vascular territories. Furthermore, the severity of extinction did not differ as a function of either the stimulus modality (visual versus tactile), the affected hemisphere (left versus right) or the stroke territory (MCA, PCA or other vascular territories). We conclude that the frequency but not severity of extinction in both modalities relates to the side of damage (i.e. left versus right hemisphere) and the vascular territories affected by the stroke, and that left hemisphere dominance for motor control may link to the greater incidence of tactile than visual extinction after left hemisphere stroke. We discuss the implications of our findings for understanding hemispheric lateralization within visuospatial attention networks. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Short-term visual deprivation reduces interference effects of task-irrelevant facial expressions on affective prosody judgments

    PubMed Central

    Fengler, Ineke; Nava, Elena; Röder, Brigitte

    2015-01-01

    Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4 week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination may emerge as a consequence of the exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks (i.e., tactile tasks) unrelated to the later-tested abilities. The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination and in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, an effect that may persist over longer durations. PMID:25954166

  15. Visual flight control in naturalistic and artificial environments.

    PubMed

    Baird, Emily; Dacke, Marie

    2012-12-01

    Although the visual flight control strategies of flying insects have evolved to cope with the complexity of the natural world, studies investigating this behaviour have typically been performed indoors using simplified two-dimensional artificial visual stimuli. How well do the results from these studies reflect the natural behaviour of flying insects considering the radical differences in contrast, spatial composition, colour and dimensionality between these visual environments? Here, we aim to answer this question by investigating the effect of three- and two-dimensional naturalistic and artificial scenes on bumblebee flight control in an outdoor setting and compare the results with those of similar experiments performed in an indoor setting. In particular, we focus on investigating the effect of axial (front-to-back) visual motion cues on ground speed and centring behaviour. Our results suggest that, in general, ground speed control and centring behaviour in bumblebees is not affected by whether the visual scene is two- or three-dimensional, naturalistic or artificial, or whether the experiment is conducted indoors or outdoors. The only effect that we observe between naturalistic and artificial scenes on flight control is that when the visual scene is three-dimensional and the visual information on the floor is minimised, bumblebees fly further from the midline of the tunnel. The findings presented here have implications not only for understanding the mechanisms of visual flight control in bumblebees, but also for the results of past and future investigations into visually guided flight control in other insects.

  16. Before the N400: effects of lexical-semantic violations in visual cortex.

    PubMed

    Dikker, Suzanne; Pylkkanen, Liina

    2011-07-01

    There exists an increasing body of research demonstrating that language processing is aided by context-based predictions. Recent findings suggest that the brain generates estimates about the likely physical appearance of upcoming words based on syntactic predictions: words that do not physically look like the expected syntactic category show increased amplitudes in the visual M100 component, the first salient MEG response to visual stimulation. This research asks whether violations of predictions based on lexical-semantic information might similarly generate early visual effects. In a picture-noun matching task, we found early visual effects for words that did not accurately describe the preceding pictures. These results demonstrate that, just like syntactic predictions, lexical-semantic predictions can affect early visual processing around ∼100ms, suggesting that the M100 response is not exclusively tuned to recognizing visual features relevant to syntactic category analysis. Rather, the brain might generate predictions about upcoming visual input whenever it can. However, visual effects of lexical-semantic violations only occurred when a single lexical item could be predicted. We argue that this may be due to the fact that in natural language processing, there is typically no straightforward mapping between lexical-semantic fields (e.g., flowers) and visual or auditory forms (e.g., tulip, rose, magnolia). For syntactic categories, in contrast, certain form features do reliably correlate with category membership. This difference may, in part, explain why certain syntactic effects typically occur much earlier than lexical-semantic effects. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. 32 CFR 811.8 - Forms prescribed and availability of publications.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... FORCE SALES AND SERVICES RELEASE, DISSEMINATION, AND SALE OF VISUAL INFORMATION MATERIALS § 811.8 Forms prescribed and availability of publications. (a) AF Form 833, Visual Information Request, AF Form 1340, Visual Information Support Center Workload Report, DD Form 1995, Visual Information (VI) Production...

  18. Can understanding the neurobiology of body dysmorphic disorder (BDD) inform treatment?

    PubMed

    Rossell, Susan L; Harrison, Ben J; Castle, David

    2015-08-01

    We aim to provide a clinically focused review of the neurobiological literature in body dysmorphic disorder (BDD), with a focus on structural and functional neuroimaging. There has been a recent influx of studies examining the underlying neurobiology of BDD using structural and functional neuroimaging methods. Despite obvious symptom similarities with obsessive-compulsive disorder (OCD), no study to date has directly compared the two groups using neuroimaging techniques. Studies have established that there are limbic and visual cortex abnormalities in BDD, in contrast to fronto-striatal differences in OCD. Such data suggest that affect or visual training may be useful in BDD. © The Royal Australian and New Zealand College of Psychiatrists 2015.

  19. Visualizing spatiotemporal pulse propagation: first-order spatiotemporal couplings in laser pulses.

    PubMed

    Rhodes, Michelle; Guang, Zhe; Pease, Jerrold; Trebino, Rick

    2017-04-10

    Even though a general theory of first-order spatiotemporal couplings exists in the literature, it is often difficult to visualize how these distortions affect laser pulses. In particular, it is difficult to show the spatiotemporal phase of pulses in a meaningful way. Here, we propose a general solution to plotting the electric fields of pulses in three-dimensional space that intuitively shows the effects of spatiotemporal phases. The temporal phase information is color-coded using spectrograms and color response functions, and the beam is propagated to show the spatial phase evolution. Using this plotting technique, we generate two- and three-dimensional images and movies that show the effects of spatiotemporal couplings.

  20. Visualizing spatiotemporal pulse propagation: first-order spatiotemporal couplings in laser pulses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhodes, Michelle; Guang, Zhe; Pease, Jerrold

    2017-04-06

    Even though a general theory of first-order spatiotemporal couplings exists in the literature, it is often difficult to visualize how these distortions affect laser pulses. In particular, it is difficult to show the spatiotemporal phase of pulses in a meaningful way. We propose a general solution to plotting the electric fields of pulses in three-dimensional space that intuitively shows the effects of spatiotemporal phases. The temporal phase information is color-coded using spectrograms and color response functions, and the beam is propagated to show the spatial phase evolution. Using this plotting technique, we generate two- and three-dimensional images and movies that show the effects of spatiotemporal couplings.
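
    The first step of the color-coding scheme, estimating a dominant frequency per time slice from a spectrogram so that it can be mapped to a hue, can be sketched with plain numpy. All parameters below (sample rate, chirp rate, window and hop sizes) are invented for illustration; this is not the authors' code, only a sketch of the spectrogram side of the idea.

```python
import numpy as np

# Linearly chirped Gaussian pulse: instantaneous frequency rises with time,
# so a color-coded display should sweep through hues across the pulse.
fs = 1000.0                        # sample rate (arbitrary units)
t = np.arange(0, 1.0, 1 / fs)
f0, k = 50.0, 200.0                # start frequency and chirp rate (assumed)
pulse = np.exp(-((t - 0.5) ** 2) / (2 * 0.15 ** 2)) * np.cos(
    2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

# Sliding-window spectrogram: magnitudes of windowed short-time FFTs.
win, hop = 128, 32
frames = [pulse[i:i + win] * np.hanning(win)
          for i in range(0, len(pulse) - win, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1))   # (n_frames, win // 2 + 1)
freqs = np.fft.rfftfreq(win, 1 / fs)

# Dominant frequency per time slice; mapping this value to a hue via a
# color response function would give the color-coded display described.
peak_freq = freqs[np.argmax(spec, axis=1)]
early, late = peak_freq[3], peak_freq[-4]
assert late > early   # up-chirp: the hue would shift across the pulse
```

    The remaining step in the paper, propagating the beam to show spatial phase evolution, is a separate diffraction calculation not sketched here.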

  1. Information efficiency in visual communication

    NASA Astrophysics Data System (ADS)

    Alter-Gartenberg, Rachel; Rahman, Zia-ur

    1993-08-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.

  2. Information efficiency in visual communication

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1993-01-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.
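
    The central claim of these two records, that allocating quantization resolution by information content lowers the entropy of the encoded signal, can be illustrated with a toy example. The 1/f spectrum, step sizes, and linear step growth below are assumptions for illustration only; the sketch shows the entropy side of the trade-off, not the reconstruction-quality side.

```python
import numpy as np

def entropy_bits(symbols):
    """Shannon entropy (bits per symbol) of a discrete symbol stream."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# Toy 1-D signal with power concentrated at low frequencies, mimicking
# the spectral falloff of typical acquired imagery (assumed 1/f shape).
n = 4096
shape = 1.0 / (1.0 + np.arange(n // 2 + 1))
phases = rng.uniform(0, 2 * np.pi, n // 2 + 1)
signal = np.fft.irfft(shape * np.exp(1j * phases), n)
coeffs = np.fft.rfft(signal)

# Uniform quantization: the same step size at every frequency.
step = 0.05
q_uniform = np.round(coeffs.real / step).astype(int)

# Frequency-dependent quantization: coarser steps at high frequencies,
# where the signal carries little information, finer steps at low ones.
steps = step * (1.0 + np.arange(coeffs.size) / 50.0)
q_freq = np.round(coeffs.real / steps).astype(int)

h_uniform = entropy_bits(q_uniform)
h_freq = entropy_bits(q_freq)
assert h_freq < h_uniform   # fewer bits per symbol for the encoded signal
```

    The frequency-dependent scheme collapses the many near-zero high-frequency coefficients onto a single symbol, which is what drives the entropy reduction.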

  3. Effectiveness of Visual Methods in Information Procedures for Stem Cell Recipients and Donors

    PubMed Central

    Sarıtürk, Çağla; Gereklioğlu, Çiğdem; Korur, Aslı; Asma, Süheyl; Yeral, Mahmut; Solmaz, Soner; Büyükkurt, Nurhilal; Tepebaşı, Songül; Kozanoğlu, İlknur; Boğa, Can; Özdoğu, Hakan

    2017-01-01

    Objective: Obtaining informed consent from hematopoietic stem cell recipients and donors is a critical step in the transplantation process. Anxiety may affect their understanding of the provided information. However, use of audiovisual methods may facilitate understanding. In this prospective randomized study, we investigated the effectiveness of using an audiovisual method of providing information to patients and donors in combination with the standard model. Materials and Methods: A 10-min informational animation was prepared for this purpose. In total, 82 participants were randomly assigned to two groups: group 1 received the additional audiovisual information and group 2 received standard information. A 20-item questionnaire was administered to participants at the end of the informational session. Results: A reliability test and factor analysis showed that the questionnaire was reliable and valid. For all participants, the mean overall satisfaction score was 184.8±19.8 (maximum possible score of 200). However, for satisfaction with information about written informed consent, group 1 scored significantly higher than group 2 (p=0.039). Satisfaction level was not affected by age, education level, or differences between the physicians conducting the informative session. Conclusion: This study shows that using audiovisual tools may contribute to a better understanding of the informed consent procedure and potential risks of stem cell transplantation. PMID:27476890

  4. 32 CFR 811.3 - Official requests for visual information productions or materials.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... THE AIR FORCE SALES AND SERVICES RELEASE, DISSEMINATION, AND SALE OF VISUAL INFORMATION MATERIALS § 811.3 Official requests for visual information productions or materials. (a) Send official Air Force... 32 National Defense 6 2010-07-01 2010-07-01 false Official requests for visual information...

  5. 32 CFR 811.4 - Selling visual information materials.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... SERVICES RELEASE, DISSEMINATION, AND SALE OF VISUAL INFORMATION MATERIALS § 811.4 Selling visual information materials. (a) Air Force VI activities cannot sell materials. (b) HQ AFCIC/ITSM may approve the... 32 National Defense 6 2010-07-01 2010-07-01 false Selling visual information materials. 811.4...

  6. Flies and humans share a motion estimation strategy that exploits natural scene statistics

    PubMed Central

    Clark, Damon A.; Fitzgerald, James E.; Ales, Justin M.; Gohl, Daryl M.; Silies, Marion A.; Norcia, Anthony M.; Clandinin, Thomas R.

    2014-01-01

    Sighted animals extract motion information from visual scenes by processing spatiotemporal patterns of light falling on the retina. The dominant models for motion estimation exploit intensity correlations only between pairs of points in space and time. Moving natural scenes, however, contain more complex correlations. Here we show that fly and human visual systems encode the combined direction and contrast polarity of moving edges using triple correlations that enhance motion estimation in natural environments. Both species extract triple correlations with neural substrates tuned for light or dark edges, and sensitivity to specific triple correlations is retained even as light and dark edge motion signals are combined. Thus, both species separately process light and dark image contrasts to capture motion signatures that can improve estimation accuracy. This striking convergence argues that statistical structures in natural scenes have profoundly affected visual processing, driving a common computational strategy over 500 million years of evolution. PMID:24390225
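
    The distinction between pairwise and triple correlations is easy to make concrete. The following numpy sketch (a toy one-dimensional stimulus, not the study's stimuli or analysis) shows that a two-point spatiotemporal correlator responds identically to light and dark moving edges, while a three-point correlator flips sign with contrast polarity and can therefore carry the edge-polarity information described in the abstract.

```python
import numpy as np

def moving_edge(polarity, n_x=50, n_t=50, v=1):
    """Contrast pattern of an edge moving rightward at speed v.
    polarity=+1: light edge (dark ahead, bright behind); -1: dark edge."""
    x = np.arange(n_x)
    s = np.empty((n_t, n_x))
    for t in range(n_t):
        s[t] = polarity * np.where(x < v * t, 0.5, -0.5)
    return s

def pair_corr(s):
    # Two-point correlator: average of s(x, t) * s(x+1, t+1).
    return float(np.mean(s[:-1, :-1] * s[1:, 1:]))

def triple_corr(s):
    # Three-point correlator: average of s(x, t) * s(x+1, t) * s(x+1, t+1).
    return float(np.mean(s[:-1, :-1] * s[:-1, 1:] * s[1:, 1:]))

light = moving_edge(+1)
dark = moving_edge(-1)

# Pairwise correlations are blind to contrast polarity...
assert np.isclose(pair_corr(light), pair_corr(dark))
# ...but the triple correlation flips sign with edge polarity.
assert triple_corr(light) == -triple_corr(dark)
```

    Because the dark-edge stimulus is exactly the negation of the light-edge stimulus, any even-order (pairwise) statistic is identical for the two, whereas odd-order (triple) statistics distinguish them, which is the structure the fly and human visual systems are reported to exploit.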

  7. The Vividness of Happiness in Dynamic Facial Displays of Emotion

    PubMed Central

    Becker, D. Vaughn; Neel, Rebecca; Srinivasan, Narayanan; Neufeld, Samantha; Kumar, Devpriya; Fouse, Shannon

    2012-01-01

    Rapid identification of facial expressions can profoundly affect social interactions, yet most research to date has focused on static rather than dynamic expressions. In four experiments, we show that when a non-expressive face becomes expressive, happiness is detected more rapidly than anger. When the change occurs peripheral to the focus of attention, however, dynamic anger is better detected when it appears in the left visual field (LVF), whereas dynamic happiness is better detected in the right visual field (RVF), consistent with hemispheric differences in the processing of approach- and avoidance-relevant stimuli. The central advantage for happiness is nevertheless the more robust effect, persisting even when information of either high or low spatial frequency is eliminated. Indeed, a survey of past research on the visual search for emotional expressions finds better support for a happiness detection advantage, and the explanation may lie in the coevolution of the signal and the receiver. PMID:22247755

  8. Recognition intent and visual word recognition.

    PubMed

    Wang, Man-Ying; Ching, Chi-Le

    2009-03-01

    This study adopted a change detection task to investigate whether and how recognition intent affects the construction of orthographic representation in visual word recognition. Chinese readers (Experiment 1-1) and nonreaders (Experiment 1-2) detected color changes in radical components of Chinese characters. Explicit recognition demand was imposed in Experiment 2 by an additional recognition task. When the recognition was implicit, a bias favoring the radical location informative of character identity was found in Chinese readers (Experiment 1-1), but not nonreaders (Experiment 1-2). With explicit recognition demands, the effect of radical location interacted with radical function and word frequency (Experiment 2). An estimate of identification performance under implicit recognition was derived in Experiment 3. These findings reflect the joint influence of recognition intent and orthographic regularity in shaping readers' orthographic representation. The implication for the role of visual attention in word recognition was also discussed.

  9. The consequence of spatial visual processing dysfunction caused by traumatic brain injury (TBI).

    PubMed

    Padula, William V; Capo-Aponte, Jose E; Padula, William V; Singman, Eric L; Jenness, Jonathan

    2017-01-01

    A bi-modal visual processing model is supported by research to affect dysfunction following a traumatic brain injury (TBI). TBI causes dysfunction of visual processing affecting binocularity, spatial orientation, posture and balance. Research demonstrates that prescription of prisms influences the plasticity between spatial visual processing and motor-sensory systems, improving visual processing and reducing symptoms following a TBI. The rationale demonstrates that visual processing underlies the functional aspects of binocularity, balance and posture. The bi-modal visual process maintains plasticity for efficiency. Compromise causes Post Trauma Vision Syndrome (PTVS) and Visual Midline Shift Syndrome (VMSS). Rehabilitation through use of lenses, prisms and sectoral occlusion has inter-professional implications, affecting the plasticity of the bi-modal visual process and thereby improving binocularity, spatial orientation, posture and balance. Main outcomes: This review provides an opportunity to create a new perspective of the consequences of TBI on visual processing and the symptoms that are often caused by trauma. It also serves to provide a perspective of visual processing dysfunction that has potential for developing new approaches of rehabilitation. Understanding vision as a bi-modal process facilitates a new perspective of visual processing and the potentials for rehabilitation following a concussion, brain injury or other neurological events.

  10. VisGets: coordinated visualizations for web-based information exploration and discovery.

    PubMed

    Dörk, Marian; Carpendale, Sheelagh; Collins, Christopher; Williamson, Carey

    2008-01-01

    In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets: interactive query visualizations of Web-based information that operate on online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and use it to visually explore news items from online RSS feeds.

  11. Age-equivalent top-down modulation during cross-modal selective attention.

    PubMed

    Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam

    2014-12-01

    Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.

  12. Constructing and Reading Visual Information: Visual Literacy for Library and Information Science Education

    ERIC Educational Resources Information Center

    Ma, Yan

    2015-01-01

    This article examines visual literacy education and research for the library and information science profession, with the goal of educating information professionals who will be able to execute and implement the ACRL (Association of College and Research Libraries) Visual Literacy Competency Standards successfully. It is a continuing call for inclusion of visual…

  13. Keratopathy in congenital aniridia.

    PubMed

    Mayer, Kristine L; Nordlund, Michael L; Schwartz, Gary S; Holland, Edward J

    2003-04-01

    Although the most apparent clinical finding in aniridia is the absence of iris tissue, additional ocular structures are often affected. Mutations of the Pax 6 gene, which is important for eye development, have been identified in families with members affected by aniridia. Poor vision in aniridic eyes may be the result of macular hypoplasia, nystagmus, amblyopia, cataracts, glaucoma, and corneal disease, termed aniridic keratopathy. Advances in surgical techniques have improved management of some of the visually disabling manifestations of aniridia, but aniridic keratopathy remains a significant source of visual loss. We have conducted a large, retrospective study of patients with aniridia to gain information about the natural course of aniridic keratopathy. In this paper, we report the results of our study, as well as findings reported in the literature. Penetrating keratoplasty alone has not been a successful treatment for severe stromal scarring, as it does not treat the underlying epithelial causes of corneal disease. However, it has been successful in corneas that have achieved stable epithelium following limbal stem cell transplantation.

  14. Pathways for smiling, disgust and fear recognition in blindsight patients.

    PubMed

    Gerbella, Marzio; Caruana, Fausto; Rizzolatti, Giacomo

    2017-08-31

    The aim of the present review is to discuss the localization of the circuits that allow recognition of emotional facial expressions in blindsight patients. Because recognition of facial expressions is a function of different centers, and their localization is not always clear, we decided to discuss here three emotional facial expressions, smiling, disgust, and fear, whose anatomical localization in the pregenual sector of the anterior cingulate cortex (pACC), anterior insula (AI), and amygdala, respectively, is well established. We then examined the possible pathways that may convey affective visual information to these centers following lesions of V1. We concluded that the pathway leading to the pACC, AI, and amygdala involves the deep layers of the superior colliculus, the medial pulvinar, and the superior temporal sulcus region. We suggest that this visual pathway provides an image of the observed affective faces which, although deteriorated, is sufficient to determine some overt behavior, but not to provide conscious experience of the presented stimuli. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Effects of induced and naturalistic mood on the temporal allocation of attention to emotional information.

    PubMed

    Farach, Frank J; Treat, Teresa A; Jungé, Justin A

    2014-01-01

    Building upon recent findings that affective states can influence the allocation of spatial attention, we investigate how state, trait and induced mood are related to the temporal allocation of attention to emotional information. In the present study, 125 unscreened undergraduates completed a modified rapid serial visual presentation task designed to assess the time course of attention to positive and negative information, comparing a neutral baseline mood induction to either a positive or negative mood induction. Induced negative mood facilitated attentional engagement to positive information while decreasing attentional engagement to negative information. Greater naturally occurring negative state mood was associated with faster or more efficient disengagement of attention from negative information in the presence of manipulated negative mood, relative to baseline. The engagement findings were inconsistent with our mood-congruence hypotheses and may be better explained by mood repair or affective counter-regulation theories. In contrast, the disengagement findings for state mood were somewhat consistent with our mood-congruence hypotheses. The relationship between mood and attention to emotional information may differ depending on the combination of attentional mechanism (engagement versus disengagement), aspect of mood (state, trait or induced), stimulus valence (positive versus negative) and timescale (early versus late) under investigation.

  16. The absence or temporal offset of visual feedback does not influence adaptation to novel movement dynamics.

    PubMed

    McKenna, Erin; Bray, Laurence C Jayet; Zhou, Weiwei; Joiner, Wilsaan M

    2017-10-01

    Delays in transmitting and processing sensory information require correctly associating delayed feedback to issued motor commands for accurate error compensation. The flexibility of this alignment between motor signals and feedback has been demonstrated for movement recalibration to visual manipulations, but the alignment dependence for adapting movement dynamics is largely unknown. Here we examined the effect of visual feedback manipulations on force-field adaptation. Three subject groups used a manipulandum while experiencing a lag in the corresponding cursor motion (0, 75, or 150 ms). When the offset was applied at the start of the session (continuous condition), adaptation was not significantly different between groups. However, these similarities may be due to acclimation to the offset before motor adaptation. We tested additional subjects who experienced the same delays concurrent with the introduction of the perturbation (abrupt condition). In this case adaptation was statistically indistinguishable from the continuous condition, indicating that acclimation to feedback delay was not a factor. In addition, end-point errors were not significantly different across the delay or onset conditions, but end-point correction (e.g., deceleration duration) was influenced by the temporal offset. As an additional control, we tested a group of subjects who performed without visual feedback and found comparable movement adaptation results. These results suggest that visual feedback manipulation (absence or temporal misalignment) does not affect adaptation to novel dynamics, independent of both acclimation and perceptual awareness. These findings could have implications for modeling how the motor system adjusts to errors despite concurrent delays in sensory feedback information. 
NEW & NOTEWORTHY A temporal offset between movement and distorted visual feedback (e.g., visuomotor rotation) influences the subsequent motor recalibration, but the effects of this offset for altered movement dynamics are largely unknown. Here we examined the influence of 1) delayed and 2) removed visual feedback on the adaptation to novel movement dynamics. These results contribute to our understanding of the control strategies that compensate for movement errors when there is a temporal separation between motion state and sensory information. Copyright © 2017 the American Physiological Society.

  17. Development and Evaluation of a Compartmental Picture Archiving and Communications System Model for Integration and Visualization of Multidisciplinary Biomedical Data to Facilitate Student Learning in an Integrative Health Clinic

    ERIC Educational Resources Information Center

    Chow, Meyrick; Chan, Lawrence

    2010-01-01

    Information technology (IT) has the potential to improve the clinical learning environment. The extent to which IT enhances or detracts from healthcare professionals' role performance can be expected to affect both student learning and patient outcomes. This study evaluated nursing students' satisfaction with a novel compartmental Picture…

  18. Visual motion direction is represented in population-level neural response as measured by magnetoencephalography.

    PubMed

    Kaneoke, Y; Urakawa, T; Kakigi, R

    2009-05-19

    We investigated whether direction information is represented in the population-level neural response evoked by the visual motion stimulus, as measured by magnetoencephalography. Coherent motions with varied speed, varied direction, and different coherence level were presented using random dot kinematography. Peak latency of responses to motion onset was inversely related to speed in all directions, as previously reported, but no significant effect of direction on latency changes was identified. Mutual information entropy (IE) calculated using four-direction response data increased significantly (>2.14) after motion onset in 41.3% of response data and maximum IE was distributed at approximately 20 ms after peak response latency. When response waveforms showing significant differences (by multivariate discriminant analysis) in distribution of the three waveform parameters (peak amplitude, peak latency, and 75% waveform width) with stimulus directions were analyzed, 87 waveform stimulus directions (80.6%) were correctly estimated using these parameters. Correct estimation rate was unaffected by stimulus speed, but was affected by coherence level, even though both speed and coherence affected response amplitude similarly. Our results indicate that speed and direction of stimulus motion are represented in the distinct properties of a response waveform, suggesting that the human brain processes speed and direction separately, at least in part.
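
    The mutual-information analysis described above can be sketched as follows; this histogram-based estimator and the toy data are illustrative and do not reproduce the authors' exact IE computation:

```python
import numpy as np

def mutual_information(stimulus, response, bins=8):
    """Histogram estimate of the mutual information I(S;R), in bits,
    between a discrete stimulus label (e.g., one of four motion
    directions) and a continuous response feature (e.g., peak
    amplitude), via I = H(S) + H(R) - H(S,R)."""
    stimulus = np.asarray(stimulus)
    response = np.asarray(response)
    # Bin the continuous response feature
    edges = np.histogram_bin_edges(response, bins=bins)
    r_bins = np.digitize(response, edges)
    labels = {s: i for i, s in enumerate(sorted(set(stimulus.tolist())))}
    joint = np.zeros((len(labels), bins + 2))
    for s, r in zip(stimulus.tolist(), r_bins):
        joint[labels[s], r] += 1
    p = joint / joint.sum()
    def H(dist):
        dist = dist[dist > 0]
        return float(-(dist * np.log2(dist)).sum())
    return H(p.sum(axis=1)) + H(p.sum(axis=0)) - H(p.ravel())

# Four equiprobable directions with perfectly separable responses carry 2 bits
mi = mutual_information([0, 1, 2, 3] * 10, [0.0, 1.0, 2.0, 3.0] * 10)
```

    Noisy or overlapping response distributions would drive the estimate toward zero, which is why an above-threshold IE after motion onset indicates direction information in the response.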

  19. Students’ Spatial Performance: Cognitive Style and Sex Differences

    NASA Astrophysics Data System (ADS)

    Hanifah, U.; Juniati, D.; Siswono, T. Y. E.

    2018-01-01

    This study aims to describe students' spatial abilities based on cognitive style and sex differences. Spatial ability in this study comprises 5 components, namely spatial perception, spatial visualization, mental rotation, spatial relations, and spatial orientation. This is descriptive research with a qualitative approach. The subjects were 4 junior high school students: 1 male field-independent (FI), 1 male field-dependent (FD), 1 female FI, and 1 female FD. The results showed that the four subjects' spatial abilities differ on the components of spatial visualization, mental rotation, and spatial relations. The differences lay in the methods/strategies each subject used to solve each component's problems. Differences in cognitive style and sex led to different choices of strategy for solving problems: the male students imagined the figures, whereas the female students needed media to solve the problem. Besides sex, cognitive style also affected problem solving; FI students were not affected by distracting information, whereas FD students could be. This research is expected to contribute knowledge and insight to readers, especially mathematics teachers, regarding students' spatial ability so that they can optimize it.

  20. Microperimetry in patients with central serous retinopathy.

    PubMed

    Toonen, F; Remky, A; Janssen, V; Wolf, S; Reim, M

    1995-09-01

    In patients with acute central serous retinopathy (CSR), evaluation of visual acuity alone may not represent visual function. In patients with acute CSR, visual function may be disturbed by localized scotomas, distortion, and waviness. For the assessment of localized light sensitivity and stability of fixation, patients with CSR were evaluated by fundus perimetry with a scanning laser ophthalmoscope (SLO 101, Rodenstock Instruments). In all, 21 patients with acute CSR and 19 healthy volunteers were included in the study. Diagnosis of CSR was established by ophthalmoscopy and digital video fluorescein angiography. All patients and volunteers underwent static suprathreshold perimetry with the SLO. Light sensitivity was quantified by presenting stimuli with different light intensities (intensity, 0-27.9 dB above background; size, Goldmann III; wavelength, 633 nm) using an automatic staircase strategy. Stimuli were presented with simultaneous real-time monitoring of the retina. Fixation stability was quantified by measuring the area encompassing 75% of all points of fixation. Light sensitivity was 18-20 dB in affected areas, whereas in healthy eyes and outside the affected area, values of 22-24 dB were obtained. Fixation stability was significantly decreased in the affected eye as compared with normal eyes (33 +/- 12 versus 21 +/- 4 min of arc; P < 0.01). Static perimetry with an SLO is a useful technique for the assessment of localized light sensitivity and fixation stability in patients with macular disease. This technique could provide helpful information in the management of CSR.
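
    The automatic staircase strategy mentioned above can be sketched as a simple 1-up/1-down procedure; the parameter names and values are illustrative, not those of the SLO protocol:

```python
def staircase(respond, start_db=20.0, step_db=2.0, floor=0.0, ceiling=28.0,
              reversals=6):
    """Simple 1-up/1-down staircase: decrease stimulus intensity after a
    'seen' response, increase it after 'not seen', and stop after a fixed
    number of reversals. The threshold estimate is the mean of the
    reversal points."""
    level, direction, rev_points = start_db, None, []
    while len(rev_points) < reversals:
        seen = respond(level)
        new_dir = 'down' if seen else 'up'
        if direction is not None and new_dir != direction:
            rev_points.append(level)
        direction = new_dir
        level += -step_db if seen else step_db
        level = min(max(level, floor), ceiling)
    return sum(rev_points) / len(rev_points)

# A deterministic simulated observer who sees everything at or above 19 dB:
# the staircase oscillates around 19 dB and the reversal mean recovers it.
est = staircase(lambda db: db >= 19.0)
```

    Real observers respond probabilistically near threshold, so practical implementations add response-consistency rules (e.g., 2-down/1-up) and shrinking step sizes.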

  1. Emotion-prints: interaction-driven emotion visualization on multi-touch interfaces

    NASA Astrophysics Data System (ADS)

    Cernea, Daniel; Weber, Christopher; Ebert, Achim; Kerren, Andreas

    2015-01-01

    Emotions are one of the unique aspects of human nature, and sadly at the same time one of the elements that our technological world is failing to capture and consider due to their subtlety and inherent complexity. But with the current dawn of new technologies that enable the interpretation of emotional states based on techniques involving facial expressions, speech and intonation, electrodermal response (EDS) and brain-computer interfaces (BCIs), we are finally able to access real-time user emotions in various system interfaces. In this paper we introduce emotion-prints, an approach for visualizing user emotional valence and arousal in the context of multi-touch systems. Our goal is to offer a standardized technique for representing user affective states in the moment when and at the location where the interaction occurs in order to increase affective self-awareness, support awareness in collaborative and competitive scenarios, and offer a framework for aiding the evaluation of touch applications through emotion visualization. We show that emotion-prints are not only independent of the shape of the graphical objects on the touch display, but also that they can be applied regardless of the acquisition technique used for detecting and interpreting user emotions. Moreover, our representation can encode any affective information that can be decomposed or reduced to Russell's two-dimensional space of valence and arousal. Our approach is supported by a BCI-based user study and a follow-up discussion of advantages and limitations.
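
    As an illustration of encoding Russell's valence-arousal space visually, a minimal color mapping might look like the following; the specific hue/saturation scheme is an assumption for the sketch, not the paper's actual design:

```python
import colorsys

def emotion_print_color(valence, arousal):
    """Map a (valence, arousal) pair in [-1, 1] x [-1, 1] to an RGB color:
    hue runs from red (negative valence) to green (positive valence), and
    arousal drives saturation, so calm states fade toward gray.
    Illustrative encoding only."""
    assert -1 <= valence <= 1 and -1 <= arousal <= 1
    hue = (valence + 1) / 2 * (1 / 3)   # 0.0 = red ... 1/3 = green
    sat = (arousal + 1) / 2             # low arousal -> desaturated
    return colorsys.hsv_to_rgb(hue, sat, 1.0)
```

    A touch interface could then stamp each interaction point with this color, sized or faded over time, which is the general idea behind rendering affective state at the location of interaction.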

  2. Task-irrelevant distractors in the delay period interfere selectively with visual short-term memory for spatial locations.

    PubMed

    Marini, Francesco; Scott, Jerry; Aron, Adam R; Ester, Edward F

    2017-07-01

    Visual short-term memory (VSTM) enables the representation of information in a readily accessible state. VSTM is typically conceptualized as a form of "active" storage that is resistant to interference or disruption, yet several recent studies have shown that under some circumstances task-irrelevant distractors may indeed disrupt performance. Here, we investigated how task-irrelevant visual distractors affected VSTM by asking whether distractors induce a general loss of remembered information or selectively interfere with memory representations. In a VSTM task, participants recalled the spatial location of a target visual stimulus after a delay in which distractors were presented on 75% of trials. Notably, the distractor's eccentricity always matched the eccentricity of the target, while in the critical conditions the distractor's angular position was shifted either clockwise or counterclockwise relative to the target. We then computed estimates of recall error for both eccentricity and polar angle. A general interference model would predict an effect of distractors on both polar angle and eccentricity errors, while a selective interference model would predict effects of distractors on angle but not on eccentricity errors. Results showed that for stimulus angle there was an increase in the magnitude and variability of recall errors. However, distractors had no effect on estimates of stimulus eccentricity. Our results suggest that distractors selectively interfere with VSTM for spatial locations.
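
    Scoring recall separately for polar angle and eccentricity, as described above, can be sketched like this (the function and coordinates are illustrative):

```python
import math

def polar_errors(target_xy, response_xy):
    """Decompose a spatial recall error into an eccentricity (radial)
    error and an angular error in degrees, wrapped to [-180, 180),
    mirroring the logic of analyzing the two polar components
    independently."""
    tx, ty = target_xy
    rx, ry = response_xy
    ecc_err = math.hypot(rx, ry) - math.hypot(tx, ty)
    ang_err = math.degrees(math.atan2(ry, rx) - math.atan2(ty, tx))
    ang_err = (ang_err + 180) % 360 - 180   # wrap into [-180, 180)
    return ecc_err, ang_err
```

    Under selective interference, distractors shifted in angle should inflate only the angular error distribution while leaving eccentricity errors unchanged, which is the pattern the study reports.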

  3. Top-down control of visual perception: attention in natural vision.

    PubMed

    Rolls, Edmund T

    2008-01-01

    Top-down perceptual influences can bias (or pre-empt) perception. In natural scenes, the receptive fields of neurons in the inferior temporal visual cortex (IT) shrink to become close to the size of objects. This facilitates the read-out of information from the ventral visual system, because the information is primarily about the object at the fovea. Top-down attentional influences are much less evident in natural scenes than when objects are shown against blank backgrounds, though are still present. It is suggested that the reduced receptive-field size in natural scenes, and the effects of top-down attention contribute to change blindness. The receptive fields of IT neurons in complex scenes, though including the fovea, are frequently asymmetric around the fovea, and it is proposed that this is the solution the IT uses to represent multiple objects and their relative spatial positions in a scene. Networks that implement probabilistic decision-making are described, and it is suggested that, when in perceptual systems they take decisions (or 'test hypotheses'), they influence lower-level networks to bias visual perception. Finally, it is shown that similar processes extend to systems involved in the processing of emotion-provoking sensory stimuli, in that word-level cognitive states provide top-down biasing that reaches as far down as the orbitofrontal cortex, where, at the first stage of affective representations, olfactory, taste, flavour, and touch processing is biased (or pre-empted) in humans.

  4. Informing Regional Water-Energy-Food Nexus with System Analysis and Interactive Visualizations

    NASA Astrophysics Data System (ADS)

    Yang, Y. C. E.; Wi, S.

    2016-12-01

    Communicating scientific results to non-technical practitioners is challenging due to their differing interests, concerns, and agendas. It is further complicated by the growing number of relevant factors that need to be considered, such as climate change and demographic dynamics. Visualization is an effective method for the scientific community to disseminate results, and it represents an opportunity for the future of water resources systems analysis (WRSA). This study demonstrates an intuitive way to communicate WRSA results to practitioners using interactive web-based visualization tools built with the JavaScript library Data-Driven Documents (D3), with a case study in the Great Ruaha River of Tanzania. The decreasing trend of streamflow during the last decades in the region highlights the need to assess the competition for water between agricultural production, energy generation, and ecosystem services. Our team conducted water resources systems analysis to inform policy affecting the water-energy-food nexus. Modeling results are presented in the web-based visualization tools, which allow non-technical practitioners to brush the graphs directly (e.g., Figure 1). The WRSA suggests that no single measure can completely resolve the water competition. A combination of measures, each of which is acceptable from a social and economic perspective, together with accepting that zero flows cannot be totally eliminated during dry years in the wetland, is likely to be the best way forward.

  5. Research on robot mobile obstacle avoidance control based on visual information

    NASA Astrophysics Data System (ADS)

    Jin, Jiang

    2018-03-01

    Enabling robots to detect obstacles and avoid them has long been a key topic in robot control research. In this paper, a scheme for visual information acquisition is proposed. By interpreting the visual information, it is transformed into an information source for path processing. While following an established route, the algorithm adjusts the trajectory in real time when obstacles are encountered, achieving intelligent control of the mobile robot's movement. Simulation results show that, through the integration of visual sensing information, obstacle information is fully obtained while the real-time performance and accuracy of the robot's movement control are guaranteed.

  6. Face perception is tuned to horizontal orientation in the N170 time window.

    PubMed

    Jacques, Corentin; Schiltz, Christine; Goffaux, Valerie

    2014-02-07

    The specificity of face perception is thought to reside both in its dramatic vulnerability to picture-plane inversion and its strong reliance on horizontally oriented image content. Here we asked when in the visual processing stream face-specific perception is tuned to horizontal information. We measured the behavioral performance and scalp event-related potentials (ERP) when participants viewed upright and inverted images of faces and cars (and natural scenes) that were phase-randomized in a narrow orientation band centered either on vertical or horizontal orientation. For faces, the magnitude of the inversion effect (IE) on behavioral discrimination performance was significantly reduced for horizontally randomized compared to vertically or nonrandomized images, confirming the importance of horizontal information for the recruitment of face-specific processing. Inversion affected the processing of nonrandomized and vertically randomized faces early, in the N170 time window. In contrast, the magnitude of the N170 IE was much smaller for horizontally randomized faces. The present research indicates that the early face-specific neural representations are preferentially tuned to horizontal information and offers new perspectives for a description of the visual information feeding face-specific perception.
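
    The orientation-band phase randomization described above can be sketched with a 2D Fourier transform; this is an assumed implementation, and taking the real part after breaking Hermitian symmetry is a common shortcut rather than the authors' exact procedure:

```python
import numpy as np

def phase_randomize_orientation(img, center_deg, bandwidth_deg=10, seed=0):
    """Randomize Fourier phases only within a narrow orientation band
    (e.g., centered on horizontal or vertical), leaving amplitudes and
    all other orientations untouched. Sketch only; the real part is
    taken at the end as a shortcut for restoring a real-valued image."""
    rng = np.random.default_rng(seed)
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    fy, fx = np.meshgrid(np.arange(h) - h // 2, np.arange(w) - w // 2,
                         indexing='ij')
    # Orientation of each frequency component, folded into [0, 180)
    theta = np.degrees(np.arctan2(fy, fx)) % 180
    diff = np.minimum(np.abs(theta - center_deg),
                      180 - np.abs(theta - center_deg))
    # Select the band, excluding the DC component to preserve mean luminance
    band = (diff <= bandwidth_deg / 2) & ~((fx == 0) & (fy == 0))
    new_phase = rng.uniform(0, 2 * np.pi, size=F.shape)
    F[band] = np.abs(F[band]) * np.exp(1j * new_phase[band])
    return np.fft.ifft2(np.fft.ifftshift(F)).real
```

    Passing `center_deg=0` disrupts horizontal structure while `center_deg=90` disrupts vertical structure, which is the contrast the behavioral and ERP comparisons above rely on.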

  7. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    PubMed

    Qu, Jiagui; Rizak, Joshua D; Zhao, Lun; Li, Minghong; Ma, Yuanye

    2014-01-01

    Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.

  8. A Brief Period of Postnatal Visual Deprivation Alters the Balance between Auditory and Visual Attention.

    PubMed

    de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier

    2016-11-21

    Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.
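
    Miller's race-model inequality, mentioned above, bounds the audiovisual RT distribution by the sum of the unimodal distributions, G_AV(t) <= G_A(t) + G_V(t); a minimal sketch with empirical CDFs (toy RTs, not the study's data):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Evaluate the race-model inequality on a grid of times. Returns the
    maximum violation G_AV(t) - min(1, G_A(t) + G_V(t)); positive values
    indicate multisensory RTs faster than any race of the two unimodal
    channels can explain."""
    def ecdf(samples, t):
        samples = np.sort(np.asarray(samples))
        return np.searchsorted(samples, t, side='right') / len(samples)
    viol = [ecdf(rt_av, t) - min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
            for t in t_grid]
    return max(viol)
```

    A redundancy gain that merely reflects statistical facilitation stays at or below zero on this measure; the above-bound gains reported in both groups are what indicate genuine audiovisual integration.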

  9. Multisensory emotion perception in congenitally, early, and late deaf CI users

    PubMed Central

    Nava, Elena; Villwock, Agnes K.; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2017-01-01

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences. 
PMID:29023525

  10. Multisensory emotion perception in congenitally, early, and late deaf CI users.

    PubMed

    Fengler, Ineke; Nava, Elena; Villwock, Agnes K; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2017-01-01

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences.

  11. Mental Rotation of Tactical Instruction Displays Affects Information Processing Demand and Execution Accuracy in Basketball.

    PubMed

    Koopmann, Till; Steggemann-Weinrich, Yvonne; Baumeister, Jochen; Krause, Daniel

    2017-09-01

    In sports games, coaches often use tactic boards to present tactical instructions during time-outs (e.g., 20 s to 60 s in basketball). Instructions should be presented in a way that enables fast and errorless information processing for the players. The aim of this study was to test the effect of different orientations of visual tactical displays on observation time and execution performance. High affordances in visual-spatial transformation (e.g., mental rotation processes) might impede information processing and might decrease execution performance with regard to the instructed playing patterns. In a within-subjects design with 1 factor, 10 novice students were instructed with visual tactical instructions of basketball playing patterns with different orientations either showing the playing pattern with low spatial disparity to the players' on-court perspective (basket on top) or upside down (basket on bottom). The self-chosen time for watching the pattern before execution was significantly shorter and spatial accuracy in pattern execution was significantly higher when the instructional perspective and the real perspective on the basketball court had a congruent orientation. The effects might be explained by interfering mental rotation processes that are necessary to transform the instructional perspective into the players' actual perspective while standing on the court or imagining themselves standing on the court. According to these results, coaches should align their tactic boards to their players' on-court viewing perspective.

  12. Motion processing with two eyes in three dimensions.

    PubMed

    Rokers, Bas; Czuba, Thaddeus B; Cormack, Lawrence K; Huk, Alexander C

    2011-02-11

    The movement of an object toward or away from the head is perhaps the most critical piece of information an organism can extract from its environment. Such 3D motion produces horizontally opposite motions on the two retinae. Little is known about how or where the visual system combines these two retinal motion signals, relative to the wealth of knowledge about the neural hierarchies involved in 2D motion processing and binocular vision. Canonical conceptions of primate visual processing assert that neurons early in the visual system combine monocular inputs into a single cyclopean stream (lacking eye-of-origin information) and extract 1D ("component") motions; later stages then extract 2D pattern motion from the cyclopean output of the earlier stage. Here, however, we show that 3D motion perception is in fact affected by the comparison of opposite 2D pattern motions between the two eyes. Three-dimensional motion sensitivity depends systematically on pattern motion direction when dichoptically viewing gratings and plaids, and a novel "dichoptic pseudoplaid" stimulus provides strong support for use of interocular pattern motion differences by precluding potential contributions from conventional disparity-based mechanisms. These results imply the existence of eye-of-origin information in later stages of motion processing and therefore motivate the incorporation of such eye-specific pattern-motion signals in models of motion processing and binocular integration.

  13. Dementia

    MedlinePlus

    ... living. Functions affected include memory, language skills, visual perception, problem solving, self-management, and the ability to ...

  14. Effects of visual attention on chromatic and achromatic detection sensitivities.

    PubMed

    Uchikawa, Keiji; Sato, Masayuki; Kuwamura, Keiko

    2014-05-01

    Visual attention has a significant effect on various visual functions, such as response time, detection and discrimination sensitivity, and color appearance. It has been suggested that visual attention may affect visual functions in the early visual pathways. In this study we examined selective effects of visual attention on sensitivities of the chromatic and achromatic pathways to clarify whether visual attention modifies responses in the early visual system. We used a dual-task paradigm in which the observer detected a peripheral test stimulus presented at 4 deg eccentricity while concurrently carrying out an attention task in the central visual field. Experiment 1 confirmed that, under the central attention task, peripheral spectral sensitivities were reduced more for short and long wavelengths than for middle wavelengths, so that visual attention changed the shape of the spectral sensitivity function. This indicated that visual attention affected the chromatic response more strongly than the achromatic response. Experiment 2 showed that, in the dual-task condition, detection thresholds increased more in the red-green and yellow-blue chromatic directions than in the white-black achromatic direction. In experiment 3 we showed that the peripheral threshold elevations depended on the combination of color directions of the central and peripheral stimuli. Since chromatic and achromatic responses are processed separately in the early visual pathways, the present results provide additional evidence that visual attention affects responses in the early visual pathways.

  15. The Effects of Solid Modeling and Visualization on Technical Problem Solving

    ERIC Educational Resources Information Center

    Koch, Douglas

    2011-01-01

    The purpose of this study was to determine whether or not the use of solid modeling software increases participants' success in solving a specified technical problem and how visualization affects their ability to solve a technical problem. Specifically, the study sought to determine if (a) students' visualization skills affect their problem…

  16. Visual Working Memory Supports the Inhibition of Previously Processed Information: Evidence from Preview Search

    ERIC Educational Resources Information Center

    Al-Aidroos, Naseem; Emrich, Stephen M.; Ferber, Susanne; Pratt, Jay

    2012-01-01

    In four experiments we assessed whether visual working memory (VWM) maintains a record of previously processed visual information, allowing old information to be inhibited, and new information to be prioritized. Specifically, we evaluated whether VWM contributes to the inhibition (i.e., visual marking) of previewed distractors in a preview search.…

  17. 32 CFR 813.1 - Purpose of the visual information documentation (VIDOC) program.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 32 (National Defense), Volume 6, revised as of 2010-07-01. Section 813.1, Department of Defense (Continued), Department of the Air Force, Sales and Services, Visual Information Documentation Program: Purpose of the visual information documentation (VIDOC) program.

  18. Emotional Effects in Visual Information Processing

    DTIC Science & Technology

    2009-10-24

    Emotional Effects in Visual Information Processing. Contract FA4869-08-0004 (AOARD 074018), report dated October 24, 2009. The objective of this research project was to investigate how emotion influences visual information processing and the neural correlates of these effects.

  19. Fragile X Mental Retardation Protein Is Required to Maintain Visual Conditioning-Induced Behavioral Plasticity by Limiting Local Protein Synthesis

    PubMed Central

    Liu, Han-Hsuan

    2016-01-01

    Fragile X mental retardation protein (FMRP) is thought to regulate neuronal plasticity by limiting dendritic protein synthesis, but direct demonstration of a requirement for FMRP control of local protein synthesis during behavioral plasticity is lacking. Here we tested whether FMRP knockdown in Xenopus optic tectum affects local protein synthesis in vivo and whether FMRP knockdown affects protein synthesis-dependent visual avoidance behavioral plasticity. We tagged newly synthesized proteins by incorporation of the noncanonical amino acid azidohomoalanine and visualized them with fluorescent noncanonical amino acid tagging (FUNCAT). Visual conditioning and FMRP knockdown produce similar increases in FUNCAT in tectal neuropil. Induction of visual conditioning-dependent behavioral plasticity occurs normally in FMRP knockdown animals, but plasticity degrades over 24 h. These results indicate that FMRP affects visual conditioning-induced local protein synthesis and is required to maintain the visual conditioning-induced behavioral plasticity. SIGNIFICANCE STATEMENT Fragile X syndrome (FXS) is the most common form of inherited intellectual disability. Exaggerated dendritic protein synthesis resulting from loss of fragile X mental retardation protein (FMRP) is thought to underlie cognitive deficits in FXS, but no direct evidence has demonstrated that FMRP-regulated dendritic protein synthesis affects behavioral plasticity in intact animals. Xenopus tadpoles exhibit a visual avoidance behavior that improves with visual conditioning in a protein synthesis-dependent manner. We showed that FMRP knockdown and visual conditioning dramatically increase protein synthesis in neuronal processes. Furthermore, induction of visual conditioning-dependent behavioral plasticity occurs normally after FMRP knockdown, but performance rapidly deteriorated in the absence of FMRP. 
These studies show that FMRP negatively regulates local protein synthesis and is required to maintain visual conditioning-induced behavioral plasticity in vivo. PMID:27383604

  20. Fragile X Mental Retardation Protein Is Required to Maintain Visual Conditioning-Induced Behavioral Plasticity by Limiting Local Protein Synthesis.

    PubMed

    Liu, Han-Hsuan; Cline, Hollis T

    2016-07-06

    Fragile X mental retardation protein (FMRP) is thought to regulate neuronal plasticity by limiting dendritic protein synthesis, but direct demonstration of a requirement for FMRP control of local protein synthesis during behavioral plasticity is lacking. Here we tested whether FMRP knockdown in Xenopus optic tectum affects local protein synthesis in vivo and whether FMRP knockdown affects protein synthesis-dependent visual avoidance behavioral plasticity. We tagged newly synthesized proteins by incorporation of the noncanonical amino acid azidohomoalanine and visualized them with fluorescent noncanonical amino acid tagging (FUNCAT). Visual conditioning and FMRP knockdown produce similar increases in FUNCAT in tectal neuropil. Induction of visual conditioning-dependent behavioral plasticity occurs normally in FMRP knockdown animals, but plasticity degrades over 24 h. These results indicate that FMRP affects visual conditioning-induced local protein synthesis and is required to maintain the visual conditioning-induced behavioral plasticity. Fragile X syndrome (FXS) is the most common form of inherited intellectual disability. Exaggerated dendritic protein synthesis resulting from loss of fragile X mental retardation protein (FMRP) is thought to underlie cognitive deficits in FXS, but no direct evidence has demonstrated that FMRP-regulated dendritic protein synthesis affects behavioral plasticity in intact animals. Xenopus tadpoles exhibit a visual avoidance behavior that improves with visual conditioning in a protein synthesis-dependent manner. We showed that FMRP knockdown and visual conditioning dramatically increase protein synthesis in neuronal processes. Furthermore, induction of visual conditioning-dependent behavioral plasticity occurs normally after FMRP knockdown, but performance rapidly deteriorated in the absence of FMRP. 
These studies show that FMRP negatively regulates local protein synthesis and is required to maintain visual conditioning-induced behavioral plasticity in vivo. Copyright © 2016 the authors 0270-6474/16/367325-15$15.00/0.

  1. Web-GIS-based SARS epidemic situation visualization

    NASA Astrophysics Data System (ADS)

    Lu, Xiaolin

    2004-03-01

    To support research, statistical analysis, and dissemination of SARS epidemic information in its spatial context, this paper proposes a unified global visualization information platform for the SARS epidemic situation based on Web-GIS and scientific visualization technology. The platform adopts the architecture of a Web-GIS-based interoperable information system, enabling the public to report SARS case information to health care centers visually through web visualization technology. A GIS Java applet visualizes the relationship between spatial graphical data and virus distribution, while other web-based graphics, such as curves, bar charts, maps, and multi-dimensional figures, visualize how the SARS virus trends with time, patient numbers, or locations. The platform is designed to display SARS information in real time, visually simulate the actual epidemic situation, and offer analysis tools that support decision-making by health departments and policy-making government agencies in preventing the spread of the SARS epidemic virus. It could be used to analyze the epidemic through a visual graphical interface, isolate the areas around virus sources, and bring the epidemic under control in the shortest time. It could be applied to SARS-prevention systems for information broadcasting, data management, statistical analysis, and decision support.

  2. The dorsal raphe modulates sensory responsiveness during arousal in zebrafish

    PubMed Central

    Yokogawa, Tohei; Hannan, Markus C.; Burgess, Harold A.

    2012-01-01

    During waking behavior animals adapt their state of arousal in response to environmental pressures. Sensory processing is regulated in aroused states and several lines of evidence imply that this is mediated at least partly by the serotonergic system. However there is little information directly showing that serotonergic function is required for state-dependent modulation of sensory processing. Here we find that zebrafish larvae can maintain a short-term state of arousal during which neurons in the dorsal raphe modulate sensory responsiveness to behaviorally relevant visual cues. Following a brief exposure to water flow, larvae show elevated activity and heightened sensitivity to perceived motion. Calcium imaging of neuronal activity after flow revealed increased activity in serotonergic neurons of the dorsal raphe. Genetic ablation of these neurons abolished the increase in visual sensitivity during arousal without affecting baseline visual function or locomotor activity. We traced projections from the dorsal raphe to a major visual area, the optic tectum. Laser ablation of the tectum demonstrated that this structure, like the dorsal raphe, is required for improved visual sensitivity during arousal. These findings reveal that serotonergic neurons of the dorsal raphe have a state-dependent role in matching sensory responsiveness to behavioral context. PMID:23100441

  3. Non-lane-discipline-based car-following model under honk environment

    NASA Astrophysics Data System (ADS)

    Rong, Ying; Wen, Huiying

    2018-04-01

    This study proposes a non-lane-discipline-based car-following model that jointly considers drivers' visual angles and their timid/aggressive characteristics under a honk environment. We first derived the neutral stability condition using linear stability theory. It showed that the parameters related to visual angles and driving characteristics under the honk environment all have a significant impact on the stability of non-lane-discipline traffic flow. To better understand the underlying mechanism, we further analyzed how each parameter affects the traffic flow and gained insight into how visual-angle information influences the other parameters and, in turn, the non-lane-discipline traffic flow under the honk environment. The results showed that the other aspects, such as drivers' characteristics and the honk effect, all interact with the visual-angle factor, so the effect of visual angle cannot be reduced to simply enlarging or shrinking the stable region, as in existing studies. Finally, to verify the proposed model, we carried out numerical simulations under periodic boundary conditions; the simulation results agree well with the theoretical findings.

  4. Visual sensation during pecking in pigeons.

    PubMed

    Ostheim, J

    1997-10-01

    During the final down-thrust of a pigeon's head, the eyes are closed gradually, a response that was thought to block visual input. This phase of pecking was therefore assumed to be under exclusively feed-forward control. Analysis of high-resolution video recordings showed that visual information collected during the down-thrust of the head could be used for 'on-line' modulations of pecks in progress. We thus concluded that the final down-thrust of the head is controlled not exclusively by feed-forward mechanisms but also by visual feedback components. We could further establish that, as a rule, the eyes are never closed completely; instead the eyelids form a slit which leaves part of the pupil uncovered. The width of the slit between the pigeon's eyelids is highly sensitive to both ambient luminance and the visual background against which seeds are offered. It was concluded that eyelid slits increase the focal depth of retinal images under extreme near-field viewing conditions. Using pharmacological methods, we confirmed that pupil size and eyelid slit width are controlled through conjoint neuronal mechanisms. This shared neuronal network is particularly sensitive to drugs that affect dopamine receptors.

  5. Your visual system provides all the information you need to make moral judgments about generic visual events.

    PubMed

    De Freitas, Julian; Alvarez, George A

    2018-05-28

    To what extent are people's moral judgments susceptible to subtle factors of which they are unaware? Here we show that we can change people's moral judgments outside of their awareness by subtly biasing perceived causality. Specifically, we used subtle visual manipulations to create visual illusions of causality in morally relevant scenarios, and this systematically changed people's moral judgments. After demonstrating the basic effect using simple displays involving an ambiguous car collision that ends up injuring a person (E1), we show that the effect is sensitive on the millisecond timescale to manipulations of task-irrelevant factors that are known to affect perceived causality, including the duration (E2a) and asynchrony (E2b) of specific task-irrelevant contextual factors in the display. We then conceptually replicate the effect using a different paradigm (E3a), and also show that we can eliminate the effect by interfering with motion processing (E3b). Finally, we show that the effect generalizes across different kinds of moral judgments (E3c). Combined, these studies show that obligatory, abstract inferences made by the visual system influence moral judgments. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. A sublethal dose of a neonicotinoid insecticide disrupts visual processing and collision avoidance behaviour in Locusta migratoria.

    PubMed

    Parkinson, Rachel H; Little, Jacelyn M; Gray, John R

    2017-04-20

    Neonicotinoids are known to affect insect navigation and vision; however, the mechanisms of these effects are not fully understood. A visual motion sensitive neuron in the locust, the Descending Contralateral Movement Detector (DCMD), integrates visual information and is involved in eliciting escape behaviours. The DCMD receives coded input from the compound eyes and monosynaptically excites motorneurons involved in flight and jumping. We show that imidacloprid (IMD) impairs neural responses to visual stimuli at sublethal concentrations, and these effects are sustained two and twenty-four hours after treatment. Most significantly, IMD disrupted bursting, a coding property important for motion detection. Specifically, IMD reduced the DCMD peak firing rate within bursts at ecologically relevant doses of 10 ng/g (ng IMD per g locust body weight). Effects on DCMD firing translate to deficits in collision avoidance behaviours: exposure to 10 ng/g IMD attenuates escape manoeuvers while 100 ng/g IMD prevents the ability to fly and walk. We show that, at ecologically relevant doses, IMD causes significant and lasting impairment of an important pathway involved with visual sensory coding and escape behaviours. These results show, for the first time, that a neonicotinoid pesticide directly impairs an important, taxonomically conserved, motion-sensitive visual network.

  7. Energetically optimal travel across terrain: visualizations and a new metric of geographic distance with anthropological applications

    NASA Astrophysics Data System (ADS)

    Wood, Brian M.; Wood, Zoë J.

    2006-01-01

    We present a visualization and computation tool for modeling the caloric cost of pedestrian travel across three dimensional terrains. This tool is being used in ongoing archaeological research that analyzes how costs of locomotion affect the spatial distribution of trails and artifacts across archaeological landscapes. Throughout human history, traveling by foot has been the most common form of transportation, and therefore analyses of pedestrian travel costs are important for understanding prehistoric patterns of resource acquisition, migration, trade, and political interaction. Traditionally, archaeologists have measured geographic proximity based on "as the crow flies" distance. We propose new methods for terrain visualization and analysis based on measuring paths of least caloric expense, calculated using well established metabolic equations. Our approach provides a human centered metric of geographic closeness, and overcomes significant limitations of available Geographic Information System (GIS) software. We demonstrate such path computations and visualizations applied to archaeological research questions. Our system includes tools to visualize: energetic cost surfaces, comparisons of the elevation profiles of shortest paths versus least cost paths, and the display of paths of least caloric effort on Digital Elevation Models (DEMs). These analysis tools can be applied to calculate and visualize 1) likely locations of prehistoric trails and 2) expected ratios of raw material types to be recovered at archaeological sites.
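    The least-caloric-cost computation described above can be sketched as Dijkstra's algorithm over a DEM grid, where each step is weighted by an energetic cost instead of raw distance. A minimal sketch: the `walking_energy_kcal` function below is a simplified illustrative stand-in, not the published metabolic equations the authors actually used, and the coefficients are assumptions.

```python
import heapq

def walking_energy_kcal(dist_m, slope):
    """Illustrative per-step caloric cost: a flat-ground term plus a slope
    penalty. A simplified stand-in for established metabolic equations;
    the coefficients here are assumed, not taken from the paper."""
    base = 0.05 * dist_m                 # assumed ~kcal per metre on level ground
    grade_penalty = 0.5 * abs(slope) * dist_m
    return base + grade_penalty

def least_cost_path(dem, start, goal, cell_m=30.0):
    """Dijkstra over a DEM grid (list of rows of elevations in metres),
    minimising summed caloric cost rather than 'as the crow flies' distance."""
    rows, cols = len(dem), len(dem[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1),
                       (1, 1), (1, -1), (-1, 1), (-1, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = cell_m * (2 ** 0.5 if dr and dc else 1.0)
                slope = (dem[nr][nc] - dem[r][c]) / step
                nd = d + walking_energy_kcal(step, slope)
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk predecessor links back from the goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]
```

    With a cost function like this, a flat detour around a steep rise beats the geometrically shorter route over it, which is exactly the difference between least-cost and "as the crow flies" proximity that the abstract emphasizes.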

  8. The forward masking effects of low-level laser glare on target location performance in a visual search task

    NASA Astrophysics Data System (ADS)

    Reddix, M. D.; Dandrea, J. A.; Collyer, P. D.

    1992-01-01

    The present study examined the effects of low-intensity laser glare, far below a level that would cause ocular damage or flashblindness, on the visually guided performance of aviators. Using a forward-masking paradigm, this study showed that the time at which laser glare is experienced, relative to initial acquisition of visual information, differentially affects the speed and accuracy of target-location performance. Brief exposure (300 ms) to laser glare, terminating with a visual scene's onset, produced significant decrements in target-location performance relative to a no-glare control, whereas 150- and 300-ms delays of display onset (DDO) had very little effect. The intensity of the light entering the eye and producing these effects was far below the Maximum Permissible Exposure (MPE) limit for safe viewing of coherent light produced by an argon laser. In addition, these effects were modulated by the distance of the target from the center of the visual display. This study demonstrated that the presence of laser glare is not sufficient, in and of itself, to diminish target-location performance. The time at which laser glare is experienced is an important factor in determining the probability and extent of visually mediated performance decrements.

  9. Artful terms: A study on aesthetic word usage for visual art versus film and music.

    PubMed

    Augustin, M Dorothee; Carbon, Claus-Christian; Wagemans, Johan

    2012-01-01

    Despite the importance of the arts in human life, psychologists still know relatively little about what characterises their experience for the recipient. The current research approaches this problem by studying people's word usage in aesthetics, with a focus on three important art forms: visual art, film, and music. The starting point was a list of 77 words known to be useful to describe aesthetic impressions of visual art (Augustin et al 2012, Acta Psychologica 139 187-201). Focusing on ratings of likelihood of use, we examined to what extent word usage in aesthetic descriptions of visual art can be generalised to film and music. The results support the claim of an interplay of generality and specificity in aesthetic word usage. Terms with equal likelihood of use for all art forms included beautiful, wonderful, and terms denoting originality. Importantly, emotion-related words received higher ratings for film and music than for visual art. To our knowledge this is direct evidence that aesthetic experiences of visual art may be less affectively loaded than, for example, experiences of music. The results render important information about aesthetic word usage in the realm of the arts and may serve as a starting point to develop tailored measurement instruments for different art forms.

  10. Artful terms: A study on aesthetic word usage for visual art versus film and music

    PubMed Central

    Augustin, M Dorothee; Carbon, Claus-Christian; Wagemans, Johan

    2012-01-01

    Despite the importance of the arts in human life, psychologists still know relatively little about what characterises their experience for the recipient. The current research approaches this problem by studying people's word usage in aesthetics, with a focus on three important art forms: visual art, film, and music. The starting point was a list of 77 words known to be useful to describe aesthetic impressions of visual art (Augustin et al 2012, Acta Psychologica 139 187–201). Focusing on ratings of likelihood of use, we examined to what extent word usage in aesthetic descriptions of visual art can be generalised to film and music. The results support the claim of an interplay of generality and specificity in aesthetic word usage. Terms with equal likelihood of use for all art forms included beautiful, wonderful, and terms denoting originality. Importantly, emotion-related words received higher ratings for film and music than for visual art. To our knowledge this is direct evidence that aesthetic experiences of visual art may be less affectively loaded than, for example, experiences of music. The results render important information about aesthetic word usage in the realm of the arts and may serve as a starting point to develop tailored measurement instruments for different art forms. PMID:23145287

  11. Multisensory integration across the senses in young and old adults

    PubMed Central

    Mahoney, Jeannette R.; Li, Po Ching Clara; Oh-Park, Mooyeon; Verghese, Joe; Holtzer, Roee

    2011-01-01

    Stimuli are processed concurrently and across multiple sensory inputs. Here we directly compared the effect of multisensory integration (MSI) on reaction time across three paired sensory inputs in eighteen young (M=19.17 yrs) and eighteen old (M=76.44 yrs) individuals. Participants were determined to be non-demented and without any medical or psychiatric conditions that would affect their performance. Participants responded to randomly presented unisensory (auditory, visual, somatosensory) stimuli and three paired sensory inputs consisting of auditory-somatosensory (AS), auditory-visual (AV), and visual-somatosensory (VS) stimuli. Results revealed that reaction times (RTs) to all multisensory pairings were significantly faster than those elicited by the constituent unisensory conditions across age groups; findings that could not be accounted for by simple probability summation. Both young and old participants responded the fastest to multisensory pairings containing somatosensory input. Compared to younger adults, older adults demonstrated a significantly greater RT benefit when processing concurrent VS information. In terms of co-activation, older adults demonstrated a significant increase in the magnitude of visual-somatosensory co-activation (i.e., multisensory integration), while younger adults demonstrated a significant increase in the magnitude of auditory-visual and auditory-somatosensory co-activation. This study provides the first evidence in support of the facilitative effect of pairing somatosensory with visual stimuli in older adults. PMID:22024545
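    The claim that multisensory speed-ups "could not be accountedted for by simple probability summation" is conventionally tested with Miller's race-model inequality, which bounds the multisensory cumulative RT distribution by the sum of the unisensory ones. A minimal sketch of that test (function names are illustrative; this is not the paper's analysis code):

```python
def ecdf(rts, t):
    """Empirical cumulative probability of having responded by time t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violation(rt_a, rt_b, rt_ab, times):
    """Miller's race-model inequality: P_AB(T<=t) <= P_A(T<=t) + P_B(T<=t).
    Returns, for each probe time, the amount by which the bimodal ECDF
    exceeds the (capped) sum of the unimodal ECDFs. Positive values mean
    facilitation beyond probability summation, i.e., co-activation."""
    return [ecdf(rt_ab, t) - min(1.0, ecdf(rt_a, t) + ecdf(rt_b, t))
            for t in times]
```

    In practice the inequality is evaluated at quantiles of the RT distributions; any reliably positive violation is taken as evidence for neural co-activation rather than a race between independent unisensory channels.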

  12. Gender differences in identifying emotions from auditory and visual stimuli.

    PubMed

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples, and prolonged vowels were investigated. We also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey it without auditory stimuli. The aim was to gain a better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was conveyed best by nonsense sentences, better than by prolonged vowels or by a native language shared by the speakers and participants. Thus, vocal nonverbal communication tends to affect the interpretation of emotion even in the absence of language. Both genders recognized the emotional stimuli better from visual than from auditory stimuli. Visual information about speech may not be tied to the language; instead, it may rest on the human ability to understand the kinetic movements of speech production more readily than the characteristics of the acoustic cues.

  13. Different source image fusion based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Xiao; Piao, Yan

    2016-03-01

    Video image fusion combines video obtained by different image sensors so that the sensors complement each other, yielding video that is rich in information and well suited to the human visual system. Infrared cameras penetrate harsh conditions such as smoke, fog, and low light, but capture image detail poorly and do not match the human visual system; visible-light imaging produces detailed, high-resolution images suited to the visual system, but is easily affected by the external environment. Fusing infrared and visible video involves fusion algorithms of high complexity and computational load that occupy substantial memory and demand high clock rates; such fusion is therefore usually implemented in software (e.g., C or C++) and rarely on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible images, software and hardware are combined: the registration parameters are obtained in software using MATLAB, and the gray-level weighted average method is implemented on the hardware platform to perform the information fusion. The resulting fused image effectively improves the acquisition of information, increasing the amount of information in the image.
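    The gray-level weighted average method named above reduces, per pixel, to a convex combination of the two registered frames. A minimal software sketch of that operation (the paper implements it on an FPGA; function name and the equal-weight default are illustrative):

```python
def fuse_gray_weighted(ir, vis, alpha=0.5):
    """Pixel-wise weighted average of a registered infrared / visible image
    pair: fused = alpha*IR + (1-alpha)*VIS. Inputs are equal-sized 2D lists
    of 0-255 gray levels; output is clamped to the same range."""
    return [[min(255, round(alpha * p_ir + (1 - alpha) * p_vis))
             for p_ir, p_vis in zip(row_ir, row_vis)]
            for row_ir, row_vis in zip(ir, vis)]
```

    The method's appeal for hardware is exactly this simplicity: one multiply-accumulate per pixel, no frame buffering beyond the current line, which keeps memory use and clock-rate demands low compared with multiscale fusion schemes.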

  14. Visual disability and quality of life in glaucoma patients.

    PubMed

    Cesareo, Massimo; Ciuffoletti, Elena; Ricci, Federico; Missiroli, Filippo; Giuliano, Mario Alberto; Mancino, Raffaele; Nucci, Carlo

    2015-01-01

    Glaucoma is an optic neuropathy that can result in progressive and irreversible vision loss, thereby affecting quality of life (QoL) of patients. Several studies have shown a strong correlation between visual field damage and visual disability in patients with glaucoma, even in the early stages of the disease. Visual impairment due to glaucoma affects normal daily activities required for independent living, such as driving, walking, and reading. There is no generally accepted instrument for assessing quality of life in glaucoma patients; different factors involved in visual disability from the disease are difficult to quantify and not easily standardized. This chapter summarizes recent works from clinical and epidemiological studies, which describe how glaucoma affects the performance of important vision-related activities and QoL. © 2015 Elsevier B.V. All rights reserved.

  15. Temporal Processing Capacity in High-Level Visual Cortex Is Domain Specific.

    PubMed

    Stigliani, Anthony; Weiner, Kevin S; Grill-Spector, Kalanit

    2015-09-09

    Prevailing hierarchical models propose that temporal processing capacity--the amount of information that a brain region processes in a unit time--decreases at higher stages in the ventral stream regardless of domain. However, it is unknown if temporal processing capacities are domain general or domain specific in human high-level visual cortex. Using a novel fMRI paradigm, we measured temporal capacities of functional regions in high-level visual cortex. Contrary to hierarchical models, our data reveal domain-specific processing capacities as follows: (1) regions processing information from different domains have differential temporal capacities within each stage of the visual hierarchy and (2) domain-specific regions display the same temporal capacity regardless of their position in the processing hierarchy. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. Notably, domain-specific temporal processing capacities are not apparent in V1 and have perceptual implications. Behavioral testing revealed that the encoding capacity of body images is higher than that of characters, faces, and places, and there is a correspondence between peak encoding rates and cortical capacities for characters and bodies. The present evidence supports a model in which the natural statistics of temporal information in the visual world may affect domain-specific temporal processing and encoding capacities. These findings suggest that the functional organization of high-level visual cortex may be constrained by temporal characteristics of stimuli in the natural world, and this temporal capacity is a characteristic of domain-specific networks in high-level visual cortex. Significance statement: Visual stimuli bombard us at different rates every day. For example, words and scenes are typically stationary and vary at slow rates. 
In contrast, bodies are dynamic and typically change at faster rates. Using a novel fMRI paradigm, we measured temporal processing capacities of functional regions in human high-level visual cortex. Contrary to prevailing theories, we find that different regions have different processing capacities, which have behavioral implications. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. These results suggest that temporal processing capacity is a characteristic of domain-specific networks in high-level visual cortex and contributes to the segregation of cortical regions.

  16. Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion

    PubMed Central

    Fajen, Brett R.; Matthis, Jonathan S.

    2013-01-01

    Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983
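The "factoring out" described above can be illustrated as vector subtraction: the object-motion component is what remains after removing the flow predicted from self-motion. A toy numerical sketch of the principle, not the study's analysis code; the variable names are invented here:

```python
import numpy as np

# Total retinal (optic flow) motion of an object = self-motion component
# + object-motion component. If the observer can estimate its own motion
# (from visual and/or non-visual information), the object's movement
# relative to the stationary environment is recovered by subtraction.
total_flow = np.array([3.0, 1.0])        # observed image motion (deg/s)
self_motion_flow = np.array([2.0, 0.0])  # flow predicted from estimated self-motion
object_motion = total_flow - self_motion_flow  # world-relative object motion
```

Applying a gain below 1 to `self_motion_flow` before subtracting would model the partial compensation the experiments report, where the effect of visual self-motion information was smaller than full reliance would predict.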

  17. Perception and control of rotorcraft flight

    NASA Technical Reports Server (NTRS)

    Owen, Dean H.

    1991-01-01

Three topics which can be applied to rotorcraft flight are examined: (1) the nature of visual information; (2) what visual information is informative about; and (3) the control of visual information. The anchorage of visual perception is defined as the distribution of structure in the surrounding optical array or the distribution of optical structure over the retinal surface. A debate was provoked about whether the referent of visual event perception, and in turn control, is optical motion, kinetics, or dynamics. The interface of control theory and visual perception is also considered. The relationships among these problems are the basis of this article.

  18. Unisensory processing and multisensory integration in schizophrenia: A high-density electrical mapping study

    PubMed Central

    Stone, David B.; Urrea, Laura J.; Aine, Cheryl J.; Bustillo, Juan R.; Clark, Vincent P.; Stephen, Julia M.

    2011-01-01

    In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. PMID:21807011

  19. Making the invisible visible: verbal but not visual cues enhance visual detection.

    PubMed

    Lupyan, Gary; Spivey, Michael J

    2010-07-07

Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
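The sensitivity measure d' reported above comes from signal detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch of the standard computation (an illustration, not the authors' analysis code), using a log-linear correction so extreme rates of 0 or 1 stay finite:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

    Adds 0.5 to each count (log-linear correction) so that perfect
    or zero rates do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: 45/50 hits on letter-present trials,
# 10/50 false alarms on letter-absent trials
sensitivity = d_prime(45, 5, 10, 40)
```

A cue that raises d' (as the verbal cue did here) improves genuine discriminability, whereas a pure response-bias shift would move hit and false-alarm rates together and leave d' unchanged.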

  20. Advancing Water Science through Data Visualization

    NASA Astrophysics Data System (ADS)

    Li, X.; Troy, T.

    2014-12-01

As water scientists, we are increasingly handling larger and larger datasets with many variables, making it easy to lose ourselves in the details. Advanced data visualization will play an increasingly significant role in propelling the development of water science in research, economy, policy and education. It can enable analysis within research and further data scientists' understanding of behavior and processes, and it can potentially affect how the public, whom we often want to inform, understands our work. Unfortunately for water scientists, data visualization is approached in an ad hoc manner when a more formal methodology or understanding could significantly improve both research within the academy and outreach to the public. First, to broaden and deepen scientific understanding, data visualization allows more targets of analysis to be processed simultaneously and can represent variables effectively, revealing patterns, trends and relationships; it can thus even open new research directions or branches of water science. With visualization, we can more clearly detect and separate pivotal from trivial influencing factors when abstracting the original complex target system. By providing direct visual perception of the differences between observations and model predictions, data visualization allows researchers to quickly examine model quality in water science. Second, data visualization can also improve public awareness and perhaps influence behavior. By offering decision makers clearer perspectives on the potential value of water, data visualization can amplify the economic impact of water science and also increase relevant employment. By providing policymakers with compelling visuals of the role of water in social and natural systems, data visualization can advance water management and water-conservation legislation. By letting the public build their own data visualizations through apps and games about water science, people can absorb knowledge about water indirectly and become more aware of water problems.

  1. Psychological distress and visual functioning in relation to vision-related disability in older individuals with cataracts.

    PubMed

    Walker, J G; Anstey, K J; Lord, S R

    2006-05-01

To determine whether demographic, health status and psychological functioning measures, in addition to impaired visual acuity, are related to vision-related disability. Participants were 105 individuals (mean age 73.7 years) with cataracts requiring surgery and corrected visual acuity in the better eye of 6/24 to 6/36, recruited from waiting lists at three public out-patient ophthalmology clinics. Visual disability was measured with the Visual Functioning-14 survey. Visual acuity was assessed using better and worse eye logMAR scores and the Melbourne Edge Test (MET) for edge contrast sensitivity. Data relating to demographic information, depression, anxiety and stress, health care and medication use and numbers of co-morbid conditions were obtained. Principal component analysis revealed four meaningful factors that accounted for 75% of the variance in visual disability: recreational activities, reading and fine work, activities of daily living and driving behaviour. Multiple regression analyses determined that visual acuity variables were the only significant predictors of overall vision-related functioning and difficulties with reading and fine work. For the remaining visual disability domains, non-visual factors were also significant predictors. Difficulties with recreational activities were predicted by stress, as well as worse eye visual acuity, and difficulties with activities of daily living were associated with self-reported health status, age and depression as well as MET contrast scores. Driving behaviour was associated with sex (with fewer women driving), depression, anxiety and stress scores, and MET contrast scores. Vision-related disability is common in older individuals with cataracts. In addition to visual acuity, demographic, psychological and health status factors influence the severity of vision-related disability, affecting recreational activities, activities of daily living and driving.

  2. An infrared/video fusion system for military robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, A.W.; Roberts, R.S.

    1997-08-05

Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information for robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images. They are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
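The balancing act described above, letting infrared enhance rather than obfuscate the visual image, can be sketched as a per-pixel weighted blend. This is an illustrative assumption about the simplest possible fusion rule, not the system's actual algorithm; `fuse_images` and `ir_weight` are names invented here:

```python
import numpy as np

def fuse_images(visual, infrared, ir_weight=0.3):
    """Blend a grayscale visual frame with a registered infrared frame.

    The infrared image is kept at a low default weight so that it
    enhances rather than dominates the visual image. Both inputs are
    float arrays in [0, 1] with the same shape.
    """
    if visual.shape != infrared.shape:
        raise ValueError("frames must be registered to the same geometry")
    fused = (1.0 - ir_weight) * visual + ir_weight * infrared
    return np.clip(fused, 0.0, 1.0)

# Example: a dark (smoke-obscured) visual frame with one IR hotspot
visual = np.zeros((4, 4))
infrared = np.zeros((4, 4))
infrared[1, 1] = 1.0  # a warm target invisible in the visual band
fused = fuse_images(visual, infrared)
```

In practice a fixed global weight is crude; multiresolution or saliency-weighted fusion keeps local contrast better, but the weighting trade-off it must solve is the same one the abstract describes.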

  3. A Notation for Rapid Specification of Information Visualization

    ERIC Educational Resources Information Center

    Lee, Sang Yun

    2013-01-01

    This thesis describes a notation for rapid specification of information visualization, which can be used as a theoretical framework of integrating various types of information visualization, and its applications at a conceptual level. The notation is devised to codify the major characteristics of data/visual structures in conventionally-used…

  4. Gender-specific effects of emotional modulation on visual temporal order thresholds.

    PubMed

    Liang, Wei; Zhang, Jiyuan; Bao, Yan

    2015-09-01

Emotions affect temporal information processing in the low-frequency time window of a few seconds, but little is known about their effect in the high-frequency domain of some tens of milliseconds. The present study aims to investigate whether negative and positive emotional states influence the ability to discriminate the temporal order of visual stimuli, and whether gender plays a role in temporal processing. Due to the hemispheric lateralization of emotion, a hemispheric asymmetry between the left and the right visual field might be expected. Using a block design, subjects were primed with neutral, negative and positive emotional pictures before performing temporal order judgment tasks. Results showed that male subjects exhibited similarly reduced order thresholds under negative and positive emotional states, while female subjects demonstrated an increased threshold under a positive emotional state and a reduced threshold under a negative emotional state. In addition, emotions influenced female subjects more strongly than male subjects, and no hemispheric lateralization was observed. These observations indicate an influence of emotional states on temporal order processing of visual stimuli, and they suggest a gender difference, possibly associated with differences in emotional stability.

  5. Associated impairment of the categories of conspecifics and biological entities: cognitive and neuroanatomical aspects of a new case.

    PubMed

    Capitani, Erminio; Chieppa, Francesca; Laiacona, Marcella

    2010-05-01

    Case A.C.A. presented an associated impairment of visual recognition and semantic knowledge for celebrities and biological objects. This case was relevant for (a) the neuroanatomical correlations, and (b) the relationship between visual recognition and semantics within the biological domain and the conspecifics domain. A.C.A. was not affected by anterior temporal damage. Her bilateral vascular lesions were localized on the medial and inferior temporal gyrus on the right and on the intermediate fusiform gyrus on the left, without concomitant lesions of the parahippocampal gyrus or posterior fusiform. Data analysis was based on a novel methodology developed to estimate the rate of stored items in the visual structural description system (SDS) or in the face recognition unit. For each biological object, no particular correlation was found between the visual information accessed through the semantic system and that tapped by the picture reality judgement. Findings are discussed with reference to whether a putative resource commonality is likely between biological objects and conspecifics, and whether or not either category may depend on an exclusive neural substrate.

  6. Motor learning and working memory in children born preterm: a systematic review.

    PubMed

    Jongbloed-Pereboom, Marjolein; Janssen, Anjo J W M; Steenbergen, Bert; Nijhuis-van der Sanden, Maria W G

    2012-04-01

Children born preterm have a higher risk for developing motor, cognitive, and behavioral problems. Motor problems can occur in combination with working memory problems, and working memory is important for explicit learning of motor skills. The relation between motor learning and working memory has never been reviewed. The goal of this review was to provide an overview of motor learning, visual working memory and the role of working memory in motor learning in preterm children. A systematic review conducted in four databases identified 38 relevant articles, which were evaluated for methodological quality. Only 4 of 38 articles discussed motor learning in preterm children. Thirty-four studies reported on visual working memory; preterm birth affected performance on visual working memory tests. Information regarding motor learning and the role of working memory in the different components of motor learning was not available. Future research should address this issue. Insight into the relation between motor learning and visual working memory may contribute to the development of evidence-based intervention programs for children born preterm.

  7. Intercepting a sound without vision

    PubMed Central

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

Visual information is extremely important to generate internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals’ performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a little bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  8. Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats

    PubMed Central

    Rosselli, Federica B.; Alemi, Alireza; Ansuini, Alessio; Zoccolan, Davide

    2015-01-01

    In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning. PMID:25814936

  9. Visual degradation in Leonardo da Vinci's iconic self-portrait: A nanoscale study

    NASA Astrophysics Data System (ADS)

    Conte, A. Mosca; Pulci, O.; Misiti, M. C.; Lojewska, J.; Teodonio, L.; Violante, C.; Missori, M.

    2014-06-01

The discoloration of ancient paper, due to the development of oxidized groups acting as chromophores in its chief component, cellulose, is responsible for severe visual degradation of ancient artifacts. By adopting a non-destructive approach based on the combination of optical reflectance measurements and time-dependent density functional theory ab-initio calculations, we describe and quantify the chromophores affecting Leonardo da Vinci's iconic self-portrait. Their relative concentrations are very similar to those measured in modern and ancient samples aged in humid environments. This analysis quantifies the present level of optical degradation of Leonardo da Vinci's self-portrait; compared with future measurements, it will allow its degradation rate to be assessed. This is fundamental information for planning appropriate conservation strategies.

  10. Sensitive periods in affective development: nonlinear maturation of fear learning.

    PubMed

    Hartley, Catherine A; Lee, Francis S

    2015-01-01

    At specific maturational stages, neural circuits enter sensitive periods of heightened plasticity, during which the development of both brain and behavior are highly receptive to particular experiential information. A relatively advanced understanding of the regulatory mechanisms governing the initiation, closure, and reinstatement of sensitive period plasticity has emerged from extensive research examining the development of the visual system. In this article, we discuss a large body of work characterizing the pronounced nonlinear changes in fear learning and extinction that occur from childhood through adulthood, and their underlying neural substrates. We draw upon the model of sensitive period regulation within the visual system, and present burgeoning evidence suggesting that parallel mechanisms may regulate the qualitative changes in fear learning across development.

  11. Sensitive Periods in Affective Development: Nonlinear Maturation of Fear Learning

    PubMed Central

    Hartley, Catherine A; Lee, Francis S

    2015-01-01

    At specific maturational stages, neural circuits enter sensitive periods of heightened plasticity, during which the development of both brain and behavior are highly receptive to particular experiential information. A relatively advanced understanding of the regulatory mechanisms governing the initiation, closure, and reinstatement of sensitive period plasticity has emerged from extensive research examining the development of the visual system. In this article, we discuss a large body of work characterizing the pronounced nonlinear changes in fear learning and extinction that occur from childhood through adulthood, and their underlying neural substrates. We draw upon the model of sensitive period regulation within the visual system, and present burgeoning evidence suggesting that parallel mechanisms may regulate the qualitative changes in fear learning across development. PMID:25035083

  12. Delayed visual feedback affects both manual tracking and grip force control when transporting a handheld object.

    PubMed

    Sarlegna, Fabrice R; Baud-Bovy, Gabriel; Danion, Frédéric

    2010-08-01

When we manipulate an object, grip force is adjusted in anticipation of the mechanical consequences of hand motion (i.e., load force) to prevent the object from slipping. This predictive behavior is assumed to rely on an internal representation of the object's dynamic properties, which would be elaborated via visual information before the object is grasped and via somatosensory feedback once the object is grasped. Here we examined this view by investigating the effect of delayed visual feedback during dextrous object manipulation. Adult participants manually tracked a sinusoidal target by oscillating a handheld object whose current position was displayed as a cursor on a screen along with the visual target. A delay was introduced between actual object displacement and cursor motion. This delay was linearly increased (from 0 to 300 ms) and decreased within 2-min trials. As previously reported, delayed visual feedback altered performance in manual tracking. Importantly, although the physical properties of the object remained unchanged, delayed visual feedback altered the timing of grip force relative to load force by about 50 ms. Additional experiments showed that this effect was due neither to task complexity nor to manual tracking. A model inspired by the behavior of mass-spring systems suggests that delayed visual feedback may have biased the representation of object dynamics. Overall, our findings support the idea that visual feedback of object motion can influence the predictive control of grip force even when the object is grasped.

  13. The role of visual deprivation and experience on the performance of sensory substitution devices.

    PubMed

    Stronks, H Christiaan; Nau, Amy C; Ibbotson, Michael R; Barnes, Nick

    2015-10-22

It is commonly accepted that the blind can partially compensate for their loss of vision by developing enhanced abilities with their remaining senses. This visual compensation may be related to the fact that blind people rely on their other senses in everyday life. Many studies have indeed shown that experience plays an important role in visual compensation. Numerous neuroimaging studies have shown that the visual cortices of the blind are recruited by other functional brain areas and can become responsive to tactile or auditory input instead. These cross-modal plastic changes are more pronounced in the early blind compared to late blind individuals. The functional consequences of cross-modal plasticity on visual compensation in the blind are debated, as are the influences of various etiologies of vision loss (i.e., blindness acquired early or late in life). Distinguishing between the influences of experience and visual deprivation on compensation is especially relevant for rehabilitation of the blind with sensory substitution devices. The BrainPort artificial vision device and The vOICe are assistive devices for the blind that redirect visual information to another intact sensory system. Establishing how experience and different etiologies of vision loss affect the performance of these devices may help to improve existing rehabilitation strategies, formulate effective selection criteria and develop prognostic measures. In this review we will discuss studies that investigated the influence of training and visual deprivation on the performance of various sensory substitution approaches.

  14. Orienting attention in visual space by nociceptive stimuli: investigation with a temporal order judgment task based on the adaptive PSI method.

    PubMed

    Filbrich, Lieve; Alamia, Andrea; Burns, Soline; Legrain, Valéry

    2017-07-01

Despite their high relevance for defending the integrity of the body, crossmodal links between nociception, the neural system specifically coding potentially painful information, and vision are still poorly studied, especially the effects of nociception on visual perception. This study investigated if, and in which time window, a nociceptive stimulus can attract attention to its location on the body, independently of voluntary control, to facilitate the processing of visual stimuli occurring in the same side of space as the limb on which the nociceptive stimulus was applied. In a temporal order judgment task based on an adaptive procedure, participants judged which of two visual stimuli, one presented next to either hand in either side of space, had been perceived first. Each pair of visual stimuli was preceded (by 200, 400, or 600 ms) by a nociceptive stimulus applied either unilaterally on one single hand, or bilaterally, on both hands simultaneously. Results show that, as compared to the bilateral condition, participants' judgments were biased to the advantage of the visual stimuli that occurred in the same side of space as the hand on which a unilateral nociceptive stimulus was applied. This effect was present in a time window ranging from 200 to 600 ms, but importantly, biases increased with decreasing time interval. These results suggest that nociceptive stimuli can affect the perceptual processing of spatially congruent visual inputs.

  15. Competition between conceptual relations affects compound recognition: the role of entropy.

    PubMed

    Schmidtke, Daniel; Kuperman, Victor; Gagné, Christina L; Spalding, Thomas L

    2016-04-01

Previous research has suggested that the conceptual representation of a compound is based on a relational structure linking the compound's constituents. Existing accounts of the visual recognition of modifier-head or noun-noun compounds posit that the process involves the selection of a relational structure out of a set of competing relational structures associated with the same compound. In this article, we employ the information-theoretic metric of entropy to gauge relational competition and investigate its effect on the visual identification of established English compounds. The data from two lexical decision megastudies indicate that greater entropy (i.e., increased competition) in a set of conceptual relations associated with a compound is associated with longer lexical decision latencies. This finding indicates that there is competition between potential meanings associated with the same complex word form. We provide empirical support for conceptual composition during compound word processing in a model that incorporates the effect of the integration of co-activated and competing relational information.
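The entropy metric described here is Shannon entropy over the distribution of candidate conceptual relations for a compound: the more evenly the relations are distributed, the higher the entropy and the stronger the competition. A small sketch with made-up relation frequencies (the compounds and counts are hypothetical, not data from the study):

```python
from math import log2

def relation_entropy(frequencies):
    """Shannon entropy (in bits) of a compound's relation distribution.

    Higher entropy means the candidate relational interpretations are
    more evenly distributed, i.e., they compete more strongly.
    """
    total = sum(frequencies)
    probs = [f / total for f in frequencies if f > 0]
    return -sum(p * log2(p) for p in probs)

# Hypothetical: one dominant relation ("ball MADE OF snow")
low_competition = relation_entropy([90, 5, 3, 2])
# Hypothetical: several plausible relations (FOR, CONTAINING, MADE OF, ...)
high_competition = relation_entropy([30, 28, 22, 20])
```

On the account above, the second compound, with its near-uniform relation distribution and thus higher entropy, would be predicted to show the longer lexical decision latency.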

  16. Visual information without thermal energy may induce thermoregulatory-like cardiovascular responses

    PubMed Central

    2013-01-01

    Background Human core body temperature is kept quasi-constant regardless of varying thermal environments. It is well known that physiological thermoregulatory systems are under the control of central and peripheral sensory organs that are sensitive to thermal energy. If these systems wrongly respond to non-thermal stimuli, it may disturb human homeostasis. Methods Fifteen participants viewed video images evoking hot or cold impressions in a thermally constant environment. Cardiovascular indices were recorded during the experiments. Correlations between the ‘hot-cold’ impression scores and cardiovascular indices were calculated. Results The changes of heart rate, cardiac output, and total peripheral resistance were significantly correlated with the ‘hot-cold’ impression scores, and the tendencies were similar to those in actual thermal environments corresponding to the impressions. Conclusions The present results suggest that visual information without any thermal energy can affect physiological thermoregulatory systems at least superficially. To avoid such ‘virtual’ environments disturbing human homeostasis, further study and more attention are needed. PMID:24373765

  17. Visual and visuomotor processing of hands and tools as a case study of cross talk between the dorsal and ventral streams.

    PubMed

    Almeida, Jorge; Amaral, Lénia; Garcea, Frank E; Aguiar de Sousa, Diana; Xu, Shan; Mahon, Bradford Z; Martins, Isabel Pavão

    2018-05-24

    A major principle of organization of the visual system is the division between a dorsal stream that processes visuomotor information and a ventral stream that supports object recognition. Most research has focused on dissociating processing across these two streams. Here we focus on how the two streams interact. We tested neurologically intact and impaired participants in an object categorization task over two classes of objects that depend on processing within both streams: hands and tools. We measured how unconscious processing of images from one of these categories (e.g., tools) affects the recognition of images from the other category (i.e., hands). Our findings with neurologically intact participants demonstrated that processing an image of a hand hampers the subsequent processing of an image of a tool, and vice versa. These results were not present in apraxic patients (N = 3). These findings suggest local and global inhibitory processes working in tandem to co-register information across the two streams.

  18. Reporting pesticide assessment results to farmworker families: development, implementation, and evaluation of a risk communication strategy.

    PubMed Central

    Quandt, Sara A; Doran, Alicia M; Rao, Pamela; Hoppin, Jane A; Snively, Beverly M; Arcury, Thomas A

    2004-01-01

    The collection of environmental samples presents a responsibility to return information to the affected participants. Explaining complex and often ambiguous scientific information to a lay audience is a challenge. As shown by environmental justice research, this audience frequently has limited formal education, increasing the challenge for researchers to explain the data collected, the risk indicated by the findings, and action the affected community should take. In this study we describe the development and implementation of a risk communication strategy for environmental pesticide samples collected in the homes of Latino/a migrant and seasonal farmworkers in a community-based participatory research project. The communication strategy was developed with community input and was based on face-to-face meetings with members of participating households. Using visual displays of data effectively conveyed information about individual household contamination and placed it in the context of community findings. The lack of national reference data and definitive standards for action necessitated a simplified risk message. We review the strengths and weaknesses of such an approach and suggest areas for future research in risk communication to communities affected by environmental health risks. PMID:15064174

  19. Visual field defects after temporal lobe resection for epilepsy.

    PubMed

    Steensberg, Alvilda T; Olsen, Ane Sophie; Litman, Minna; Jespersen, Bo; Kolko, Miriam; Pinborg, Lars H

    2018-01-01

    To determine visual field defects (VFDs) using methods of varying complexity and compare results with subjective symptoms in a population of newly operated temporal lobe epilepsy patients. Forty patients were included in the study; two failed to perform VFD testing. Humphrey Field Analyzer (HFA) perimetry was used as the gold standard test to detect VFDs. All patients performed a web-based visual field test called Damato Multifixation Campimetry Online (DMCO). A bedside confrontation visual field examination ad modum Donders was extracted from the medical records in 27/38 patients. All participants had a consultation with an ophthalmologist, and a questionnaire recorded their subjective complaints. A VFD in the upper quadrant was demonstrated with HFA in 29 (76%) of the 38 patients after surgery. In the 27 patients tested ad modum Donders, the sensitivity for detecting a VFD was 13%. Eight patients (21%) had a severe VFD similar to a quadrantanopia, calling into question their fitness to drive a car. In this group of patients, a VFD was demonstrated in one of five (sensitivity = 20%) ad modum Donders and in seven of eight (sensitivity = 88%) with DMCO. Subjective symptoms were reported by only 28% of the patients with a VFD, and by two of eight (sensitivity = 25%) of those with a severe VFD. Most patients (86%) considered VFD information mandatory. VFDs continue to be a frequent adverse event after epilepsy surgery in the medial temporal lobe and may affect the permission to drive a car in at least one in five patients. Subjective symptoms and bedside visual field testing ad modum Donders are not sensitive enough to detect even a severe VFD. Newly developed web-based visual field test methods appear sensitive to a severe VFD, but perimetry remains the gold standard for determining whether the visual standards for driving are fulfilled. Patients consider VFD information mandatory. Copyright © 2017. Published by Elsevier Ltd.
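The sensitivity percentages reported in this record follow directly from the counts given in the abstract: sensitivity is the fraction of condition-positive patients (here, those with a VFD confirmed by HFA perimetry) that a screening method detects. A minimal sketch reproducing the severe-VFD figures:

```python
def sensitivity(true_positives, condition_positives):
    """Fraction of patients with the condition that the screening test detects."""
    return true_positives / condition_positives

# Severe-VFD figures from the abstract (8 severe VFDs confirmed by HFA):
print(sensitivity(1, 5))  # Donders confrontation test: 1 of 5 tested -> 0.20
print(sensitivity(7, 8))  # DMCO web-based test: 7 of 8 -> 0.875 (~88%)
print(sensitivity(2, 8))  # subjective symptoms: 2 of 8 -> 0.25
```

Note that the Donders figure uses a denominator of 5 rather than 8 because only five of the eight severe-VFD patients had a documented bedside examination.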

  20. Processing Complex Sounds Passing through the Rostral Brainstem: The New Early Filter Model

    PubMed Central

    Marsh, John E.; Campbell, Tom A.

    2016-01-01

    The rostral brainstem receives both “bottom-up” input from the ascending auditory system and “top-down” descending corticofugal connections. Speech information passing through the inferior colliculus of elderly listeners reflects the periodicity envelope of a speech syllable. This information arguably also reflects a composite of temporal-fine-structure (TFS) information from the higher frequency vowel harmonics of that repeated syllable. The amplitude of those higher frequency harmonics, bearing even higher frequency TFS information, correlates positively with the word recognition ability of elderly listeners under reverberatory conditions. Also relevant is that working memory capacity (WMC), which is subject to age-related decline, constrains the processing of sounds at the level of the brainstem. Turning to the effects of a visually presented sensory or memory load on auditory processes, there is a load-dependent reduction of that processing, as manifest in the auditory brainstem responses (ABR) evoked by to-be-ignored clicks. Wave V decreases in amplitude with increases in the visually presented memory load. A visually presented sensory load also produces a load-dependent reduction of a slightly different sort: The sensory load of visually presented information limits the disruptive effects of background sound upon working memory performance. A new early filter model is thus advanced whereby systems within the frontal lobe (affected by sensory or memory load) cholinergically influence top-down corticofugal connections. Those corticofugal connections constrain the processing of complex sounds such as speech at the level of the brainstem. Selective attention thereby limits the distracting effects of background sound entering the higher auditory system via the inferior colliculus. Processing TFS in the brainstem relates to perception of speech under adverse conditions. 
Attentional selectivity is crucial when the signal heard is degraded or masked: e.g., speech in noise, or speech in reverberatory environments. The assumptions of a new early filter model are consistent with these findings: a subcortical early filter, with a predictive selectivity based on acoustical (linguistic) context and foreknowledge, is under cholinergic top-down control. A prefrontal capacity limitation constrains this top-down control, which is guided by the cholinergic processing of contextual information in working memory. PMID:27242396