Sample records for visual presentations

  1. Young children's coding and storage of visual and verbal material.

    PubMed

    Perlmutter, M; Myers, N A

    1975-03-01

    36 preschool children (mean age 4.2 years) were each tested on 3 recognition memory lists differing in test mode (visual only, verbal only, combined visual-verbal). For one-third of the children, original list presentation was visual only, for another third, presentation was verbal only, and the final third received combined visual-verbal presentation. The subjects generally performed at a high level of correct responding. Verbal-only presentation resulted in less correct recognition than did either visual-only or combined visual-verbal presentation. However, because performances under both visual-only and combined visual-verbal presentation were statistically comparable, and a high level of spontaneous labeling was observed when items were presented only visually, a dual-processing conceptualization of memory in 4-year-olds was suggested.

  2. Temporal Influence on Awareness

    DTIC Science & Technology

    1995-12-01

    [Fragment from the report's list of figures:] 38. Test Setup Timing: Measured vs. Expected Modal Delays (in ms); 39. Experiment I: visual and auditory stimuli presented simultaneously (visual-auditory delay = 0 ms, visual-visual delay = 0 ms); 40. Experiment II: visual and auditory stimuli presented in order (visual-auditory delay = 0 ms, visual-visual delay = variable); 41. Experiment II: visual and…

  3. A Comparison of the Connotative Meaning of Visuals Presented Singly and in Simultaneous and Sequential Juxtapositions.

    ERIC Educational Resources Information Center

    Noland, Mildred Jean

    A study was conducted investigating whether a sequence of visuals presented in a serial manner differs in connotative meaning from the same set of visuals presented simultaneously. How the meanings of pairs of shots relate to their constituent visuals was also explored. Sixteen pairs of visuals were presented to both male and female subjects in…

  4. Presentation-Oriented Visualization Techniques.

    PubMed

    Kosara, Robert

    2016-01-01

    Data visualization research focuses on data exploration and analysis, yet the vast majority of visualizations people see were created for a different purpose: presentation. Whether we are talking about charts showing data to help make a presenter's point, data visuals created to accompany a news story, or the ubiquitous infographics, many more people consume charts than make them. Traditional visualization techniques treat presentation as an afterthought, but are there techniques uniquely suited to data presentation but not necessarily ideal for exploration and analysis? This article focuses on presentation-oriented techniques, considering their usefulness for presentation first and any other purposes as secondary.

  5. Visual Presentation Effects on Identification of Multiple Environmental Sounds

    PubMed Central

    Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio

    2016-01-01

    This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. 
Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478

  6. Meaning and Identities: A Visual Performative Pedagogy for Socio-Cultural Learning

    ERIC Educational Resources Information Center

    Grushka, Kathryn

    2009-01-01

    In this article I present personalised socio-cultural inquiry in visual art education as a critical and expressive material praxis. The model of "Visual Performative Pedagogy and Communicative Proficiency for the Visual Art Classroom" is presented as a legitimate means of manipulating visual codes, communicating meaning and mediating…

  7. Tips for better visual elements in posters and podium presentations.

    PubMed

    Zerwic, J J; Grandfield, K; Kavanaugh, K; Berger, B; Graham, L; Mershon, M

    2010-08-01

    The ability to effectively communicate through posters and podium presentations using appropriate visual content and style is essential for health care educators. To offer suggestions for more effective visual elements of posters and podium presentations. We present the experiences of our multidisciplinary publishing group, whose combined experiences and collaboration have provided us with an understanding of what works and how to achieve success when working on presentations and posters. Many others would offer similar advice, as these guidelines are consistent with effective presentation. Findings/Suggestions: Certain visual elements should be attended to in any visual presentation: consistency, alignment, contrast and repetition. Presentations should be consistent in font size and type, line spacing, alignment of graphics and text, and size of graphics. All elements should be aligned with at least one other element. Contrasting light background with dark text (and vice versa) helps an audience read the text more easily. Standardized formatting lets viewers know when they are looking at similar things (tables, headings, etc.). Using a minimal number of colors (four at most) helps the audience more easily read text. For podium presentations, have one slide for each minute allotted for speaking. The speaker is also a visual element; one should not allow the audience's view of either the presentation or presenter to be blocked. Making eye contact with the audience also keeps them visually engaged. Health care educators often share information through posters and podium presentations. These tips should help the visual elements of presentations be more effective.

  8. Visual Disability Among Juvenile Open-angle Glaucoma Patients.

    PubMed

    Gupta, Viney; Ganesan, Vaitheeswaran L; Kumar, Sandip; Chaurasia, Abadh K; Malhotra, Sumit; Gupta, Shikha

    2018-04-01

    Juvenile onset primary open-angle glaucoma (JOAG), unlike adult onset primary open-angle glaucoma, presents with high intraocular pressure and diffuse visual field loss, which if left untreated leads to severe visual disability. The study aimed to evaluate the extent of visual disability among JOAG patients presenting to a tertiary eye care facility. Visual acuity and perimetry records of unrelated JOAG patients presenting to our Glaucoma facility were analyzed. Low vision and blindness were categorized by the WHO criteria, and percentage impairment was calculated as per the guidelines provided by the American Medical Association (AMA). Fifty-two (15%) of the 348 JOAG patients were bilaterally blind at presentation and 32 (9%) had low vision according to WHO criteria. Ninety JOAG patients (26%) had a visual impairment of 75% or more. Visual disability at presentation among JOAG patients is high. This entails a huge economic burden, given their young age and associated social responsibilities.

  9. Brief Communication: visual-field superiority as a function of stimulus type and content: further evidence.

    PubMed

    Basu, Anamitra; Mandal, Manas K

    2004-07-01

    The present study examined visual-field advantage as a function of presentation mode (unilateral, bilateral), stimulus structure (facial, lexical), and stimulus content (emotional, neutral). The experiment was conducted in a split visual-field paradigm using a JAVA-based computer program with recognition accuracy as the dependent measure. Unilaterally, rather than bilaterally, presented stimuli were significantly better recognized. Words were significantly better recognized than faces in the right visual-field; the difference was nonsignificant in the left visual-field. Emotional content elicited left visual-field and neutral content elicited right visual-field advantages.

  10. Dissociation of the Neural Correlates of Visual and Auditory Contextual Encoding

    ERIC Educational Resources Information Center

    Gottlieb, Lauren J.; Uncapher, Melina R.; Rugg, Michael D.

    2010-01-01

    The present study contrasted the neural correlates of encoding item-context associations according to whether the contextual information was visual or auditory. Subjects (N = 20) underwent fMRI scanning while studying a series of visually presented pictures, each of which co-occurred with either a visually or an auditorily presented name. The task…

  11. Why Do Pictures, but Not Visual Words, Reduce Older Adults’ False Memories?

    PubMed Central

    Smith, Rebekah E.; Hunt, R. Reed; Dunlap, Kathryn R.

    2015-01-01

    Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study list words is accompanied by related pictures relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word relative to hearing the word only. In both the case of pictures relative to visual words and visual words relative to auditory words alone, the benefit of picture and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment we provide the first simultaneous comparison of all three study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in the visual word condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. PMID:26213799

  12. Why do pictures, but not visual words, reduce older adults' false memories?

    PubMed

    Smith, Rebekah E; Hunt, R Reed; Dunlap, Kathryn R

    2015-09-01

    Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study list words is accompanied by related pictures relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word relative to hearing the word only. In both cases of pictures relative to visual words and visual words relative to auditory words alone, the benefit of picture and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment, we provide the first simultaneous comparison of all 3 study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in the visual word condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources.

  13. Role of inter-hemispheric transfer in generating visual evoked potentials in V1-damaged brain hemispheres

    PubMed Central

    Kavcic, Voyko; Triplett, Regina L.; Das, Anasuya; Martin, Tim; Huxlin, Krystel R.

    2015-01-01

    Partial cortical blindness is a visual deficit caused by unilateral damage to the primary visual cortex, a condition previously considered beyond hopes of rehabilitation. However, recent data demonstrate that patients may recover both simple and global motion discrimination following intensive training in their blind field. The present experiments characterized motion-induced neural activity of cortically blind (CB) subjects prior to the onset of visual rehabilitation. This was done to provide information about visual processing capabilities available to mediate training-induced visual improvements. Visual Evoked Potentials (VEPs) were recorded from two experimental groups consisting of 9 CB subjects and 9 age-matched, visually-intact controls. VEPs were collected following lateralized stimulus presentation to each of the 4 visual field quadrants. VEP waveforms were examined for both stimulus-onset (SO) and motion-onset (MO) related components in postero-lateral electrodes. While stimulus presentation to intact regions of the visual field elicited normal SO-P1, SO-N1, SO-P2 and MO-N2 amplitudes and latencies in contralateral brain regions of CB subjects, these components were not observed contralateral to stimulus presentation in blind quadrants of the visual field. In damaged brain hemispheres, SO-VEPs were only recorded following stimulus presentation to intact visual field quadrants, via inter-hemispheric transfer. MO-VEPs were only recorded from damaged left brain hemispheres, possibly reflecting a native left/right asymmetry in inter-hemispheric connections. The present findings suggest that damaged brain hemispheres contain areas capable of responding to visual stimulation. However, in the absence of training or rehabilitation, these areas only generate detectable VEPs in response to stimulation of the intact hemifield of vision. PMID:25575450

  14. Tips for Better Visual Elements in Posters and Podium Presentations

    PubMed Central

    Zerwic, JJ; Grandfield, K; Kavanaugh, K; Berger, B; Graham, L; Mershon, M

    2010-01-01

    Context The ability to effectively communicate through posters and podium presentations using appropriate visual content and style is essential for health care educators. Objectives To offer suggestions for more effective visual elements of posters and podium presentations. Methods We present the experiences of our multidisciplinary publishing group, whose combined experiences and collaboration have provided us with an understanding of what works and how to achieve success when working on presentations and posters. Many others would offer similar advice, as these guidelines are consistent with effective presentation. Findings/Suggestions Certain visual elements should be attended to in any visual presentation: consistency, alignment, contrast and repetition. Presentations should be consistent in font size and type, line spacing, alignment of graphics and text, and size of graphics. All elements should be aligned with at least one other element. Contrasting light background with dark text (and vice versa) helps an audience read the text more easily. Standardized formatting lets viewers know when they are looking at similar things (tables, headings, etc.). Using a minimal number of colors (four at most) helps the audience more easily read text. For podium presentations, have one slide for each minute allotted for speaking. The speaker is also a visual element; one should not allow the audience’s view of either the presentation or presenter to be blocked. Making eye contact with the audience also keeps them visually engaged. Conclusions Health care educators often share information through posters and podium presentations. These tips should help the visual elements of presentations be more effective. PMID:20853236

  15. Effects of Presentation Type and Visual Control in Numerosity Discrimination: Implications for Number Processing?

    PubMed Central

    Smets, Karolien; Moors, Pieter; Reynvoet, Bert

    2016-01-01

    Performance in a non-symbolic comparison task in which participants are asked to indicate the larger numerosity of two dot arrays, is assumed to be supported by the Approximate Number System (ANS). This system allows participants to judge numerosity independently from other visual cues. Supporting this idea, previous studies indicated that numerosity can be processed when visual cues are controlled for. Consequently, distinct types of visual cue control are assumed to be interchangeable. However, a previous study showed that the type of visual cue control affected performance using a simultaneous presentation of the stimuli in numerosity comparison. In the current study, we explored whether the influence of the type of visual cue control on performance disappeared when sequentially presenting each stimulus in numerosity comparison. While the influence of the applied type of visual cue control was significantly more evident in the simultaneous condition, sequentially presenting the stimuli did not completely exclude the influence of distinct types of visual cue control. Altogether, these results indicate that the implicit assumption that it is possible to compare performances across studies with a differential visual cue control is unwarranted and that the influence of the type of visual cue control partly depends on the presentation format of the stimuli. PMID:26869967

  16. Study of target and non-target interplay in spatial attention task.

    PubMed

    Sweeti; Joshi, Deepak; Panigrahi, B K; Anand, Sneh; Santhosh, Jayasree

    2018-02-01

    Selective visual attention is the ability to selectively pay attention to targets while inhibiting distractors. This paper studies the interplay of targets and non-targets in a spatial attention task in which the subject attends to a target object in one visual hemifield and ignores a distractor in the other visual hemifield. Averaged event-related potential (ERP) analysis and time-frequency analysis were performed. The ERP analysis supports left-hemisphere superiority in late potentials for targets in the right visual hemifield. The time-frequency analysis yields two parameters: event-related spectral perturbation (ERSP) and inter-trial coherence (ITC). These parameters show the same properties for targets in either visual hemifield but differ between targets and non-targets. In this way, the study helps to visualise the differences between targets in the left and right visual hemifields, as well as between targets and non-targets in each hemifield. These results could be used to monitor subjects' performance in brain-computer interface (BCI) and neurorehabilitation applications.

  17. Vision

    NASA Technical Reports Server (NTRS)

    Taylor, J. H.

    1973-01-01

    Some data on human vision, important in present and projected space activities, are presented. Visual environment and performance and structure of the visual system are also considered. Visual perception during stress is included.

  18. Visual Exemplification and Skin Cancer: The Utility of Exemplars in Promoting Skin Self-Exams and Atypical Nevi Identification.

    PubMed

    King, Andy J

    2016-07-01

    The present article reports an experiment investigating untested propositions of exemplification theory in the context of messages promoting early melanoma detection. The study tested visual exemplar presentation types, incorporating visual persuasion principles into the study of exemplification theory and strategic message design. Compared to a control condition, representative visual exemplification was more effective at increasing message effectiveness by eliciting a surprise response, which is consistent with predictions of exemplification theory. Furthermore, participant perception of congruency between the images and text interacted with the type of visual exemplification to explain variation in message effectiveness. Different messaging strategies influenced decision making as well, with the presentation of visual exemplars resulting in people judging the atypicality of moles more conservatively. Overall, results suggest that certain visual messaging strategies may have unintended effects when presenting information about skin cancer to people. Implications for practice are discussed.

  19. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    PubMed

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  20. Visual feedback in stuttering therapy

    NASA Astrophysics Data System (ADS)

    Smolka, Elzbieta

    1997-02-01

    The aim of this paper is to present results concerning the influence of visual echo and reverberation on the speech of stutterers. Visual stimuli were compared with acoustic and visual-acoustic stimuli. Methods of implementing visual feedback with the aid of electroluminescent diodes driven by speech signals are then presented, along with the concept of a computerized visual echo based on acoustic recognition of Polish syllabic vowels. All the research and trials carried out at our center, aside from their cognitive aims, are generally directed at the development of new speech correctors for use in stuttering therapy.

  1. An insect-inspired model for visual binding II: functional analysis and visual attention.

    PubMed

    Northcutt, Brandon D; Higgins, Charles M

    2017-04-01

    We have developed a neural network model capable of performing visual binding inspired by neuronal circuitry in the optic glomeruli of flies: a brain area that lies just downstream of the optic lobes where early visual processing is performed. This visual binding model is able to detect objects in dynamic image sequences and bind together their respective characteristic visual features, such as color, motion, and orientation, by taking advantage of their common temporal fluctuations. Visual binding is represented in the form of an inhibitory weight matrix which learns over time which features originate from a given visual object. In the present work, we show that information represented implicitly in this weight matrix can be used to explicitly count the number of objects present in the visual image, to enumerate their specific visual characteristics, and even to create an enhanced image in which one particular object is emphasized over others, thus implementing a simple form of visual attention. Further, we present a detailed analysis which reveals the function and theoretical limitations of the visual binding network and in this context describe a novel network learning rule which is optimized for visual binding.

  2. Controlling Student Responses during Visual Presentations: Studies in Televised Instruction, the Role of Visuals in Verbal Learning, Report 2.

    ERIC Educational Resources Information Center

    Gropper, George L.

    This is a report of two studies in which principles of programmed instruction were adapted for visual presentations. Scientific demonstrations were prepared with a visual program and a verbal program on (1) Archimedes' law and (2) force and pressure. Results suggested that responses are more readily brought under the control of visual presentation…

  3. Learning about Locomotion Patterns from Visualizations: Effects of Presentation Format and Realism

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Gerjets, Peter

    2011-01-01

    The rapid development of computer graphics technology has made possible an easy integration of dynamic visualizations into computer-based learning environments. This study examines the relative effectiveness of dynamic visualizations, compared either to sequentially or simultaneously presented static visualizations. Moreover, the degree of realism…

  4. Presentation of Information on Visual Displays.

    ERIC Educational Resources Information Center

    Pettersson, Rune

    This discussion of factors involved in the presentation of text, numeric data, and/or visuals using video display devices describes in some detail the following types of presentation: (1) visual displays, with attention to additive color combination; measurements, including luminance, radiance, brightness, and lightness; and standards, with…

  5. More is still not better: testing the perturbation model of temporal reference memory across different modalities and tasks.

    PubMed

    Ogden, Ruth S; Jones, Luke A

    2009-05-01

    The ability of the perturbation model (Jones & Wearden, 2003) to account for reference memory function in a visual temporal generalization task and in auditory and visual reproduction tasks was examined. In all tasks the number of presentations of the standard was manipulated (1, 3, or 5), and its effect on performance was compared. In visual temporal generalization, the number of presentations of the standard did not affect the number of times the standard was correctly identified, nor did it affect the overall temporal generalization gradient. In auditory reproduction, there was no effect of the number of times the standard was presented on mean reproductions. In visual reproduction, mean reproductions were shorter when the standard was presented only once; however, this effect was reduced when a visual cue was provided before the first presentation of the standard. Whilst the results of all experiments are best accounted for by the perturbation model, there appears to be some attentional benefit to multiple presentations of the standard in visual reproduction.

  6. Speech identification in noise: Contribution of temporal, spectral, and visual speech cues.

    PubMed

    Kim, Jeesun; Davis, Chris; Groot, Christopher

    2009-12-01

    This study investigated the degree to which two types of reduced auditory signals (cochlear implant simulations) and visual speech cues combined for speech identification. The auditory speech stimuli were filtered to have only amplitude envelope cues or both amplitude envelope and spectral cues and were presented with/without visual speech. In Experiment 1, IEEE sentences were presented in quiet and noise. For in-quiet presentation, speech identification was enhanced by the addition of both spectral and visual speech cues. Due to a ceiling effect, the degree to which these effects combined could not be determined. In noise, these facilitation effects were more marked and were additive. Experiment 2 examined consonant and vowel identification in the context of CVC or VCV syllables presented in noise. For consonants, both spectral and visual speech cues facilitated identification and these effects were additive. For vowels, the effect of combined cues was underadditive, with the effect of spectral cues reduced when presented with visual speech cues. Analysis indicated that without visual speech, spectral cues facilitated the transmission of place information and vowel height, whereas with visual speech, they facilitated lip rounding, with little impact on the transmission of place information.

  7. Visual mental image generation does not overlap with visual short-term memory: a dual-task interference study.

    PubMed

    Borst, Gregoire; Niven, Elaine; Logie, Robert H

    2012-04-01

    Visual mental imagery and working memory are often assumed to play similar roles in high-order functions, but little is known of their functional relationship. In this study, we investigated whether similar cognitive processes are involved in the generation of visual mental images, in short-term retention of those mental images, and in short-term retention of visual information. Participants encoded and recalled visually or aurally presented sequences of letters under two interference conditions: spatial tapping or irrelevant visual input (IVI). In Experiment 1, spatial tapping selectively interfered with the retention of sequences of letters when participants generated visual mental images from aural presentation of the letter names and when the letters were presented visually. In Experiment 2, encoding of the sequences was disrupted by both interference tasks. However, in Experiment 3, IVI interfered with the generation of the mental images, but not with their retention, whereas spatial tapping was more disruptive during retention than during encoding. Results suggest that the temporary retention of visual mental images and of visual information may be supported by the same visual short-term memory store but that this store is not involved in image generation.

  8. Visual Displays and Contextual Presentations in Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Park, Ok-choon

    1998-01-01

    Investigates the effects of two instructional strategies, visual display (animation, and static graphics with and without motion cues) and contextual presentation, in the acquisition of electronic troubleshooting skills using computer-based instruction. Study concludes that use of visual displays and contextual presentation be based on the…

  9. The effect of two different visual presentation modalities on the narratives of mainstream grade 3 children.

    PubMed

    Klop, D; Engelbrecht, L

    2013-12-01

    This study investigated whether a dynamic visual presentation method (a soundless animated video presentation) would elicit better narratives than a static visual presentation method (a wordless picture book). Twenty mainstream grade 3 children were randomly assigned to two groups and assessed with one of the visual presentation methods. Narrative performance was measured in terms of micro- and macrostructure variables. Microstructure variables included productivity (total number of words, total number of T-units), syntactic complexity (mean length of T-unit) and lexical diversity measures (number of different words). Macrostructure variables included episodic structure in terms of goal-attempt-outcome (GAO) sequences. Both visual presentation modalities elicited narratives of similar quantity and quality in terms of the micro- and macrostructure variables that were investigated. Animation of picture stimuli did not elicit better narratives than static picture stimuli.

  10. Effects of age, gender, and stimulus presentation period on visual short-term memory.

    PubMed

    Kunimi, Mitsunobu

    2016-01-01

    This study focused on age-related changes in visual short-term memory using visual stimuli that did not allow verbal encoding. Experiment 1 examined the effects of age and the length of the stimulus presentation period on visual short-term memory function. Experiment 2 examined the effects of age, gender, and the length of the stimulus presentation period on visual short-term memory function. The worst memory performance and the largest performance difference between the age groups were observed in the shortest stimulus presentation period conditions. The performance difference between the age groups became smaller as the stimulus presentation period became longer; however, it did not completely disappear. Although gender did not have a significant effect on d' regardless of the presentation period in the young group, a significant gender-based difference was observed for stimulus presentation periods of 500 ms and 1,000 ms in the older group. This study indicates that the decline in visual short-term memory observed in the older group is due to the interaction of several factors.
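
    The d' measure reported above is the sensitivity index from signal detection theory, computed from hit and false-alarm rates. A minimal Python sketch (the 0.01/0.99 clipping bounds are an illustrative convention, not necessarily the study's):

    ```python
    from statistics import NormalDist

    def d_prime(hit_rate: float, fa_rate: float) -> float:
        """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

        Rates of exactly 0 or 1 are clipped so the z-transform
        (inverse normal CDF) stays finite.
        """
        clip = lambda p: min(max(p, 0.01), 0.99)
        z = NormalDist().inv_cdf
        return z(clip(hit_rate)) - z(clip(fa_rate))

    # Example: 80% hits, 20% false alarms
    print(round(d_prime(0.80, 0.20), 3))  # → 1.683
    ```

    A higher d' indicates better discrimination of old from new stimuli independently of response bias, which is why it is the usual dependent measure in recognition studies like this one.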

  11. Visual Aids for Positive Behavior Support of Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Kidder, Jaimee E.; McDonnell, Andrea P.

    2017-01-01

    Research suggests that many children with ASD are visual learners (Quill, 1997) and may struggle to comprehend expectations presented in a verbal mode only. Visually structured interventions present choices, expectations, tasks, and communication exchanges in a way that is appealing and approachable for visual learners. There are many types of…

  12. Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.

    ERIC Educational Resources Information Center

    Shama, Gilli; Dreyfus, Tommy

    1994-01-01

    Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…

  13. Visual hallucinations in schizophrenia: confusion between imagination and perception.

    PubMed

    Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S

    2008-05-01

    An association between hallucinations and reality-monitoring deficit has been repeatedly observed in patients with schizophrenia. Most data concern auditory/verbal hallucinations. The aim of this study was to investigate the association between visual hallucinations and a specific type of reality-monitoring deficit, namely confusion between imagined and perceived pictures. Forty-one patients with schizophrenia and 43 healthy control participants completed a reality-monitoring task. Thirty-two items were presented either as written words or as pictures. After the presentation phase, participants had to recognize the target words and pictures among distractors, and then remember their mode of presentation. All groups of participants recognized the pictures better than the words, except the patients with visual hallucinations, who presented the opposite pattern. The participants with visual hallucinations made more misattributions to pictures than did the others, and higher ratings of visual hallucinations were correlated with increased tendency to remember words as pictures. No association with auditory hallucinations was revealed. Our data suggest that visual hallucinations are associated with confusion between visual mental images and perception.

  14. A test of the orthographic recoding hypothesis

    NASA Astrophysics Data System (ADS)

    Gaygen, Daniel E.

    2003-04-01

    The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during a visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.

  15. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  16. Effects of visual familiarity for words on interhemispheric cooperation for lexical processing.

    PubMed

    Yoshizaki, K

    2001-12-01

    The purpose of this study was to examine the effects of the visual familiarity of words on interhemispheric lexical processing. Words and pseudowords were tachistoscopically presented in the left, the right, or both visual fields. Two types of words, Katakana-familiar and Hiragana-familiar, were used as word stimuli: the former are words more frequently written in Katakana script, and the latter are words written predominantly in Hiragana script. Two conditions were set up in terms of the visual familiarity of a word: in the visually familiar condition, words were presented in their familiar script form, and in the visually unfamiliar condition, words were presented in the less familiar script form. Thirty-two right-handed Japanese students were asked to make lexical decisions. Results showed that a bilateral gain, in which performance in the bilateral visual field condition was superior to that in the unilateral visual field conditions, was obtained only in the visually familiar condition, not in the visually unfamiliar condition. These results suggest that the visual familiarity of a word influences interhemispheric lexical processing.

  17. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    PubMed

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.
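
    Visual-auditory onset lags measured this way are typically summarized with simple descriptive statistics over repeated trials. A minimal Python sketch, where the onset timestamps are hypothetical stand-ins for values an external timing device such as the Black Box Toolkit would record:

    ```python
    from statistics import mean, stdev

    def lag_summary(visual_onsets, auditory_onsets):
        """Summarize audio-visual onset lags (ms) for paired presentations.

        A positive lag means the audio onset trailed the visual onset.
        """
        lags = [a - v for v, a in zip(visual_onsets, auditory_onsets)]
        return {
            "mean": mean(lags),
            "sd": stdev(lags),
            "min": min(lags),
            "max": max(lags),
        }

    # Hypothetical onset times (ms) for five trials
    visual = [1000, 2000, 3000, 4000, 5000]
    audio = [1012, 2018, 3009, 4025, 5015]
    print(lag_summary(visual, audio))
    ```

    Comparing such summaries across browsers and machines is what reveals the kind of lag variability the study reports.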

  18. Stereoscopic applications for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2007-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  19. The Role of Sensory-Motor Information in Object Recognition: Evidence from Category-Specific Visual Agnosia

    ERIC Educational Resources Information Center

    Wolk, D.A.; Coslett, H.B.; Glosser, G.

    2005-01-01

    The role of sensory-motor representations in object recognition was investigated in experiments involving AD, a patient with mild visual agnosia who was impaired in the recognition of visually presented living as compared to non-living entities. AD named visually presented items for which sensory-motor information was available significantly more…

  20. [Diagnostic difficulties in a case of constricted tubular visual field].

    PubMed

    Dogaru, Oana-Mihaela; Rusu, Monica; Hâncu, Dacia; Horvath, Kárin

    2013-01-01

    In the paper below we present the clinical case of a 48-year-old female with various symptoms associated with a functional visual disturbance (constricted tubular visual fields) which has lasted for 6 years; extensive clinical and paraclinical ophthalmological investigations ruled out the presence of an organic disorder. At present we suspect a diagnosis of hysteria, still uncertain, which has represented over time a big challenge in psychology and ophthalmology. The mechanisms and causes of hysteria are still not clear, and it could represent a fascinating research theme. Tunnel, spiral, or star-shaped visual fields are findings specific to hysteria in patients who present with visual disturbance. The question of whether or not a patient with hysterical visual impairment can "see" is still unresolved.

  1. Hemisphere division and its effect on selective attention: a generality examination of Lavie's load theory.

    PubMed

    Nishimura, Ritsuko; Yoshizaki, Kazuhito; Kato, Kimiko; Hatta, Takeshi

    2009-01-01

    The present study examined the role of visual presentation mode (unilateral vs. bilateral visual fields) on attentional modulation. We examined whether or not the presentation mode affects the compatibility effect, using a paradigm involving two task-relevant letter arrays. Sixteen participants identified a target letter among task-relevant letters while ignoring either a compatible or incompatible distracter letter that was presented to both hemispheres. The two letter arrays were presented to the visual fields either unilaterally or bilaterally. Results indicated that the compatibility effect was greater in bilateral than in unilateral visual field conditions. Findings support the assumption that the two hemispheres have separate attentional resources.

  2. Differential Age Effects on Spatial and Visual Working Memory

    ERIC Educational Resources Information Center

    Oosterman, Joukje M.; Morel, Sascha; Meijer, Lisette; Buvens, Cleo; Kessels, Roy P. C.; Postma, Albert

    2011-01-01

    The present study was intended to compare age effects on visual and spatial working memory by using two versions of the same task that differed only in presentation mode. The working memory task contained both a simultaneous and a sequential presentation mode condition, reflecting, respectively, visual and spatial working memory processes. Young…

  3. The integration of visual context information in facial emotion recognition in 5- to 15-year-olds.

    PubMed

    Theurel, Anne; Witt, Arnaud; Malsert, Jennifer; Lejeune, Fleur; Fiorentini, Chiara; Barisnikov, Koviljka; Gentaz, Edouard

    2016-10-01

    The current study investigated the role of congruent visual context information in the recognition of facial emotional expression in 190 participants from 5 to 15 years of age. Children performed a matching task that presented pictures with different facial emotional expressions (anger, disgust, happiness, fear, and sadness) in two conditions: with and without a visual context. The results showed that emotions presented with visual context information were recognized more accurately than those presented in the absence of visual context. The context effect remained steady with age but varied according to the emotion presented and the gender of participants. The findings demonstrated for the first time that children from the age of 5 years are able to integrate facial expression and visual context information, and this integration improves facial emotion recognition.

  4. Age-Related Changes in Temporal Allocation of Visual Attention: Evidence from the Rapid Serial Visual Presentation (RSVP) Paradigm

    ERIC Educational Resources Information Center

    Berger, Carole; Valdois, Sylviane; Lallier, Marie; Donnadieu, Sophie

    2015-01-01

    The present study explored the temporal allocation of attention in groups of 8-year-old children, 10-year-old children, and adults performing a rapid serial visual presentation task. In a dual-condition task, participants had to detect a briefly presented target (T2) after identifying an initial target (T1) embedded in a random series of…

  5. Data Presentation and Visualization (DPV) Interface Control Document

    NASA Technical Reports Server (NTRS)

    Mazzone, Rebecca A.; Conroy, Michael P.

    2015-01-01

    Data Presentation and Visualization (DPV) is a subset of the modeling and simulation (M&S) capabilities at Kennedy Space Center (KSC) that endeavors to address the challenges of how to present and share simulation output for analysts, stakeholders, decision makers, and other interested parties. DPV activities focus on the development and provision of visualization tools to meet the objectives identified above, as well as providing supporting tools and capabilities required to make its visualization products available and accessible across NASA.

  6. Prototype Stop Bar System Evaluation at John F. Kennedy International Airport

    DTIC Science & Technology

    1992-09-01

    List of figures: 2, Red Stop Bar Visual Presentation; 3, Green Stop Bar Visual Presentation; 4, Photographs of Red and Green Inset Stop Bar Lights; 5, Photographs of... Excerpts: "...to green. This provides pilots with a visual confirmation of the controller's verbal clearance and is intended to prevent runway incursions. The Port..." "...colocated with the red lights. The visual presentation of an individual stop bar appears as either five red lights (see figure 2), or five green..."

  7. Predictive and postdictive mechanisms jointly contribute to visual awareness.

    PubMed

    Soga, Ryosuke; Akaishi, Rei; Sakai, Katsuyuki

    2009-09-01

    One of the fundamental issues in visual awareness is how we are able to perceive the scene in front of our eyes on time despite the delay in processing visual information. The prediction theory postulates that our visual system predicts the future to compensate for such delays. On the other hand, the postdiction theory postulates that our visual awareness is inevitably a delayed product. In the present study we used flash-lag paradigms in motion and color domains and examined how the perception of visual information at the time of flash is influenced by prior and subsequent visual events. We found that both types of event additively influence the perception of the present visual image, suggesting that our visual awareness results from joint contribution of predictive and postdictive mechanisms.

  8. Association of auditory-verbal and visual hallucinations with impaired and improved recognition of colored pictures.

    PubMed

    Brébion, Gildas; Stephan-Otto, Christian; Usall, Judith; Huerta-Ramos, Elena; Perez del Olmo, Mireia; Cuevas-Esteban, Jorge; Haro, Josep Maria; Ochoa, Susana

    2015-09-01

    A number of cognitive underpinnings of auditory hallucinations have been established in schizophrenia patients, but few have, as yet, been uncovered for visual hallucinations. In previous research, we unexpectedly observed that auditory hallucinations were associated with poor recognition of color, but not black-and-white (b/w), pictures. In this study, we attempted to replicate and explain this finding. Potential associations with visual hallucinations were explored. B/w and color pictures were presented to 50 schizophrenia patients and 45 healthy individuals under 2 conditions of visual context presentation corresponding to 2 levels of visual encoding complexity. Then, participants had to recognize the target pictures among distractors. Auditory-verbal hallucinations were inversely associated with the recognition of the color pictures presented under the most effortful encoding condition. This association was fully mediated by working-memory span. Visual hallucinations were associated with improved recognition of the color pictures presented under the less effortful condition. Patients suffering from visual hallucinations were not impaired, relative to the healthy participants, in the recognition of these pictures. Decreased working-memory span in patients with auditory-verbal hallucinations might impede the effortful encoding of stimuli. Visual hallucinations might be associated with facilitation in the visual encoding of natural scenes, or with enhanced color perception abilities.

  9. Remembering verbally-presented items as pictures: Brain activity underlying visual mental images in schizophrenia patients with visual hallucinations.

    PubMed

    Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Cuevas-Esteban, Jorge; Cambra-Martí, Maria Rosa; Ochoa, Susana; Brébion, Gildas

    2017-09-01

    Previous research suggests that visual hallucinations in schizophrenia consist of mental images mistaken for percepts due to failure of the reality-monitoring processes. However, the neural substrates that underpin such dysfunction are currently unknown. We conducted a brain imaging study to investigate the role of visual mental imagery in visual hallucinations. Twenty-three patients with schizophrenia and 26 healthy participants were administered a reality-monitoring task whilst undergoing an fMRI protocol. At the encoding phase, a mixture of pictures of common items and labels designating common items were presented. On the memory test, participants were requested to remember whether a picture of the item had been presented or merely its label. Visual hallucination scores were associated with a liberal response bias reflecting propensity to erroneously remember pictures of the items that had in fact been presented as words. At encoding, patients with visual hallucinations differentially activated the right fusiform gyrus when processing the words they later remembered as pictures, which suggests the formation of visual mental images. On the memory test, the whole patient group activated the anterior cingulate and medial superior frontal gyrus when falsely remembering pictures. However, no differential activation was observed in patients with visual hallucinations, whereas in the healthy sample, the production of visual mental images at encoding led to greater activation of a fronto-parietal decisional network on the memory test. Visual hallucinations are associated with enhanced visual imagery and possibly with a failure of the reality-monitoring processes that enable discrimination between imagined and perceived events.

  10. Visual disability, visual function, and myopia among rural chinese secondary school children: the Xichang Pediatric Refractive Error Study (X-PRES)--report 1.

    PubMed

    Congdon, Nathan; Wang, Yunfei; Song, Yue; Choi, Kai; Zhang, Mingzhi; Zhou, Zhongxia; Xie, Zhenling; Li, Liping; Liu, Xueyu; Sharma, Abhishek; Wu, Bin; Lam, Dennis S C

    2008-07-01

    To evaluate visual acuity, visual function, and prevalence of refractive error among Chinese secondary-school children in a cross-sectional school-based study. Uncorrected, presenting, and best corrected visual acuity, cycloplegic autorefraction with refinement, and self-reported visual function were assessed in a random, cluster sample of rural secondary school students in Xichang, China. Among the 1892 subjects (97.3% of the consenting children, 84.7% of the total sample), mean age was 14.7 +/- 0.8 years, 51.2% were female, and 26.4% were wearing glasses. The proportion of children with uncorrected, presenting, and corrected visual disability (≤6/12 in the better eye) was 41.2%, 19.3%, and 0.5%, respectively. Myopia < -0.5, < -2.0, and < -6.0 D in both eyes was present in 62.3%, 31.1%, and 1.9% of the subjects, respectively. Among the children with visual disability when tested without correction, 98.7% was due to refractive error, while only 53.8% (414/770) of these children had appropriate correction. The girls had significantly (P < 0.001) more presenting visual disability and myopia < -2.0 D than did the boys. More myopic refractive error was associated with worse self-reported visual function (ANOVA trend test, P < 0.001). Visual disability in this population was common, highly correctable, and frequently uncorrected. The impact of refractive error on self-reported visual function was significant. Strategies and studies to understand and remove barriers to spectacle wear are needed.

  11. The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.

    PubMed

    Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano

    2017-12-01

    Recent findings have shown that sounds improve visual detection in low-vision individuals when the auditory and visual stimuli are presented simultaneously and from the same spatial position. The present study aims to investigate the temporal aspects of the audiovisual enhancement effect previously reported. Low-vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously with or before the visual stimulus (i.e., SOAs 0, 100, 250, 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind the visual stimulus (i.e., SOAs 0, ± 250, ± 400 ms). The visual detection enhancement was reduced in magnitude and limited only to the synchronous condition and to the condition in which the sound stimulus was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study suggests that audiovisual interaction in low-vision individuals is highly modulated by top-down mechanisms.

  12. Developmental Changes in the Visual Span for Reading

    PubMed Central

    Kwon, MiYoung; Legge, Gordon E.; Dubbels, Brock R.

    2007-01-01

    The visual span for reading refers to the range of letters, formatted as in text, that can be recognized reliably without moving the eyes. It is likely that the size of the visual span is determined primarily by characteristics of early visual processing. It has been hypothesized that the size of the visual span imposes a fundamental limit on reading speed (Legge, Mansfield, & Chung, 2001). The goal of the present study was to investigate developmental changes in the size of the visual span in school-age children, and the potential impact of these changes on children’s reading speed. The study design included groups of 10 children in 3rd, 5th, and 7th grade, and 10 adults. Visual span profiles were measured by asking participants to recognize letters in trigrams (random strings of three letters) flashed for 100 ms at varying letter positions left and right of the fixation point. Two print sizes (0.25° and 1.0°) were used. Over a block of trials, a profile was built up showing letter recognition accuracy (% correct) versus letter position. The area under this profile was defined to be the size of the visual span. Reading speed was measured in two ways: with Rapid Serial Visual Presentation (RSVP) and with short blocks of text (termed Flashcard presentation). Consistent with our prediction, we found that the size of the visual span increased linearly with grade level and it was significantly correlated with reading speed for both presentation methods. Regression analysis using the size of the visual span as a predictor indicated that 34% to 52% of variability in reading speeds can be accounted for by the size of the visual span. These findings are consistent with a significant role of early visual processing in the development of reading skills. PMID:17845810
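
    The span-size and regression computations described above (area under the accuracy-by-position profile, then reading speed regressed on that area) can be sketched in a few lines of Python; the profile accuracies and reading speeds below are illustrative numbers, not the study's data:

    ```python
    def visual_span_size(accuracy_by_position):
        """Area under a visual span profile: summed letter-recognition
        accuracy (proportion correct) across letter positions."""
        return sum(accuracy_by_position)

    def linear_fit(x, y):
        """Ordinary least-squares slope and intercept for y ~ x."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
                 / sum((xi - mx) ** 2 for xi in x))
        return slope, my - slope * mx

    # Hypothetical profiles (% correct at 5 letter positions) for four readers
    profiles = [[60, 80, 95, 82, 58], [70, 88, 98, 90, 72],
                [80, 94, 99, 95, 83], [88, 97, 100, 97, 90]]
    spans = [visual_span_size(p) / 100 for p in profiles]  # in "letters"
    speeds = [140, 180, 220, 260]                          # words per minute
    slope, intercept = linear_fit(spans, speeds)
    print(round(slope, 1), round(intercept, 1))
    ```

    With real data, the squared correlation from such a fit is what yields the 34%-52% variance-accounted-for figures the study reports.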

  13. Examining competing hypotheses for the effects of diagrams on recall for text.

    PubMed

    Ortegren, Francesca R; Serra, Michael J; England, Benjamin D

    2015-01-01

    Supplementing text-based learning materials with diagrams typically increases students' free recall and cued recall of the presented information. In the present experiments, we examined competing hypotheses for why this occurs. More specifically, although diagrams are visual, they also serve to repeat information from the text they accompany. Both visual presentation and repetition are known to aid students' recall of information. To examine to what extent diagrams aid recall because they are visual or repetitive (or both), we had college students in two experiments (n = 320) read a science text about how lightning storms develop before completing free-recall and cued-recall tests over the presented information. Between groups, we manipulated the format and repetition of target pieces of information in the study materials using a 2 (visual presentation of target information: diagrams present vs. diagrams absent) × 2 (repetition of target information: present vs. absent) between-participants factorial design. Repetition increased both the free recall and cued recall of target information, and this occurred regardless of whether that repetition was in the form of text or a diagram. In contrast, the visual presentation of information never aided free recall. Furthermore, visual presentation alone did not significantly aid cued recall when participants studied the materials once before the test (Experiment 1) but did when they studied the materials twice (Experiment 2). Taken together, the results of the present experiments demonstrate the important role of repetition (i.e., that diagrams repeat information from the text) over the visual nature of diagrams in producing the benefits of diagrams for recall.

  14. Remembering the Specific Visual Details of Presented Objects: Neuroimaging Evidence for Effects of Emotion

    ERIC Educational Resources Information Center

    Kensinger, Elizabeth A.; Schacter, Daniel L.

    2007-01-01

    Memories can be retrieved with varied amounts of visual detail, and the emotional content of information can influence the likelihood that visual detail is remembered. In the present fMRI experiment (conducted with 19 adults scanned using a 3T magnet), we examined the neural processes that correspond with recognition of the visual details of…

  15. "The Mask Who Wasn't There": Visual Masking Effect with the Perceptual Absence of the Mask

    ERIC Educational Resources Information Center

    Rey, Amandine Eve; Riou, Benoit; Muller, Dominique; Dabic, Stéphanie; Versace, Rémy

    2015-01-01

    Does a visual mask need to be perceptually present to disrupt processing? In the present research, we proposed to explore the link between perceptual and memory mechanisms by demonstrating that a typical sensory phenomenon (visual masking) can be replicated at a memory level. Experiment 1 highlighted an interference effect of a visual mask on the…

  16. Information Visualization and Proposing New Interface for Movie Retrieval System (IMDB)

    ERIC Educational Resources Information Center

    Etemadpour, Ronak; Masood, Mona; Belaton, Bahari

    2010-01-01

    This research studies the development of a new prototype of visualization in support of movie retrieval. The goal of information visualization is to unveil large amounts of data or abstract data sets using visual presentation. With this knowledge, the main goal is to develop a 2D presentation of information on movies from the IMDB (Internet Movie…

  17. The Crossmodal Facilitation of Visual Object Representations by Sound: Evidence from the Backward Masking Paradigm

    ERIC Educational Resources Information Center

    Chen, Yi-Chuan; Spence, Charles

    2011-01-01

    We report a series of experiments designed to demonstrate that the presentation of a sound can facilitate the identification of a concomitantly presented visual target letter in the backward masking paradigm. Two visual letters, serving as the target and its mask, were presented successively at various interstimulus intervals (ISIs). The results…

  18. Auditory enhancement of visual perception at threshold depends on visual abilities.

    PubMed

    Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène

    2011-06-17

    Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events, is a long-standing debate. Here we revisit this question by testing the influence of auditory stimuli on visual detection thresholds in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient at improving perceptual performance when an isolated modality is deficient. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Different Strokes for Different Folks: Visual Presentation Design between Disciplines

    PubMed Central

    Gomez, Steven R.; Jianu, Radu; Ziemkiewicz, Caroline; Guo, Hua; Laidlaw, David H.

    2015-01-01

    We present an ethnographic study of design differences in visual presentations between academic disciplines. Characterizing design conventions between users and data domains is an important step in developing hypotheses, tools, and design guidelines for information visualization. In this paper, disciplines are compared at a coarse scale between four groups of fields: social, natural, and formal sciences; and the humanities. Two commonplace presentation types were analyzed: electronic slideshows and whiteboard “chalk talks”. We found design differences in slideshows using two methods – coding and comparing manually-selected features, like charts and diagrams, and an image-based analysis using PCA called eigenslides. In whiteboard talks with controlled topics, we observed design behaviors, including using representations and formalisms from a participant’s own discipline, that suggest authors might benefit from novel assistive tools for designing presentations. Based on these findings, we discuss opportunities for visualization ethnography and human-centered authoring tools for visual information. PMID:26357149
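The paper's "eigenslides" technique is described as an image-based analysis using PCA over slide images, in the spirit of eigenfaces. A minimal sketch of that idea on synthetic data follows; the array sizes and random "slides" here are invented stand-ins, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for slide screenshots: 50 slides, each a flattened
# 32x24 grayscale image (invented sizes)
slides = rng.random((50, 32 * 24))

# Center the data, then extract principal components via SVD
mean_slide = slides.mean(axis=0)
centered = slides - mean_slide
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Each row of Vt is an "eigenslide": a principal direction in image space
eigenslides = Vt[:5]               # keep the top 5 components
scores = centered @ eigenslides.T  # project each slide onto them

print(eigenslides.shape, scores.shape)  # (5, 768) (50, 5)
```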

  20. A novel brain-computer interface based on the rapid serial visual presentation paradigm.

    PubMed

    Acqualagna, Laura; Treder, Matthias Sebastian; Schreuder, Martijn; Blankertz, Benjamin

    2010-01-01

    Most present-day visual brain-computer interfaces (BCIs) suffer from the fact that they rely on eye movements, are slow-paced, or feature a small vocabulary. As a potential remedy, we explored a novel BCI paradigm consisting of a central rapid serial visual presentation (RSVP) of the stimuli. It has a large vocabulary and realizes a BCI system based on covert non-spatial selective visual attention. In an offline study, eight participants were presented with sequences of rapid bursts of symbols. Two different speeds and two different color conditions were investigated. Robust early visual and P300 components were elicited time-locked to the presentation of the target. Offline classification revealed a mean accuracy of up to 90% for selecting the correct symbol out of 30 possibilities. The results suggest that RSVP-BCI is a promising new paradigm, also for patients with oculomotor impairments.
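As a rough illustration of the offline target/non-target classification step (epochs time-locked to symbol onset, with a P300-like deflection for targets), here is a toy simulation. The sampling rate, amplitudes, analysis window, and threshold classifier are all invented for illustration; real RSVP-BCI analyses use multichannel EEG features and cross-validated classifiers.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100                       # Hz, assumed sampling rate
t = np.arange(0, 0.8, 1 / fs)  # one 800 ms epoch per symbol onset

def simulate_epoch(is_target):
    """One noisy single-channel epoch; targets carry a P300-like bump near 400 ms."""
    noise = rng.normal(0.0, 1.0, t.size)
    p300 = 2.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2)) if is_target else 0.0
    return noise + p300

labels = [i % 30 == 0 for i in range(300)]   # 1 target symbol out of 30
epochs = [simulate_epoch(lab) for lab in labels]

# Classify by mean amplitude in a 300-500 ms window (a crude stand-in for LDA)
window = (t >= 0.3) & (t <= 0.5)
threshold = 0.6  # chosen by eye for this simulation
predictions = [epoch[window].mean() > threshold for epoch in epochs]

accuracy = float(np.mean([p == lab for p, lab in zip(predictions, labels)]))
```

With the simulated effect size above, this crude window-mean classifier separates targets from non-targets almost perfectly; the point is only to show the structure of the analysis, not its real-world difficulty.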

  1. Different Strokes for Different Folks: Visual Presentation Design between Disciplines.

    PubMed

    Gomez, S R; Jianu, R; Ziemkiewicz, C; Guo, Hua; Laidlaw, D H

    2012-12-01

    We present an ethnographic study of design differences in visual presentations between academic disciplines. Characterizing design conventions between users and data domains is an important step in developing hypotheses, tools, and design guidelines for information visualization. In this paper, disciplines are compared at a coarse scale between four groups of fields: social, natural, and formal sciences; and the humanities. Two commonplace presentation types were analyzed: electronic slideshows and whiteboard "chalk talks". We found design differences in slideshows using two methods - coding and comparing manually-selected features, like charts and diagrams, and an image-based analysis using PCA called eigenslides. In whiteboard talks with controlled topics, we observed design behaviors, including using representations and formalisms from a participant's own discipline, that suggest authors might benefit from novel assistive tools for designing presentations. Based on these findings, we discuss opportunities for visualization ethnography and human-centered authoring tools for visual information.

  2. Risk Factors for Visual Impairment in an Uninsured Population and the Impact of the Affordable Care Act.

    PubMed

    Guo, Weixia; Woodward, Maria A; Heisler, Michele; Blachley, Taylor; Corneail, Leah; Cederna, Jean; Kaplan, Ariane D; Newman Casey, Paula Anne

    2016-01-01

    To assess risk factors for visual impairment in a high-risk population: people without medical insurance. Secondarily, we assessed risk factors for remaining uninsured after implementation of the Affordable Care Act (ACA) and evaluated whether the ACA changed demand for local safety net ophthalmology clinic services one year after its implementation. In a retrospective cohort study of patients who attended a community-academic partnership free ophthalmology clinic in Southeastern Michigan between September 2012 and March 2015, we assessed the prevalence of presenting with visual impairment and the most common causes of presenting with visual impairment, and used logistic regression to assess socio-demographic risk factors for visual impairment. We assessed the initial impact of the ACA on clinic utilization. We also analyzed risk factors for remaining uninsured one year after implementation of the ACA private insurance marketplace and Medicaid expansion in the state of Michigan. Among 335 patients, one-fifth (22%) presented with visual impairment; refractive error was the leading cause of presenting with visual impairment. Unemployment was the single significant risk factor for presenting with visual impairment after adjusting for multiple confounding factors (OR = 3.05, 95% CI 1.19-7.87, p=0.01). There was no difference in the proportion of visual impairment or type of vision-threatening disease between the insured and uninsured (p=0.26). Seventy-six percent of patients remained uninsured one year after ACA implementation. Patients who were white, spoke English as a first language, and were US citizens were more likely to gain insurance coverage through the ACA in our population (p≤0.01). There was a non-significant decline in the mean number of patients treated per clinic (52 to 43) before and after ACA implementation (p=0.69).
Refractive error was a leading cause for presenting with visual impairment in this vulnerable population, and being unemployed significantly increased the risk for presenting with visual impairment. The ACA did not significantly reduce the need for our free ophthalmology services. It is critically important to continue to support safety net specialty care initiatives and policy change to provide care for those in need.
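The reported association (OR = 3.05, 95% CI 1.19-7.87) comes from a multivariable logistic regression. To make the statistic concrete, here is the unadjusted calculation for a hypothetical 2 × 2 table with invented counts (not the study's data), using the standard Wald interval on the log-odds scale:

```python
import math

# Hypothetical counts (invented): rows split by employment status
a, b = 30, 40  # unemployed: visually impaired / not impaired
c, d = 20, 80  # employed:   visually impaired / not impaired

# Unadjusted odds ratio
odds_ratio = (a * d) / (b * c)  # (30 * 80) / (40 * 20) = 3.0

# 95% Wald confidence interval, computed on the log scale
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(round(odds_ratio, 2), round(ci_low, 2), round(ci_high, 2))  # 3.0 1.52 5.93
```

A multivariable model adjusts this estimate for confounders, which is why the study's adjusted OR differs from any raw 2 × 2 calculation.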

  3. Risk Factors for Visual Impairment in an Uninsured Population and the Impact of the Affordable Care Act

    PubMed Central

    Guo, Weixia; Woodward, Maria A; Heisler, Michele; Blachley, Taylor; Corneail, Leah; Cederna, Jean; Kaplan, Ariane D; Newman Casey, Paula Anne

    2017-01-01

    Purpose To assess risk factors for visual impairment in a high-risk population: people without medical insurance. Secondarily, we assessed risk factors for remaining uninsured after implementation of the Affordable Care Act (ACA) and evaluated whether the ACA changed demand for local safety net ophthalmology clinic services one year after its implementation. Methods In a retrospective cohort study of patients who attended a community-academic partnership free ophthalmology clinic in Southeastern Michigan between September 2012 and March 2015, we assessed the prevalence of presenting with visual impairment and the most common causes of presenting with visual impairment, and used logistic regression to assess socio-demographic risk factors for visual impairment. We assessed the initial impact of the ACA on clinic utilization. We also analyzed risk factors for remaining uninsured one year after implementation of the ACA private insurance marketplace and Medicaid expansion in the state of Michigan. Results Among 335 patients, one-fifth (22%) presented with visual impairment; refractive error was the leading cause of presenting with visual impairment. Unemployment was the single significant risk factor for presenting with visual impairment after adjusting for multiple confounding factors (OR = 3.05, 95% CI 1.19–7.87, p=0.01). There was no difference in the proportion of visual impairment or type of vision-threatening disease between the insured and uninsured (p=0.26). Seventy-six percent of patients remained uninsured one year after ACA implementation. Patients who were white, spoke English as a first language, and were US citizens were more likely to gain insurance coverage through the ACA in our population (p≤0.01). There was a non-significant decline in the mean number of patients treated per clinic (52 to 43) before and after ACA implementation (p=0.69).
Conclusion Refractive error was a leading cause for presenting with visual impairment in this vulnerable population, and being unemployed significantly increased the risk for presenting with visual impairment. The ACA did not significantly reduce the need for our free ophthalmology services. It is critically important to continue to support safety net specialty care initiatives and policy change to provide care for those in need. PMID:28593201

  4. Visualization of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process, which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e., real-time, interactive, or batch) and the source data have their effect on each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components, including specific visualization techniques, and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.

  5. Social media interruption affects the acquisition of visually, not aurally, acquired information during a pathophysiology lecture.

    PubMed

    Marone, Jane R; Thakkar, Shivam C; Suliman, Neveen; O'Neill, Shannon I; Doubleday, Alison F

    2018-06-01

    Poor academic performance from extensive social media usage appears to be due to students' inability to multitask between distractions and academic work. However, the degree to which visually distracted students can acquire lecture information presented aurally is unknown. This study examined the ability of students visually distracted by social media to acquire information presented during a voice-over PowerPoint lecture, and compared performance on examination questions derived from information presented aurally vs. that presented visually. Students (n = 20) listened to a 42-min cardiovascular pathophysiology lecture containing embedded cartoons while taking notes. The experimental group (n = 10) was visually, but not aurally, distracted by social media during times when cartoon information was presented, ~40% of total lecture time. Overall performance among distracted students on a follow-up, open-note quiz was 30% poorer than that for controls (P < 0.001). When the modality of presentation (visual vs. aural) was compared, performance decreased on examination questions from information presented visually. However, performance on questions from information presented aurally was similar to that of controls. Our findings suggest the ability to acquire information during lecture may vary, depending on the degree of competition between the modalities of the distraction and the lecture presentation. Within the context of current literature, our findings also suggest that timing of the distraction relative to delivery of material examined affects performance more than total distraction time. Therefore, when delivering lectures, instructors should incorporate organizational cues and active learning strategies that assist students in maintaining focus and acquiring relevant information.

  6. EventSlider User Manual

    DTIC Science & Technology

    2016-09-01

    EventSlider is a Windows Presentation Foundation (WPF) control developed using the .NET framework in Microsoft Visual Studio. As a WPF control, it can be used in any WPF application as a graphical visual element. The purpose of the control is to visually display time-related events as vertical lines on a … available on the control. Subject terms: Windows Presentation Foundation, WPF, control, C#, .NET framework, Microsoft Visual Studio

  7. Stereoscopic display of 3D models for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2006-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients, and decision makers in stereo. These presentations are more immersive and spatially realistic renderings of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  8. Long-term visual outcomes of craniopharyngioma in children.

    PubMed

    Wan, Michael J; Zapotocky, Michal; Bouffet, Eric; Bartels, Ute; Kulkarni, Abhaya V; Drake, James M

    2018-05-01

    Visual function is a critical factor in the diagnosis, monitoring, and prognosis of craniopharyngiomas in children. The aim of this study was to report the long-term visual outcomes in a cohort of pediatric patients with craniopharyngioma. The study design is a retrospective chart review of craniopharyngioma patients from a single tertiary-care pediatric hospital. 59 patients were included in the study. Mean age at presentation was 9.4 years old (range 0.7-18.0 years old). The most common presenting features were headache (76%), nausea/vomiting (32%), and vision loss (31%). Median follow-up was 5.2 years (range 1.0-17.2 years). During follow-up, visual decline occurred in 17 patients (29%). On Kaplan Meier survival analysis, 47% of the cases of visual decline occurred within 4 months of diagnosis, with the remaining cases occurring sporadically during follow-up (up to 8 years after diagnosis). In terms of risk factors, younger age at diagnosis, optic nerve edema at presentation, and tumor recurrence were found to have statistically significant associations with visual decline. At final follow-up, 58% of the patients had visual impairment in at least one eye but only 10% were legally blind in both eyes (visual acuity 20/200 or worse or < 20° of visual field). Vision loss is a common presenting symptom of craniopharyngiomas in children. After diagnosis, monitoring vision is important as about 30% of patients will experience significant visual decline. Long-term vision loss occurs in the majority of patients, but severe binocular visual impairment is uncommon.

  9. Effects of using visualization and animation in presentations to communities about forest succession and fire behavior potential

    Treesearch

    Jane Kapler Smith; Donald E. Zimmerman; Carol Akerelrea; Garrett O'Keefe

    2008-01-01

    Natural resource managers use a variety of computer-mediated presentation methods to communicate management practices to the public. We explored the effects of using the Stand Visualization System to visualize and animate predictions from the Forest Vegetation Simulator-Fire and Fuels Extension in presentations explaining forest succession (forest growth and change...

  10. Visual communication in presentation on physics

    NASA Astrophysics Data System (ADS)

    Grebenyuk, Konstantin A.

    2005-06-01

    It is essential that the audience be attentive during a lecture, report, or other presentation on physics. We therefore have to take care of both speech and visual communication with the audience. Three important aspects of the successful use of visual aids are singled out in this paper. The main idea is that physicists could appreciably increase the efficiency of their presentations by using these simple principles of the art of presentation. The recommendations offered are the results of a review of the specialized literature, the author's observations, and experience communicating with skilled masters of presentation.

  11. Clinical Profile and Visual Outcome of Ocular Bartonellosis in Malaysia

    PubMed Central

    Tan, Chai Lee; Fhun, Lai Chan; Abdul Gani, Nor Hasnida; Muhammed, Julieana; Tuan Jaafar, Tengku Norina

    2017-01-01

    Background. Ocular bartonellosis can present in various ways, with variable visual outcome. There is limited data on ocular bartonellosis in Malaysia. Objective. We aim to describe the clinical presentation and visual outcome of ocular bartonellosis in Malaysia. Materials and Methods. This was a retrospective review of patients treated for ocular bartonellosis in two ophthalmology centers in Malaysia between January 2013 and December 2015. The diagnosis was based on clinical features, supported by a positive Bartonella spp. serology. Results. Of the 19 patients in our series, females were predominant (63.2%). The mean age was 29.3 years. The majority (63.2%) had unilateral involvement. Five patients (26.3%) had a history of contact with cats. Neuroretinitis was the most common presentation (62.5%). Azithromycin was the antibiotic of choice (42.1%). Concurrent systemic corticosteroids were used in approximately 60% of cases. The presenting visual acuity was worse than 6/18 in approximately 60% of eyes; on final review, 76.9% of eyes had a visual acuity better than 6/18. Conclusion. Ocular bartonellosis tends to present with neuroretinitis. Azithromycin is a viable option for treatment. Systemic corticosteroids may be considered in those with poor visual acuity on presentation. PMID:28265290

  12. Explaining the Colavita visual dominance effect.

    PubMed

    Spence, Charles

    2009-01-01

    The last couple of years have seen a resurgence of interest in the Colavita visual dominance effect. In the basic experimental paradigm, a random series of auditory, visual, and audiovisual stimuli are presented to participants who are instructed to make one response whenever they see a visual target and another response whenever they hear an auditory target. Many studies have now shown that participants sometimes fail to respond to auditory targets when they are presented at the same time as visual targets (i.e., on the bimodal trials), despite the fact that they have no problems in responding to the auditory and visual stimuli when they are presented individually. The existence of the Colavita visual dominance effect provides an intriguing contrast with the results of the many other recent studies showing the superiority of multisensory (over unisensory) information processing in humans. Various accounts have been put forward over the years in order to try and explain the effect, including the suggestion that it reflects nothing more than an underlying bias to attend to the visual modality. Here, the empirical literature on the Colavita visual dominance effect is reviewed and some of the key factors modulating the effect highlighted. The available research has now provided evidence against all previous accounts of the Colavita effect. A novel explanation of the Colavita effect is therefore put forward here, one that is based on the latest findings highlighting the asymmetrical effect that auditory and visual stimuli exert on people's responses to stimuli presented in the other modality.

  13. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing

    PubMed Central

    Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias

    2018-01-01

    Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Specifically, facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., a unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants' accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent, as well as a spatially non-informative, auditory cue resulted in lateral asymmetries. Specifically, search times were increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants' performance in the congruent condition was modulated by their tone localisation accuracy. 
The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637

  14. Independent sources of anisotropy in visual orientation representation: a visual and a cognitive oblique effect.

    PubMed

    Balikou, Panagiota; Gourtzelidis, Pavlos; Mantas, Asimakis; Moutoussis, Konstantinos; Evdokimidis, Ioannis; Smyrnis, Nikolaos

    2015-11-01

    The representation of visual orientation is more accurate for cardinal orientations compared to oblique ones, and this anisotropy has been hypothesized to reflect a low-level visual process (visual, "class 1" oblique effect). The reproduction of directional and orientation information also leads to a mean error away from cardinal orientations or directions. This anisotropy has been hypothesized to reflect a high-level cognitive process of space categorization (cognitive, "class 2" oblique effect). This space categorization process would be more prominent when the visual representation of orientation degrades, such as in the case of working memory with increasing cognitive load, leading to increasing magnitude of the "class 2" oblique effect, while the "class 1" oblique effect would remain unchanged. Two experiments were performed in which an array of orientation stimuli (1-4 items) was presented and then subjects had to realign a probe stimulus within the previously presented array. In the first experiment, the delay between stimulus presentation and probe varied, while in the second experiment, the stimulus presentation time varied. The variable error was larger for oblique compared to cardinal orientations in both experiments, reproducing the visual "class 1" oblique effect. The mean error also reproduced the tendency away from cardinal and toward the oblique orientations in both experiments (cognitive "class 2" oblique effect). The accuracy of the reproduced orientation degraded (increasing variable error) and the cognitive "class 2" oblique effect increased with increasing memory load (number of items) in both experiments and presentation time in the second experiment. In contrast, the visual "class 1" oblique effect was not significantly modulated by any one of these experimental factors. 
These results confirmed the theoretical predictions for the two anisotropies in visual orientation reproduction and provided support for models proposing the categorization of orientation in visual working memory.
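The two dependent measures in this study can be made concrete: for a set of reproductions of one true orientation, the mean signed error indexes the categorical bias (the cognitive "class 2" measure), while the spread of the errors indexes precision (whose oblique/cardinal difference is the visual "class 1" effect). A toy sketch with invented responses:

```python
import numpy as np

# Invented reproductions (in degrees) of a true vertical (90 degree) stimulus
true_orientation = 90.0
responses = np.array([88.0, 91.5, 89.0, 92.0, 90.5, 87.5, 91.0, 90.0])

errors = responses - true_orientation
mean_error = errors.mean()           # signed bias (cognitive "class 2" measure)
variable_error = errors.std(ddof=1)  # precision (visual "class 1" measure)

print(round(mean_error, 4))  # -0.0625
```

In the study's terms, a larger variable error for oblique than cardinal stimuli is a "class 1" effect, while a mean error pushed away from the cardinal axes is a "class 2" effect.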

  15. Ultra-Rapid serial visual presentation reveals dynamics of feedforward and feedback processes in the ventral visual pathway.

    PubMed

    Mohsenzadeh, Yalda; Qin, Sheng; Cichy, Radoslaw M; Pantazis, Dimitrios

    2018-06-21

    Human visual recognition activates a dense network of overlapping feedforward and recurrent neuronal processes, making it hard to disentangle processing in the feedforward from the feedback direction. Here, we used ultra-rapid serial visual presentation to suppress sustained activity that blurs the boundaries of processing steps, enabling us to resolve two distinct stages of processing with MEG multivariate pattern classification. The first processing stage was the rapid activation cascade of the bottom-up sweep, which terminated early as visual stimuli were presented at progressively faster rates. The second stage was the emergence of categorical information with peak latency that shifted later in time with progressively faster stimulus presentations, indexing time-consuming recurrent processing. Using MEG-fMRI fusion with representational similarity, we localized recurrent signals in early visual cortex. Together, our findings segregated an initial bottom-up sweep from subsequent feedback processing, and revealed the neural signature of increased recurrent processing demands for challenging viewing conditions. © 2018, Mohsenzadeh et al.

  16. Clinical presentation and visual status of retinitis pigmentosa patients: a multicenter study in southwestern Nigeria.

    PubMed

    Onakpoya, Oluwatoyin Helen; Adeoti, Caroline Olufunlayo; Oluleye, Tunji Sunday; Ajayi, Iyiade Adeseye; Majengbasan, Timothy; Olorundare, Olayemi Kolawole

    2016-01-01

    To review the visual status and clinical presentation of patients with retinitis pigmentosa (RP). A multicenter, retrospective, analytical review was conducted of the visual status and clinical characteristics of patients with RP at first presentation from January 2007 to December 2011. The main outcome measure was the World Health Organization's visual status classification in relation to sex and age at presentation. Data were analyzed with SPSS (version 15), and statistical significance was assumed at P<0.05. One hundred and ninety-two eyes of 96 patients with a mean age of 39.08±18.5 years and a mode of 25 years constituted the study population; 55 (57.3%) were males and 41 (42.7%) females. Loss of vision 67 (69.8%) and night blindness 56 (58.3%) were the leading symptoms. Twenty-one (21.9%) patients had a positive family history, with RP present in their siblings 15 (71.4%), grandparents 11 (52.3%), and parents 4 (19.4%). Forty (41.7%) were blind at presentation and 23 (24%) were visually impaired. Blindness in six (15%) patients was secondary to glaucoma. Retinal vascular narrowing and retinal pigmentary changes of varying severity were present in all patients. Thirty-five (36.5%) had maculopathy, 36 (37.5%) refractive error, 19 (20%) lenticular opacities, and eleven (11.5%) had glaucoma. RP was typical in 85 patients (88.5%). Older patients had higher rates of blindness at presentation (P=0.005); blindness and visual impairment rates at presentation were higher in males than females (P=0.029). Clinical presentation with advanced disease, higher blindness rates in older patients, sex-related differences in blindness/visual impairment rates, and high rates of glaucoma blindness in RP patients require urgent attention in southwestern Nigeria.

  17. Visualization rhetoric: framing effects in narrative visualization.

    PubMed

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

    Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels (the data, the visual representation, textual annotations, and interactivity) and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation. © 2011 IEEE

  18. Effects of audio-visual presentation of target words in word translation training

    NASA Astrophysics Data System (ADS)

    Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko

    2004-05-01

    Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language, and the appropriate meaning in another language has to be chosen between two choices. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasted in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV) presentation. Identification accuracy of those words produced by two talkers was also assessed. During the pretest, accuracy for A stimuli was lowest, implying that insufficient translation ability and listening ability interact with each other when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translated the words depending on visual information only. The effect of translation training using AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]

  19. Bringing color to emotion: The influence of color on attentional bias to briefly presented emotional images.

    PubMed

    Bekhtereva, Valeria; Müller, Matthias M

    2017-10-01

    Is color a critical feature in emotional content extraction and involuntary attentional orienting toward affective stimuli? Here we used briefly presented emotional distractors to investigate the extent to which color information can influence the time course of attentional bias in early visual cortex. While participants performed a demanding visual foreground task, complex unpleasant and neutral background images were displayed in color or grayscale format for a short period of 133 ms and were immediately masked. Such a short presentation poses a challenge for visual processing. In the visual detection task, participants attended to flickering squares that elicited the steady-state visual evoked potential (SSVEP), allowing us to analyze the temporal dynamics of the competition for processing resources in early visual cortex. Concurrently we measured the visual event-related potentials (ERPs) evoked by the unpleasant and neutral background scenes. The results showed (a) that the distraction effect was greater with color than with grayscale images and (b) that it lasted longer with colored unpleasant distractor images. Furthermore, classical and mass-univariate ERP analyses indicated that, when presented in color, emotional scenes elicited more pronounced early negativities (N1-EPN) relative to neutral scenes, than when the scenes were presented in grayscale. Consistent with neural data, unpleasant scenes were rated as being more emotionally negative and received slightly higher arousal values when they were shown in color than when they were presented in grayscale. Taken together, these findings provide evidence for the modulatory role of picture color on a cascade of coordinated perceptual processes: by facilitating the higher-level extraction of emotional content, color influences the duration of the attentional bias to briefly presented affective scenes in lower-tier visual areas.
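
    The frequency-tagging logic behind the SSVEP measure above is simple to sketch: the flicker response is quantified as the Fourier amplitude of the EEG at the flicker frequency. Below is a minimal single-bin DFT sketch on a synthetic signal; the 500 Hz sampling rate and 12 Hz tag frequency are illustrative assumptions, not values taken from the study:

```python
import math

def ssvep_amplitude(signal, fs, f_tag):
    """Amplitude of the frequency-tagged component: project the signal
    onto a sine/cosine pair at the tag frequency (single-bin DFT)."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * f_tag * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * f_tag * i / fs) for i, x in enumerate(signal))
    return 2 * math.sqrt(re * re + im * im) / n

# Synthetic 1-s "EEG": a 12 Hz flicker response of amplitude 1.5
# buried in a 7 Hz background oscillation (parameters are illustrative).
fs, f_tag = 500, 12
sig = [1.5 * math.sin(2 * math.pi * f_tag * i / fs)
       + 0.8 * math.sin(2 * math.pi * 7 * i / fs) for i in range(fs)]
print(round(ssvep_amplitude(sig, fs, f_tag), 2))  # prints 1.5, the embedded amplitude
```

    Because the two sinusoids complete whole numbers of cycles in the window, the background component is orthogonal to the tag-frequency basis and drops out exactly.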

  20. Top-down preparation modulates visual categorization but not subjective awareness of objects presented in natural backgrounds.

    PubMed

    Koivisto, Mika; Kahila, Ella

    2017-04-01

    Top-down processes are widely assumed to be essential in visual awareness, the subjective experience of seeing. However, previous studies have not tried to separate directly the roles of different types of top-down influences in visual awareness. We studied the effects of top-down preparation and object substitution masking (OSM) on visual awareness during categorization of objects presented in natural scene backgrounds. The results showed that preparation facilitated categorization but did not influence visual awareness. OSM reduced visual awareness and impaired categorization. The dissociations between the effects of preparation and OSM on visual awareness and on categorization imply that they act at different stages of cognitive processing. We propose that preparation operates at the top of the visual hierarchy, whereas OSM interferes with processes occurring at lower levels of the hierarchy. These lower-level processes play an essential role in visual awareness. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.

    PubMed

    Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B

    2003-04-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
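
    The visual enhancement measure R(a) used above is conventionally computed as the audiovisual gain normalized by the headroom remaining in the auditory-only condition, i.e. R(a) = (AV - A) / (1 - A) for proportion-correct scores. This is the standard formulation in the audiovisual speech literature; the abstract itself does not spell it out. A minimal sketch:

```python
def visual_enhancement(av_correct, a_correct):
    """Ra = (AV - A) / (1 - A): the fraction of the headroom above
    auditory-only performance that the visual channel recovers."""
    if a_correct >= 1.0:
        return 0.0  # ceiling: no headroom left to improve upon
    return (av_correct - a_correct) / (1.0 - a_correct)

# 50% correct auditory-only, 80% correct audiovisual:
print(round(visual_enhancement(0.80, 0.50), 2))  # prints 0.6
```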

  3. Repetition blindness and illusory conjunctions: errors in binding visual types with visual tokens.

    PubMed

    Kanwisher, N

    1991-05-01

    Repetition blindness (Kanwisher, 1986, 1987) has been defined as the failure to detect or recall repetitions of words presented in rapid serial visual presentation (RSVP). The experiments presented here suggest that repetition blindness (RB) is a more general visual phenomenon, and examine its relationship to feature integration theory (Treisman & Gelade, 1980). Experiment 1 shows RB for letters distributed through space, time, or both. Experiment 2 demonstrates RB for repeated colors in RSVP lists. In Experiments 3 and 4, RB was found for repeated letters and colors in spatial arrays. Experiment 5 provides evidence that the mental representations of discrete objects (called "visual tokens" here) that are necessary to detect visual repetitions (Kanwisher, 1987) are the same as the "object files" (Kahneman & Treisman, 1984) in which visual features are conjoined. In Experiment 6, repetition blindness for the second occurrence of a repeated letter resulted only when the first occurrence was attended to. The overall results suggest that a general dissociation between types and tokens in visual information processing can account for both repetition blindness and illusory conjunctions.

  4. Auditory Emotional Cues Enhance Visual Perception

    ERIC Educational Resources Information Center

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  5. The Effects of Verbal Elaboration and Visual Elaboration on Student Learning.

    ERIC Educational Resources Information Center

    Chanlin, Lih-Juan

    1997-01-01

    This study examined: (1) the effectiveness of integrating verbal elaboration (metaphors) and different visual presentation strategies (still and animated graphics) in learning biotechnology concepts; (2) whether the use of verbal elaboration with different visual presentation strategies facilitates cognitive processes; and (3) how students employ…

  6. Childhood visual impairment: normal and abnormal visual function in the context of developmental disability.

    PubMed

    Nyong'o, Omondi L; Del Monte, Monte A

    2008-12-01

    Abnormal or failed development of vision in children may give rise to varying degrees of visual impairment and disability. Disease- and organ-specific mechanisms by which visual impairments arise are presented, together with an explanation of established pathologic processes and correlative up-to-date clinical and social research in pediatrics, ophthalmology, and rehabilitation medicine. The goal of this article is to enhance the practitioner's recognition of and care for children with developmental disability associated with visual impairment.

  7. Imagery and Visual Literacy: Selected Readings from the Annual Conference of the International Visual Literacy Association (26th, Tempe, Arizona, October 12-16, 1994).

    ERIC Educational Resources Information Center

    Beauchamp, Darrell G.; And Others

    This document contains selected conference papers all relating to visual literacy. The topics include: process issues in visual literacy; interpreting visual statements; what teachers need to know; multimedia presentations; distance education materials for correctional use; visual culture; audio-visual interaction in desktop multimedia; the…

  8. The effect of linguistic and visual salience in visual world studies.

    PubMed

    Cavicchio, Federica; Melcher, David; Poesio, Massimo

    2014-01-01

    Research using the visual world paradigm has demonstrated that visual input has a rapid effect on language interpretation tasks such as reference resolution and, conversely, that linguistic material, including verbs, prepositions and adjectives, can influence fixations to potential referents. More recent research has started to explore how this effect of linguistic input on fixations is mediated by properties of the visual stimulus, in particular by visual salience. In the present study we further explored the role of salience in the visual world paradigm, manipulating language-driven salience and visual salience. Specifically, we tested how linguistic salience (i.e., the greater accessibility of linguistically introduced entities) and visual salience (bottom-up, attention-grabbing visual aspects) interact. We recorded participants' eye-movements during a MapTask, asking them to look from landmark to landmark displayed upon a map while hearing direction-giving instructions. The landmarks were of comparable size and color, except in the Visual Salience condition, in which one landmark had been made more visually salient. In the Linguistic Salience conditions, the instructions included references to an object not on the map. Response times and fixations were recorded. Visual Salience influenced the time course of fixations at both the beginning and the end of the trial but did not show a significant effect on response times. Linguistic Salience reduced response times and increased fixations to landmarks when they were associated with a linguistically salient entity not itself present on the map. When the target landmark was both visually and linguistically salient, it was fixated longer, but fixations were quicker when the target item was linguistically salient only. Our results suggest that the two types of salience work in parallel and that linguistic salience affects fixations even when the entity is not visually present.

  9. Electrophysiological evidence for Audio-visuo-lingual speech integration.

    PubMed

    Treille, Avril; Vilain, Coriandre; Schwartz, Jean-Luc; Hueber, Thomas; Sato, Marc

    2018-01-31

    Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual (lipread) speech cues relies on their joint probability and prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences of audio-visual speech integration between unusual audio-visuo-lingual and classical audio-visuo-labial modalities. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, with lingual and facial movements previously recorded by an ultrasound imaging system and a video camera. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both audio-visuo-lingual and audio-visuo-labial conditions compared to the sum of unimodal conditions. These results argue against the view that auditory and visual speech cues solely integrate based on prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Progressive posterior cortical dysfunction

    PubMed Central

    Porto, Fábio Henrique de Gobbi; Machado, Gislaine Cristina Lopes; Morillo, Lilian Schafirovits; Brucki, Sonia Maria Dozzi

    2010-01-01

    Progressive posterior cortical dysfunction (PPCD) is an insidious syndrome characterized by prominent disorders of higher visual processing. It affects both dorsal (occipito-parietal) and ventral (occipito-temporal) pathways, disturbing visuospatial processing and visual recognition, respectively. We report a case of a 67-year-old woman presenting with progressive impairment of visual functions. Neurologic examination showed agraphia, alexia, hemispatial neglect (left side visual extinction), complete Balint’s syndrome and visual agnosia. Magnetic resonance imaging showed circumscribed atrophy involving the bilateral parieto-occipital regions, slightly more predominant to the right. Our aim was to describe a case of this syndrome, to present a video showing the main abnormalities, and to discuss this unusual presentation of dementia. We believe this article can contribute by improving the recognition of PPCD. PMID:29213665

  11. Dual Coding in Children.

    ERIC Educational Resources Information Center

    Burton, John K.; Wildman, Terry M.

    The purpose of this study was to test the applicability of the dual coding hypothesis to children's recall performance. The hypothesis predicts that visual interference will have a small effect on the recall of visually presented words or pictures, but that acoustic interference will cause a decline in recall of visually presented words and…

  12. Spatial Working Memory Effects in Early Visual Cortex

    ERIC Educational Resources Information Center

    Munneke, Jaap; Heslenfeld, Dirk J.; Theeuwes, Jan

    2010-01-01

    The present study investigated how spatial working memory recruits early visual cortex. Participants were required to maintain a location in working memory while changes in blood oxygen level dependent (BOLD) signals were measured during the retention interval in which no visual stimulation was present. We show working memory effects during the…

  13. Automating Geospatial Visualizations with Smart Default Renderers for Data Exploration Web Applications

    NASA Astrophysics Data System (ADS)

    Ekenes, K.

    2017-12-01

    This presentation will outline the process of creating a web application for exploring large amounts of scientific geospatial data using modern automated cartographic techniques. Traditional cartographic methods, including data classification, may inadvertently hide geospatial and statistical patterns in the underlying data. This presentation demonstrates how to use smart web APIs that quickly analyze the data when it loads and suggest the most appropriate visualizations based on the statistics of the data. Since many users never go beyond default values, it is imperative to provide smart default color schemes tailored to the dataset rather than static defaults. Multiple functions for automating visualizations are available in the Smart APIs, along with UI elements that let users create more than one visualization for a dataset, since there is rarely a single best way to visualize a given dataset. Because bivariate and multivariate visualizations are particularly difficult to create effectively, this automated approach takes the guesswork out of the process and provides a number of ways to generate multivariate visualizations for the same variables, allowing the user to choose the visualization most appropriate for their presentation. The methods used in these APIs and the renderers generated by them are not available elsewhere. The presentation will show how statistics can be used as the basis for automating default visualizations of data along continuous ramps, creating more refined visualizations while revealing the spread and outliers of the data. Adding interactive components to instantaneously alter visualizations allows users to unearth spatial patterns previously unknown among one or more variables. These applications may focus on a single dataset that is frequently updated, or be configurable for a variety of datasets from multiple sources.
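
    The statistics-driven defaults described in this abstract can be approximated in a few lines: compute the mean and standard deviation of the field, anchor a continuous color ramp at mean ± one standard deviation (clamped to the data range), and let outliers saturate at the ramp's ends. The sketch below is illustrative only and is not the Smart API itself:

```python
import statistics

def smart_ramp_bounds(values, spread=1.0):
    """Suggest ramp endpoints at mean +/- spread * stdev, clamped to the
    data range, so default colors reflect the spread rather than extremes."""
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    return max(min(values), mean - spread * sd), min(max(values), mean + spread * sd)

def to_ramp_position(value, lo, hi):
    """Map a value to [0, 1] along the ramp, saturating outliers."""
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))

data = [3, 4, 5, 5, 6, 7, 8, 40]  # one extreme outlier
lo, hi = smart_ramp_bounds(data)
print(to_ramp_position(40, lo, hi))  # the outlier saturates at 1.0
```

    Anchoring the ramp at the spread rather than the raw min/max keeps the bulk of the data in the ramp's interior, which is exactly what fixed-extreme defaults fail to do.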

  14. Visual communication of engineering and scientific data in the courtroom

    NASA Astrophysics Data System (ADS)

    Jackson, Gerald W.; Henry, Andrew C.

    1993-01-01

    Presenting engineering and scientific information in the courtroom is challenging. Quite often the data are voluminous and therefore difficult for engineering experts, let alone a lay judge, lawyer, or jury, to digest. This paper discusses computer visualization techniques designed to provide the court with methods of communicating data in visual formats, allowing a more accurate understanding of complicated concepts and results. Examples presented include accident reconstructions, technical concept illustration, and engineering data visualization. Also presented is the design of an electronic courtroom which facilitates the display and communication of information to the courtroom.

  15. The role of prestimulus activity in visual extinction☆

    PubMed Central

    Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl

    2013-01-01

    Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. PMID:23680398

  17. Macular pigment and visual performance in glare: benefits for photostress recovery, disability glare, and visual discomfort.

    PubMed

    Stringham, James M; Garcia, Paul V; Smith, Peter A; McLin, Leon N; Foutch, Brian K

    2011-09-22

    One proposed function of macular pigment (MP) in the fovea is to improve visual performance in glare. This study sought to determine the effect of MP level on three aspects of visual performance in glare: photostress recovery, disability glare, and visual discomfort. Twenty-six subjects participated in the study. Spatial profiles of MP optical density were assessed with heterochromatic flicker photometry. Glare was delivered via high-brightness white LEDs. For the disability glare and photostress recovery portions of the experiment, the visual task consisted of correct identification of a 1° Gabor patch's orientation. Visual discomfort during the glare presentation was assessed with a visual discomfort rating scale. Pupil diameter was monitored with an infrared (IR) camera. MP level correlated significantly with all the outcome measures. Higher MP optical densities (MPODs) resulted in faster photostress recovery times (average P < 0.003), lower disability glare contrast thresholds (average P < 0.004), and lower visual discomfort (P = 0.002). Smaller pupil diameter during glare presentation significantly correlated with higher visual discomfort ratings (P = 0.037). MP correlates with three aspects of visual performance in glare. Unlike previous studies of MP and glare, the present study used free-viewing conditions, in which effects of iris pigmentation and pupil size could be accounted for. The effects described, therefore, can be extended more confidently to real-world, practical visual performance benefits. Greater iris constriction resulted (paradoxically) in greater visual discomfort. This finding may be attributable to the neurobiologic mechanism that mediates the pain elicited by light.

  18. Visual Communication: Its Process and Effects.

    ERIC Educational Resources Information Center

    Metallinos, Nikos

    The process and effects of visual communication are examined in this paper. The first section, "Visual Literacy," discusses the need for a visual literacy involving an understanding of the instruments, materials, and techniques of visual communication media; it then presents and discusses a model illustrating factors involved in the…

  19. Examining the cognitive demands of analogy instructions compared to explicit instructions.

    PubMed

    Tse, Choi Yeung Andy; Wong, Andus; Whitehill, Tara; Ma, Estella; Masters, Rich

    2016-10-01

    In many learning domains, instructions are presented explicitly despite high cognitive demands associated with their processing. This study examined cognitive demands imposed on working memory by different types of instruction to speak with maximum pitch variation: visual analogy, verbal analogy and explicit verbal instruction. Forty participants were asked to memorise a set of 16 visual and verbal stimuli while reading aloud a Cantonese paragraph with maximum pitch variation. Instructions about how to achieve maximum pitch variation were presented via visual analogy, verbal analogy, explicit rules or no instruction. Pitch variation was assessed off-line, using standard deviation of fundamental frequency. Immediately after reading, participants recalled as many stimuli as possible. Analogy instructions resulted in significantly increased pitch variation compared to explicit instructions or no instructions. Explicit instructions resulted in poorest recall of stimuli. Visual analogy instructions resulted in significantly poorer recall of visual stimuli than verbal stimuli. The findings suggest that non-propositional instructions presented via analogy may be less cognitively demanding than instructions that are presented explicitly. Processing analogy instructions that are presented as a visual representation is likely to load primarily visuospatial components of working memory rather than phonological components. The findings are discussed with reference to speech therapy and human cognition.
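
    The pitch-variation outcome used above, the standard deviation of fundamental frequency, is straightforward to compute from an F0 contour once unvoiced frames are excluded. A minimal sketch (the 0-Hz-for-unvoiced coding is an assumed convention, not stated in the abstract):

```python
import statistics

def pitch_variation(f0_contour_hz):
    """Standard deviation of F0 over voiced frames only; unvoiced
    frames are assumed to be coded as 0 Hz and are dropped."""
    voiced = [f for f in f0_contour_hz if f > 0]
    return statistics.stdev(voiced)

flat = [120, 121, 0, 119, 120, 0, 121]   # monotone reading
varied = [100, 160, 0, 90, 180, 0, 140]  # expressive reading
print(pitch_variation(varied) > pitch_variation(flat))  # prints True
```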

  20. Visual Task Demands and the Auditory Mismatch Negativity: An Empirical Study and a Meta-Analysis

    PubMed Central

    Wiens, Stefan; Szychowska, Malina; Nilsson, Mats E.

    2016-01-01

    Because the auditory system is particularly useful in monitoring the environment, previous research has examined whether task-irrelevant, auditory distracters are processed even if subjects focus their attention on visual stimuli. This research suggests that attentionally demanding visual tasks decrease the auditory mismatch negativity (MMN) to simultaneously presented auditory distractors. Because a recent behavioral study found that high visual perceptual load decreased detection sensitivity of simultaneous tones, we used a similar task (n = 28) to determine if high visual perceptual load would reduce the auditory MMN. Results suggested that perceptual load did not decrease the MMN. At face value, these nonsignificant findings may suggest that effects of perceptual load on the MMN are smaller than those of other demanding visual tasks. If so, effect sizes should differ systematically between the present and previous studies. We conducted a selective meta-analysis of published studies in which the MMN was derived from the EEG, the visual task demands were continuous and varied between high and low within the same task, and the task-irrelevant tones were presented in a typical oddball paradigm simultaneously with the visual stimuli. Because the meta-analysis suggested that the present (null) findings did not differ systematically from previous findings, the available evidence was combined. Results of this meta-analysis confirmed that demanding visual tasks reduce the MMN to auditory distracters. However, because the meta-analysis was based on small studies and because of the risk for publication biases, future studies should be preregistered with large samples (n > 150) to provide confirmatory evidence for the results of the present meta-analysis. These future studies should also use control conditions that reduce confounding effects of neural adaptation, and use load manipulations that are defined independently from their effects on the MMN. PMID:26741815
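
    The inverse-variance pooling at the heart of such a fixed-effect meta-analysis can be sketched in a few lines; the study effects and standard errors below are hypothetical placeholders, not values from the studies this record analyzes.

```python
# Illustrative fixed-effect (inverse-variance) meta-analysis, the standard
# way of combining per-study effect estimates such as high-minus-low-load
# MMN amplitude differences. All numbers are hypothetical.

def inverse_variance_pool(effects, ses):
    """Pool effect estimates weighted by 1/SE^2; return (pooled, pooled_se)."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

effects = [-0.42, -0.15, -0.30]  # hypothetical per-study MMN differences (uV)
ses = [0.20, 0.25, 0.18]         # hypothetical standard errors
est, se = inverse_variance_pool(effects, ses)
ci = (est - 1.96 * se, est + 1.96 * se)  # 95% confidence interval
```

    With weights of 1/SE^2, larger and more precise studies dominate the pooled estimate, which is one reason the authors caution that small studies and publication bias can distort such a result.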

  1. A hierarchical, retinotopic proto-organization of the primate visual system at birth

    PubMed Central

    Arcaro, Michael J; Livingstone, Margaret S

    2017-01-01

    The adult primate visual system comprises a series of hierarchically organized areas. Each cortical area contains a topographic map of visual space, with different areas extracting different kinds of information from the retinal input. Here we asked to what extent the newborn visual system resembles the adult organization. We find that hierarchical, topographic organization is present at birth and therefore constitutes a proto-organization for the entire primate visual system. Even within inferior temporal cortex, this proto-organization was already present, prior to the emergence of category selectivity (e.g., faces or scenes). We propose that this topographic organization provides the scaffolding for the subsequent development of visual cortex that commences at the onset of visual experience. DOI: http://dx.doi.org/10.7554/eLife.26196.001 PMID:28671063

  2. Visual impairment and spectacle use in schoolchildren in rural and urban regions in Beijing.

    PubMed

    Guo, Yin; Liu, Li Juan; Xu, Liang; Lv, Yan Yun; Tang, Ping; Feng, Yi; Meng, Lei; Jonas, Jost B

    2014-01-01

    To determine prevalence and associations of visual impairment and frequency of spectacle use among grade 1 and grade 4 students in Beijing. This school-based, cross-sectional study included 382 grade 1 children (age 6.3 ± 0.5 years) and 299 grade 4 children (age 9.4 ± 0.7 years) who underwent a comprehensive eye examination including visual acuity, noncycloplegic refractometry, and ocular biometry. Presenting visual acuity (mean 0.04 ± 0.17 logMAR) was associated with younger age (p = 0.002), hyperopic refractive error (p<0.001), and male sex (p = 0.03). Presenting visual impairment (presenting visual acuity ≤20/40 in the better eye) was found in 44 children (prevalence 6.64 ± 1.0% [95% confidence interval (CI) 4.74, 8.54]). Mean best-corrected visual acuity (right eyes -0.02 ± 0.04 logMAR) was associated with more hyperopic refractive error (p = 0.03) and rural region of habitation (p<0.001). The prevalence of best-corrected visual impairment (best-corrected visual acuity ≤20/40 in the better eye) was 2/652 (0.30 ± 0.21% [95% CI 0.00, 0.72]). Undercorrection of refractive error was present in 53 children (7.99 ± 1.05%) and was associated with older age (p = 0.003; B 0.53; OR 1.71 [95% CI 1.20, 2.42]), myopic refractive error (p = 0.001; B -0.72; OR 0.49 [95% CI 0.35, 0.68]), and longer axial length (p = 0.002; B 0.74; OR 2.10 [95% CI 1.32, 3.32]). Spectacle use was reported for 54 children (8.14 ± 1.06%). Mean refractive error of the worse eyes of these children was -2.09 ± 2.88 D (range -7.38 to +7.25 D). Factors associated with presenting visual impairment were older age, myopic refractive error, and higher maternal education level. Despite a prevalence of myopia of 33% in young schoolchildren in Greater Beijing, prevalence of best-corrected visual impairment (0.30% ± 0.21%), presenting visual impairment (6.64% ± 1.0%), and undercorrection of refractive error (7.99% ± 1.05%) were relatively low.
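
    Several records in this collection report acuity both as logMAR and as Snellen fractions such as 20/40. The two scales are related by logMAR = log10(MAR), where a 20/x Snellen fraction has a minimum angle of resolution (MAR) of x/20; a minimal conversion sketch:

```python
import math

def logmar_to_snellen_denominator(logmar):
    """Denominator x of the 20/x Snellen fraction equivalent to a logMAR value."""
    return 20 * 10 ** logmar

def snellen_to_logmar(denominator):
    """logMAR value equivalent to a 20/x Snellen fraction."""
    return math.log10(denominator / 20)

# The 20/40 impairment cutoff used above corresponds to logMAR ~0.30,
# and 20/20 corresponds to logMAR 0.0.
```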

  3. Listeners' expectation of room acoustical parameters based on visual cues

    NASA Astrophysics Data System (ADS)

    Valente, Daniel L.

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters---early-to-late reverberant energy ratio and reverberation time---of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that many visual cues caused different perceived events of the acoustic environment. These included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer. The addressed visual makeup of the performer included: (1) an actual video of the performance, (2) a surrogate image of the performance, for example a loudspeaker's image reproducing the performance, (3) no visual image of the performance (empty room), or (4) a multi-source visual stimulus (actual video of the performance coupled with two images of loudspeakers positioned to the left and right of the performer). For this experiment, perceived auditory events of sound were measured in terms of two subjective spatial metrics: Listener Envelopment (LEV) and Apparent Source Width (ASW). These metrics were hypothesized to be dependent on the visual imagery of the presented performance. Data were also collected by having participants match direct and reverberant sound levels for the presented audio-visual scenes. In the final experiment, participants judged spatial expectations of an ensemble of musicians presented in the five physical spaces from Experiment 1. Supporting data were accumulated in two stages. First, participants were given an audio-visual matching test, in which they were instructed to align the auditory width of a performing ensemble to a varying set of audio and visual cues. In the second stage, a conjoint analysis design paradigm was explored to extrapolate the relative magnitude of the explored audio-visual factors in affecting three assessed response criteria: Congruency (the perceived match-up of the auditory and visual cues in the assessed performance), ASW and LEV. Results show that both auditory and visual factors affect the collected responses, and that the two sensory modalities coincide in distinct interactions. This study reveals participant resiliency in the presence of forced auditory-visual mismatch: participants are able to adjust the acoustic component of the cross-modal environment in a statistically similar way despite randomized starting values for the monitored parameters. Subjective results of the experiments are presented along with objective measurements for verification.

  4. Five-year study of ocular injuries due to fireworks in India.

    PubMed

    Malik, Archana; Bhala, Soniya; Arya, Sudesh K; Sood, Sunandan; Narang, Subina

    2013-08-01

    To study the demographic profile, cause, type and severity of ocular injuries, their complications and final visual outcome following fireworks around the time of Deepawali in India. Case records of patients who presented with firework-related injuries during 2005-2009 at the time of Deepawali were reviewed. Data with respect to demographic profile of patients, cause and time of injury, time of presentation and types of intervention were analyzed. Visual acuity at presentation and final follow-up, anterior and posterior segment findings, and any diagnostic and surgical interventions carried out were noted. One hundred and one patients presented with firework-related ocular injuries, of which 77.5% were male. The mean age was 17.60 ± 11.9 years, with 54% being ≤14 years of age. The mean time of presentation was 8.9 h. Seventeen patients had open globe injury (OGI) and 84 had closed globe injury (CGI). Fountains were the most common cause of CGI and bullet bombs were the most common cause of OGI. Mean logMAR visual acuity at presentation was 0.64 and 1.22 and at last follow-up was 0.09 and 0.58 for CGI and OGI, respectively (p < 0.05). Patients with CGI had a better visual outcome. Three patients with OGI developed permanent blindness. Factors associated with poor visual outcome included poor initial visual acuity, OGI, intraocular foreign body (IOFB), retinal detachment and development of endophthalmitis. Firework injuries were seen mostly in males and children. Poor visual outcome was associated with poor initial visual acuity, OGI, IOFB, retinal detachment and development of endophthalmitis, while most patients with CGI regained good vision.

  5. Visualizing without Vision at the Microscale: Students with Visual Impairments Explore Cells with Touch

    ERIC Educational Resources Information Center

    Jones, M. Gail; Minogue, James; Oppewal, Tom; Cook, Michelle P.; Broadwell, Bethany

    2006-01-01

    Science instruction is typically highly dependent on visual representations of scientific concepts that are communicated through textbooks, teacher presentations, and computer-based multimedia materials. Little is known about how students with visual impairments access and interpret these types of visually-dependent instructional materials. This…

  6. Metabolic Pathways Visualization Skills Development by Undergraduate Students

    ERIC Educational Resources Information Center

    dos Santos, Vanessa J. S. V.; Galembeck, Eduardo

    2015-01-01

    We have developed a metabolic pathways visualization skill test (MPVST) to gain greater insight into our students' abilities to comprehend the visual information presented in metabolic pathways diagrams. The test is able to discriminate students' visualization ability with respect to six specific visualization skills that we identified as key to…

  7. Visualization as an Aid to Problem-Solving: Examples from History.

    ERIC Educational Resources Information Center

    Rieber, Lloyd P.

    This paper presents a historical overview of visualization as a human problem-solving tool. Visualization strategies, such as mental imagery, pervade historical accounts of scientific discovery and invention. A selected number of historical examples are presented and discussed on a wide range of topics such as physics, aviation, and the science of…

  8. Effective Engineering Presentations through Teaching Visual Literacy Skills.

    ERIC Educational Resources Information Center

    Kerns, H. Dan; And Others

    This paper describes a faculty resource team in the Bradley University (Illinois) Department of Industrial Engineering that works with student project teams in an effort to improve their visualization and oral presentation skills. Students use state of the art technology to develop and display their visuals. In addition to technology, students are…

  9. Presentation Technology in the Age of Electronic Eloquence: From Visual Aid to Visual Rhetoric

    ERIC Educational Resources Information Center

    Cyphert, Dale

    2007-01-01

    Attention to presentation technology in the public speaking classroom has grown along with its contemporary use, but instruction generally positions the topic as a subset of visual aids. As contemporary public discourse enters an age of electronic eloquence, instructional focus on verbal communication might limit students' capacity to effectively…

  10. Got Rhythm...For Better and for Worse. Cross-Modal Effects of Auditory Rhythm on Visual Word Recognition

    ERIC Educational Resources Information Center

    Brochard, Renaud; Tassin, Maxime; Zagar, Daniel

    2013-01-01

    The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…

  11. 14 CFR 382.69 - What requirements must carriers meet concerning the accessibility of videos, DVDs, and other...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...

  12. 14 CFR 382.69 - What requirements must carriers meet concerning the accessibility of videos, DVDs, and other...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...

  13. Social Studies for the Visually Impaired Child. MAVIS Sourcebook 4.

    ERIC Educational Resources Information Center

    Singleton, Laurel R.

    Suggestions are made in this sourcebook for adapting teaching strategies and curriculum materials in social studies to accommodate the needs of the visually impaired (VI) student. It is presented in eight chapters. Chapter one explains why elementary grade social studies, with its emphasis on visual media, presents difficulties for VI children.…

  14. Query2Question: Translating Visualization Interaction into Natural Language.

    PubMed

    Nafari, Maryam; Weaver, Chris

    2015-06-01

    Richly interactive visualization tools are increasingly popular for data exploration and analysis in a wide variety of domains. Existing systems and techniques for recording provenance of interaction focus either on comprehensive automated recording of low-level interaction events or on idiosyncratic manual transcription of high-level analysis activities. In this paper, we present the architecture and translation design of a query-to-question (Q2Q) system that automatically records user interactions and presents them semantically using natural language (written English). Q2Q takes advantage of domain knowledge and uses natural language generation (NLG) techniques to translate and transcribe a progression of interactive visualization states into a visual log of styled text that complements and effectively extends the functionality of visualization tools. We present Q2Q as a means to support a cross-examination process in which questions rather than interactions are the focus of analytic reasoning and action. We describe the architecture and implementation of the Q2Q system, discuss key design factors and variations that affect question generation, and present several visualizations that incorporate Q2Q for analysis in a variety of knowledge domains.
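
    The translation step the abstract describes can be pictured as template-based NLG; the event vocabulary and question templates below are hypothetical illustrations in the spirit of Q2Q, not its actual design.

```python
# Hypothetical sketch of template-based question generation from a log of
# visualization interactions. Templates and event fields are illustrative.
TEMPLATES = {
    "filter": "Which records have {attribute} equal to {value}?",
    "sort": "How do the records rank by {attribute}?",
}

def interaction_to_question(event):
    """Render one recorded interaction event as an English question."""
    return TEMPLATES[event["action"]].format(
        attribute=event["attribute"], value=event.get("value", ""))

log = [
    {"action": "filter", "attribute": "year", "value": "2015"},
    {"action": "sort", "attribute": "sales"},
]
questions = [interaction_to_question(e) for e in log]
```

    Real systems of this kind add domain knowledge to pick templates and fill slots; the point here is only the interaction-to-question mapping itself.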

  15. The Presentation: A New Genre in Business Communication.

    ERIC Educational Resources Information Center

    Carney, Thomas F.

    1992-01-01

    Discusses the value and importance of presentation graphics. Deals with using storyboards to design presentations, design principles and construction guidelines, subliminals (overtext, intertextuality, and color), choosing a medium for visuals, choosing a computer program to generate visuals, and design similarities between presentation visuals…

  16. The importance of laughing in your face: influences of visual laughter on auditory laughter perception.

    PubMed

    Jordan, Timothy R; Abedipour, Lily

    2010-01-01

    Hearing the sound of laughter is important for social communication, but processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, similar visual enhancement of the audibility of laughter remains unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.

  17. Should visual speech cues (speechreading) be considered when fitting hearing aids?

    NASA Astrophysics Data System (ADS)

    Grant, Ken

    2002-05-01

    When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.

  18. Effects of body lean and visual information on the equilibrium maintenance during stance.

    PubMed

    Duarte, Marcos; Zatsiorsky, Vladimir M

    2002-09-01

    Maintenance of equilibrium was tested in conditions when humans assume different leaning postures during upright standing. Subjects (n=11) stood in 13 different body postures specified by visual center of pressure (COP) targets within their base of support (BOS). Different types of visual information were tested: continuous presentation of visual target, no vision after target presentation, and with simultaneous visual feedback of the COP. The following variables were used to describe the equilibrium maintenance: the mean of the COP position, the area of the ellipse covering the COP sway, and the resultant median frequency of the power spectral density of the COP displacement. The variability of the COP displacement, quantified by the COP area variable, increased when subjects occupied leaning postures, irrespective of the kind of visual information provided. This variability also increased when vision was removed in relation to when vision was present. Without vision, drifts in the COP data were observed which were larger for COP targets farther away from the neutral position. When COP feedback was given in addition to the visual target, the postural control system did not control stance better than in the condition with only visual information. These results indicate that the visual information is used by the postural control system at both short and long time scales.

  19. Changes in the distribution of sustained attention alter the perceived structure of visual space.

    PubMed

    Fortenbaugh, Francesca C; Robertson, Lynn C; Esterman, Michael

    2017-02-01

    Visual spatial attention is a critical process that allows for the selection and enhanced processing of relevant objects and locations. While studies have shown attentional modulations of perceived location and the representation of distance information across multiple objects, there remains disagreement regarding what influence spatial attention has on the underlying structure of visual space. The present study utilized a method of magnitude estimation in which participants must judge the location of briefly presented targets within the boundaries of their individual visual fields in the absence of any other objects or boundaries. Spatial uncertainty of target locations was used to assess perceived locations across distributed and focused attention conditions without the use of external stimuli, such as visual cues. Across two experiments we tested locations along the cardinal and 45° oblique axes. We demonstrate that focusing attention within a region of space can expand the perceived size of visual space, even in cases where doing so makes performance less accurate. Moreover, the results of the present studies show that when fixation is actively maintained, focusing attention along a visual axis leads to an asymmetrical stretching of visual space that is predominantly focused across the central half of the visual field, consistent with an expansive gradient along the focus of voluntary attention. These results demonstrate that focusing sustained attention peripherally during active fixation leads to an asymmetrical expansion of visual space within the central visual field. Published by Elsevier Ltd.

  20. Data Visualization and Storytelling: Students Showcasing Innovative Work on the NASA Hyperwall

    NASA Astrophysics Data System (ADS)

    Hankin, E. R.; Hasan, M.; Williams, B. M.; Harwell, D. E.

    2017-12-01

    Visual storytelling can be used to quickly and effectively tell a story about data and scientific research, with powerful visuals driving a deeper level of engagement. In 2016, the American Geophysical Union (AGU) launched a pilot contest with a grant from NASA to fund students to travel to the AGU Fall Meeting to present innovative data visualizations with fascinating stories on the NASA Hyperwall. This presentation will discuss the purpose of the contest and provide highlights. Additionally, the presentation will feature Mejs Hasan, one of the 2016 contest grand prize winners, who will discuss her award-winning research utilizing Landsat visual data, MODIS Enhanced Vegetation Index data, and NOAA nightlight data to study the effects of both drought and war on the Middle East.

  1. CTViz: A tool for the visualization of transport in nanocomposites.

    PubMed

    Beach, Benjamin; Brown, Joshua; Tarlton, Taylor; Derosa, Pedro A

    2016-05-01

    A visualization tool (CTViz) for charge transport processes in 3-D hybrid materials (nanocomposites) was developed, inspired by the need for a graphical application to assist in code debugging and data presentation of an existing in-house code. As the simulation code grew, troubleshooting problems grew increasingly difficult without an effective way to visualize 3-D samples and charge transport in those samples. CTViz is able to produce publication and presentation quality visuals of the simulation box, as well as static and animated visuals of the paths of individual carriers through the sample. CTViz was designed to provide a high degree of flexibility in the visualization of the data. A feature that characterizes this tool is the use of shade and transparency levels to highlight important details in the morphology or in the transport paths by hiding or dimming elements of little relevance to the current view. This is fundamental for the visualization of 3-D systems with complex structures. The code presented here provides these required capabilities, but has gone beyond the original design and could be used as is or easily adapted for the visualization of other particulate transport where transport occurs on discrete paths. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear.

    PubMed

    Willems, Roel M; Clevis, Krien; Hagoort, Peter

    2011-09-01

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants' brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information.

  3. Impact of visual acuity on developing literacy at age 4-5 years: a cohort-nested cross-sectional study.

    PubMed

    Bruce, Alison; Fairley, Lesley; Chambers, Bette; Wright, John; Sheldon, Trevor A

    2016-02-16

    To estimate the prevalence of poor vision in children aged 4-5 years and determine the impact of visual acuity on literacy. Cross-sectional study linking clinical, epidemiological and education data. Schools located in the city of Bradford, UK. Prevalence was determined for 11,186 children participating in the Bradford school vision screening programme. Data linkage was undertaken for 5836 Born in Bradford (BiB) birth cohort study children participating both in the Bradford vision screening programme and the BiB Starting Schools Programme. 2025 children had complete data and were included in the multivariable analyses. Visual acuity was measured using a logMAR Crowded Test (higher scores=poorer visual acuity). Literacy measured by Woodcock Reading Mastery Tests-Revised (WRMT-R) subtest: letter identification (standardised). The mean (SD) presenting visual acuity was 0.14 (0.09) logMAR (range 0.0-1.0). 9% of children had a presenting visual acuity worse than 0.2logMAR (failed vision screening), 4% worse than 0.3logMAR (poor visual acuity) and 2% worse than 0.4logMAR (visually impaired). Unadjusted analysis showed that the literacy score was associated with presenting visual acuity, reducing by 2.4 points for every 1 line (0.10logMAR) reduction in vision (95% CI -3.0 to -1.9). The association of presenting visual acuity with the literacy score remained significant after adjustment for demographic and socioeconomic factors reducing by 1.7 points (95% CI -2.2 to -1.1) for every 1 line reduction in vision. Prevalence of decreased visual acuity was high compared with other population-based studies. Decreased visual acuity at school entry is associated with reduced literacy. This may have important implications for the children's future educational, health and social outcomes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
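
    The adjusted estimate above (a 1.7-point drop in literacy score per line of acuity, where one line is 0.10 logMAR) lends itself to a back-of-envelope calculation:

```python
# Point estimate from the adjusted model reported in the record above.
ADJ_POINTS_PER_LINE = -1.7

def predicted_literacy_change(delta_logmar):
    """Expected change in literacy score for a given worsening of
    presenting acuity, in logMAR units (0.10 logMAR = one chart line)."""
    return ADJ_POINTS_PER_LINE * (delta_logmar / 0.10)
```

    A child presenting 0.3 logMAR (three lines) worse than a peer would thus be expected to score about 5 points lower; this is an average association in the cohort after adjustment, not an individual prediction.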

  4. Clinical characteristics in 53 patients with cat scratch optic neuropathy.

    PubMed

    Chi, Sulene L; Stinnett, Sandra; Eggenberger, Eric; Foroozan, Rod; Golnik, Karl; Lee, Michael S; Bhatti, M Tariq

    2012-01-01

    To describe the clinical manifestations and to identify risk factors associated with visual outcome in a large cohort of patients with cat scratch optic neuropathy (CSON). Multicenter, retrospective chart review. Fifty-three patients (62 eyes) with serologically positive CSON from 5 academic neuro-ophthalmology services evaluated over an 11-year period. Institutional review board/ethics committee approval was obtained. Data from medical record charts were collected to detail the clinical manifestations and to analyze visual outcome metrics. Generalized estimating equations and logistic regression analysis were used in the statistical analysis. Six patients (9 eyes) were excluded from visual outcome statistical analysis because of a lack of follow-up. Demographic information, symptoms at presentation, clinical characteristics, length of follow-up, treatment used, and visual acuity (at presentation and final follow-up). Mean patient age was 27.8 years (range, 8-65 years). Mean follow-up time was 170.8 days (range, 1-1482 days). Simultaneous bilateral involvement occurred in 9 (17%) of 53 patients. Visual acuity on presentation ranged from 20/20 to counting fingers (mean, 20/160). Sixty-eight percent of eyes retained a visual acuity of 20/40 or better at final follow-up (defined as favorable visual outcome). Sixty-seven percent of patients endorsed a history of cat or kitten scratch. Neuroretinitis (macular star) developed in 28 eyes (45%). Only 5 patients had significant visual complications (branch retinal artery occlusion, macular hole, and corneal decompensation). Neither patient age nor any other factor except good initial visual acuity and absence of systemic symptoms was associated with a favorable visual outcome. There was no association between visual acuity at final follow-up and systemic antibiotic or steroid use. Patients with CSON have a good overall visual prognosis. Good visual acuity at presentation was associated with a favorable visual outcome. The absence of a macular star does not exclude the possibility of CSON. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2012 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  5. Sensitivity to timing and order in human visual cortex

    PubMed Central

    Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.

    2014-01-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116

  6. Profiling Oman education data using data visualization technique

    NASA Astrophysics Data System (ADS)

    Alalawi, Sultan Juma Sultan; Shaharanee, Izwan Nizal Mohd; Jamil, Jastini Mohd

    2016-10-01

    This research work presents an innovative data visualization technique for understanding and visualizing the information in Oman's education data generated from the Ministry of Education Oman "Educational Portal". The Ministry of Education in the Sultanate of Oman maintains huge databases containing massive amounts of information. The volume of data in the databases increases yearly as many students, teachers and employees enter the system. The task of discovering and analyzing these vast volumes of data becomes increasingly difficult. Information visualization and data mining offer better ways of dealing with large volumes of information. In this paper, an innovative information visualization technique is developed to visualize the complex multidimensional educational data. Microsoft Excel Dashboard, Visual Basic for Applications (VBA) and Pivot Tables are utilized to visualize the data. Findings from the summarization of the data are presented, and it is argued that information visualization can help related stakeholders become aware of hidden and interesting information in the large amounts of data residing in their educational portal.
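The core pivot-table aggregation that the paper performs in Excel can be sketched in a few lines of plain Python. The record fields and values below are invented for illustration and are not the portal's actual schema.

```python
from collections import Counter

# Hypothetical records of the kind an educational portal might export;
# the field names ("governorate", "year", "role") are assumptions.
records = [
    {"governorate": "Muscat", "year": 2014, "role": "student"},
    {"governorate": "Muscat", "year": 2015, "role": "student"},
    {"governorate": "Dhofar", "year": 2014, "role": "teacher"},
    {"governorate": "Dhofar", "year": 2015, "role": "student"},
]

# Count records per (governorate, year) cell, as an Excel PivotTable would.
pivot = Counter((r["governorate"], r["year"]) for r in records)
```

Each key of `pivot` is one cell of the cross-tabulation, which is the same summarization an Excel Pivot Table produces before charting.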

  7. Visual impairment and traits of autism in children.

    PubMed

    Wrzesińska, Magdalena; Kapias, Joanna; Nowakowska-Domagała, Katarzyna; Kocur, Józef

    2017-04-30

    Visual impairment present from birth or early childhood may lead to psychosocial and emotional disorders. 11-40% of children with visual impairment show traits of autism. The aim of this paper was to present selected examples of how visual impairment in children is related to the occurrence of autism and to describe the available tools for diagnosing autism in children with visual impairment. So far, the relation between visual impairment in children and autism has not been sufficiently confirmed. Psychiatric and psychological diagnosis of children with visual impairment presents some difficulty in differentiating between "blindism" and traits typical of autism, resulting from a lack of standardized diagnostic tools for children with visual impairment. Another difficulty in diagnosing autism in children with visual impairment is the coexistence of other disabilities in most of these children. Additionally, apart from the difficulties in diagnosing autistic disorders in children with eye dysfunctions, there is also the question of what tools should be used in the therapy and rehabilitation of these patients.

  8. The use of visual cues in gravity judgements on parabolic motion.

    PubMed

    Jörges, Björn; Hagenfeld, Lena; López-Moliner, Joan

    2018-06-21

    Evidence suggests that humans rely on an earth-gravity prior for sensory-motor tasks like catching or reaching. Even under earth-discrepant conditions, this prior biases perception and action towards assuming a gravitational downwards acceleration of 9.81 m/s². This can be particularly detrimental in interactions with virtual environments employing earth-discrepant gravity conditions for their visual presentation. The present study thus investigates how well humans discriminate visually presented gravities and which cues they use to extract gravity from the visual scene. To this end, we employed a two-interval forced-choice design. In Experiment 1, participants had to judge which of two presented parabolas had the higher underlying gravity. We used two initial vertical velocities, two horizontal velocities and a constant target size. Experiment 2 added a manipulation of the reliability of the target size. Experiment 1 shows that participants have generally high discrimination thresholds for visually presented gravities, with Weber fractions ranging from 13% to beyond 30%. We identified the rate of change of the elevation angle (ẏ) and the visual angle (θ) as major cues. Experiment 2 furthermore suggests that size variability has a small influence on discrimination thresholds, while at the same time larger size variability increases reliance on ẏ and decreases reliance on θ. All in all, even when all available information is used, humans display low precision when extracting the governing gravity from a visual scene, which might further limit our ability to adapt to earth-discrepant gravity conditions with visual information alone. Copyright © 2018. Published by Elsevier Ltd.
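The elevation-angle cue the study identifies can be made concrete numerically: for a parabolic target viewed by a stationary observer, the rate of change of the elevation angle (ẏ) depends on the underlying gravity. The sketch below is illustrative only; the geometry and parameter values are invented and do not reproduce the study's actual stimuli.

```python
import math

def elevation_angle_rate(t, v0y, vx, g=9.81, x0=10.0, dt=1e-4):
    """Numerically estimate the rate of change of the elevation angle
    (rad/s) of a projectile seen by an observer at the origin.

    The target starts x0 metres away, approaches horizontally at vx m/s,
    and is launched with vertical velocity v0y m/s under gravity g.
    All values are illustrative assumptions, not the study's parameters.
    """
    def elevation(tt):
        x = x0 - vx * tt                 # horizontal distance to observer
        y = v0y * tt - 0.5 * g * tt**2   # height above launch level
        return math.atan2(y, x)
    # Central finite difference around time t.
    return (elevation(t + dt) - elevation(t - dt)) / (2 * dt)

# A higher gravity makes the parabola fall sooner, so at the same instant
# the elevation angle is already decreasing under earth gravity while it
# is still increasing under a weaker, earth-discrepant gravity:
r_earth = elevation_angle_rate(0.5, v0y=4.0, vx=6.0, g=9.81)
r_low   = elevation_angle_rate(0.5, v0y=4.0, vx=6.0, g=7.0)
```

The sign and magnitude of this rate is exactly the kind of information a discrimination judgment between two parabolas could exploit.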

  9. Verifying visual properties in sentence verification facilitates picture recognition memory.

    PubMed

    Pecher, Diane; Zanolie, Kiki; Zeelenberg, René

    2007-01-01

    According to the perceptual symbols theory (Barsalou, 1999), sensorimotor simulations underlie the representation of concepts. We investigated whether recognition memory for pictures of concepts was facilitated by earlier representation of visual properties of those concepts. During study, concept names (e.g., apple) were presented in a property verification task with a visual property (e.g., shiny) or with a nonvisual property (e.g., tart). Delayed picture recognition memory was better if the concept name had been presented with a visual property than if it had been presented with a nonvisual property. These results indicate that modality-specific simulations are used for concept representation.

  10. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    PubMed

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Audio-visual speech intelligibility benefits with bilateral cochlear implants when talker location varies.

    PubMed

    van Hoesel, Richard J M

    2015-04-01

    One of the key benefits of using cochlear implants (CIs) in both ears rather than just one is improved localization. It is likely that in complex listening scenes, improved localization allows bilateral CI users to orient toward talkers to improve signal-to-noise ratios and gain access to visual cues, but to date, that conjecture has not been tested. To obtain an objective measure of that benefit, seven bilateral CI users were assessed for both auditory-only and audio-visual speech intelligibility in noise using a novel dynamic spatial audio-visual test paradigm. For each trial, conducted in spatially distributed noise, an auditory-only cueing phrase spoken by one of four talkers was first presented from one of four locations. Shortly afterward, a target sentence was presented that was either audio-visual or, in another test configuration, audio-only and was spoken by the same talker and from the same location as the cueing phrase. During the target presentation, visual distractors were added at other spatial locations. Results showed that in terms of speech reception thresholds (SRTs), the average improvement for bilateral listening over the better performing ear alone was 9 dB for the audio-visual mode, and 3 dB for audition-alone. Comparison of bilateral performance for audio-visual and audition-alone showed that inclusion of visual cues led to an average SRT improvement of 5 dB. For unilateral device use, no such benefit arose, presumably due to the greatly reduced ability to localize the target talker to acquire visual information. The bilateral CI speech intelligibility advantage over the better ear in the present study is much larger than that previously reported for static talker locations and indicates greater everyday speech benefits and a better cost-benefit ratio than previously estimated.

  12. Auditory, Visual, and Auditory-Visual Perception of Vowels by Hearing-Impaired Children.

    ERIC Educational Resources Information Center

    Hack, Zarita Caplan; Erber, Norman P.

    1982-01-01

    Vowels were presented through auditory, visual, and auditory-visual modalities to 18 hearing impaired children (12 to 15 years old) having good, intermediate, and poor auditory word recognition skills. All the groups had difficulty with acoustic information and visual information alone. The first two groups had only moderate difficulty identifying…

  13. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    ERIC Educational Resources Information Center

    Wilson, Amanda H.; Alsius, Agnès; Paré, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  14. Helping Children with Visual and Motor Impairments Make the Most of Their Visual Abilities.

    ERIC Educational Resources Information Center

    Amerson, Marie J.

    1999-01-01

    Lists strategies for promoting functional vision use in children with visual and motor impairments, including providing postural stability, presenting visual attention tasks when energy level is the highest, using a slanted work surface, placing target items in varied locations within reach, and determining the most effective visual adaptations.…

  15. Qualitative Differences in the Representation of Abstract versus Concrete Words: Evidence from the Visual-World Paradigm

    ERIC Educational Resources Information Center

    Dunabeitia, Jon Andoni; Aviles, Alberto; Afonso, Olivia; Scheepers, Christoph; Carreiras, Manuel

    2009-01-01

    In the present visual-world experiment, participants were presented with visual displays that included a target item that was a semantic associate of an abstract or a concrete word. This manipulation allowed us to test a basic prediction derived from the qualitatively different representational framework that supports the view of different…

  16. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    ERIC Educational Resources Information Center

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  17. Ingredients to Successful Students Presentations: It's More Than Just a Sum of Raw Materials.

    ERIC Educational Resources Information Center

    Kerns, H. Dan; Johnson, Nial

    Recognizing the decline in student visual communication skills, faculty from different disciplines collaborated in the design of a visual literacy course. In the course, students develop visual literacy skills in the following ways: (1) through faculty presentation and demonstration of the various tools available; (2) with…

  18. Effect of microgravity on visual contrast threshold during STS Shuttle missions: Visual Function Tester-Model 2 (VFT-2)

    NASA Technical Reports Server (NTRS)

    Oneal, Melvin R.; Task, H. Lee; Genco, Louis V.

    1992-01-01

    Viewgraphs on the effect of microgravity on visual contrast threshold during STS shuttle missions are presented. The purpose, methods, and results are discussed. The Visual Function Tester-Model 2 (VFT-2) is used.

  19. The prevalence and causes of visual impairment in seven-year-old children.

    PubMed

    Ghaderi, Soraya; Hashemi, Hassan; Jafarzadehpur, Ebrahim; Yekta, Abbasali; Ostadimoghaddam, Hadi; Mirzajani, Ali; Khabazkhoob, Mehdi

    2018-05-01

    To report the prevalence and causes of visual impairment in seven-year-old children in Iran and its relationship with socio-economic conditions. In a cross-sectional population-based study, first-grade students in the primary schools of eight cities in the country were randomly selected from different geographic locations using multistage cluster sampling. The examinations included visual acuity measurement, ocular motility evaluation, and cycloplegic and non-cycloplegic refraction. Using the definitions of the World Health Organization (presenting visual acuity less than or equal to 6/18 in the better eye) to estimate the prevalence of vision impairment, the present study reported presenting visual impairment in seven-year-old children. Of 4,614 selected students, 4,106 students participated in the study (response rate 89 per cent), of whom 2,127 (51.8 per cent) were male. The prevalence of visual impairment according to a visual acuity of 6/18 was 0.341 per cent (95 per cent confidence interval 0.187-0.571); 1.34 per cent (95 per cent confidence interval 1.011-1.74) of children had visual impairment according to a visual acuity of 6/18 in at least one eye. Sixty-six (1.6 per cent) and 23 (0.24 per cent) children had visual impairment according to a visual acuity of 6/12 in the worse and better eye, respectively. The most common causes of visual impairment were refractive errors (81.8 per cent) and amblyopia (14.5 per cent). Among different types of refractive errors, astigmatism was the main refractive error leading to visual impairment. According to the concentration index, the distribution of visual impairment in children from low-income families was higher. This study revealed a high prevalence of visual impairment in a representative sample of seven-year-old Iranian children. Astigmatism and amblyopia were the most common causes of visual impairment. The distribution of visual impairment was higher in children from low-income families. 
Cost-effective strategies are needed to address these easily treatable causes of visual impairment. © 2017 Optometry Australia.
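The headline prevalence figure can be reproduced approximately from the reported numbers. The sketch below back-calculates the case count from the reported 0.341 per cent of 4,106 children and computes a Wilson score interval; the paper's own interval (0.187-0.571) was presumably obtained with an exact (Clopper-Pearson) method, so the bounds differ slightly. This is an illustration, not the authors' analysis code.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion.

    An exact binomial method (as the paper likely used) gives slightly
    different bounds, especially for rare events like this one.
    """
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

n = 4106                    # children examined (reported)
cases = round(0.00341 * n)  # back-calculated from the reported 0.341%
prevalence = 100 * cases / n
lo, hi = (100 * b for b in wilson_ci(cases, n))
```

With these inputs the upper bound lands near the reported 0.571 per cent, while the lower bound is somewhat higher than the exact-method 0.187 per cent, which is the expected behavior of the Wilson approximation at small counts.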

  20. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    PubMed

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was also elicited, even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.

  1. Effects of Auditory Stimuli in the Horizontal Plane on Audiovisual Integration: An Event-Related Potential Study

    PubMed Central

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirmed that audiovisual integration was also elicited, even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side. PMID:23799097

  2. How spatial abilities and dynamic visualizations interplay when learning functional anatomy with 3D anatomical models.

    PubMed

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material presentation formats, spatial abilities, and anatomical tasks. First, to understand the cognitive challenges a novice learner would be faced with when first exposed to 3D anatomical content, a six-step cognitive task analysis was developed. Following this, an experimental study was conducted to explore how presentation formats (dynamic vs. static visualizations) support learning of functional anatomy, and affect subsequent anatomical tasks derived from the cognitive task analysis. A second aim was to investigate the interplay between spatial abilities (spatial visualization and spatial relation) and presentation formats when the functional anatomy of a 3D scapula and the associated shoulder flexion movement are learned. Findings showed no main effect of the presentation formats on performances, but revealed the predictive influence of spatial visualization and spatial relation abilities on performance. However, an interesting interaction between presentation formats and spatial relation ability for a specific anatomical task was found. This result highlighted the influence of presentation formats when spatial abilities are involved as well as the differentiated influence of spatial abilities on anatomical tasks. © 2015 American Association of Anatomists.

  3. Radical “Visual Capture” Observed in a Patient with Severe Visual Agnosia

    PubMed Central

    Takaiwa, Akiko; Yoshimura, Hirokazu; Abe, Hirofumi; Terai, Satoshi

    2003-01-01

    We report the case of a 79-year-old female with visual agnosia due to brain infarction in the left posterior cerebral artery. She could recognize objects used in daily life rather well by touch (the number of objects correctly identified was 16 out of 20 presented objects), but she could not recognize them as well by vision (6 out of 20). In this case, it was expected that she would recognize them well when permitted to use touch and vision simultaneously. Our patient, however, performed poorly, producing 5 correct answers out of 20 in the Vision-and-Touch condition. It would be natural to think that visual capture functions when vision and touch provide contradictory information on concrete positions and shapes. However, in the present case, it functioned in spite of the visual deficit in recognizing objects. This should be called radical visual capture. By presenting detailed descriptions of her symptoms and neuropsychological and neuroradiological data, we clarify the characteristics of this type of capture. PMID:12719638

  4. Empirical Analysis of the Subjective Impressions and Objective Measures of Domain Scientists’ Visual Analytic Judgments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasgupta, Aritra; Burrows, Susannah M.; Han, Kyungsik

    2017-05-08

    Scientists often use specific data analysis and presentation methods familiar within their domain. But does high familiarity drive better analytical judgment? This question is especially relevant when familiar methods themselves can have shortcomings: many visualizations used conventionally for scientific data analysis and presentation do not follow established best practices. This necessitates new methods that might be unfamiliar yet prove to be more effective. But there is little empirical understanding of the relationships between scientists’ subjective impressions about familiar and unfamiliar visualizations and objective measures of their visual analytic judgments. To address this gap and to study these factors, we focus on visualizations used for comparison of climate model performance. We report on a comprehensive survey-based user study with 47 climate scientists and present an analysis of: (i) relationships among scientists’ familiarity, their perceived levels of comfort, confidence, accuracy, and objective measures of accuracy, and (ii) relationships among domain experience, visualization familiarity, and post-study preference.

  5. Improvement of visual acuity by refraction in a low-vision population.

    PubMed

    Sunness, Janet S; El Annan, Jaafar

    2010-07-01

    Refraction may often be overlooked in low-vision patients, because the main cause of vision decrease is not refractive, but rather is the result of underlying ocular disease. This retrospective study was carried out to determine how frequently and to what extent visual acuity is improved by refraction in a low-vision population. Cross-sectional study. Seven hundred thirty-nine low-vision patients seen for the first time. A database with all new low-vision patients seen from November 2005 through June 2008 recorded presenting visual acuity using an Early Treatment Diabetic Retinopathy Study chart; it also recorded the best-corrected visual acuity (BCVA) if it was 2 lines or more better than the presenting visual acuity. Retinoscopy was carried out on all patients, followed by manifest refraction. Improvement in visual acuity. Median presenting acuity was 20/80(-2) (interquartile range, 20/50-20/200). There was an improvement of 2 lines or more of visual acuity in 81 patients (11% of all patients), with 22 patients (3% of all patients) improving by 4 lines or more. There was no significant difference in age or in presenting visual acuity between the group that did not improve by refraction and the group that did improve. When stratified by diagnosis, the only 2 diagnoses with a significantly higher rate of improvement than the age-related macular degeneration group were myopic degeneration and progressive myopia (odds ratio, 4.8; 95% confidence interval [CI], 3.0-6.7) and status post-retinal detachment (odds ratio, 7.1; 95% CI, 5.2-9.0). For 5 patients (6% of those with improvement), the eye that was 1 line or more worse than the fellow eye at presentation became the eye that was 1 line or more better than the fellow eye after refraction. A significant improvement in visual acuity was attained by refraction in 11% of the new low-vision patients. Improvement was seen across diagnoses and the range of presenting visual acuity. 
The worse-seeing eye at presentation may become the better-seeing eye after refraction, so that the eye behind a balance lens should be refracted as well. Proprietary or commercial disclosure may be found after the references. Copyright 2010 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
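The "2 lines or more" improvement criterion maps naturally onto the logMAR scale, on which each line of an ETDRS chart corresponds to 0.1 log units. The sketch below shows that mapping; the study's exact scoring and rounding conventions are not stated, so the rounding rule here is an assumption.

```python
import math

def logmar(snellen_num, snellen_den):
    """logMAR of a Snellen fraction, e.g. 20/80 -> log10(80/20) = 0.60."""
    return math.log10(snellen_den / snellen_num)

def lines_gained(presenting, best_corrected):
    """ETDRS chart lines gained between two Snellen acuities.

    Each chart line is 0.1 logMAR; rounding to the nearest whole line
    is an assumed convention, not necessarily the study's rule.
    """
    delta = logmar(*presenting) - logmar(*best_corrected)
    return round(delta / 0.1)

# A patient improving from 20/80 to 20/50 gains 2 lines, meeting the
# study's criterion of 2 lines or more of improvement:
gain = lines_gained((20, 80), (20, 50))
```

On this scale, the reported median presenting acuity of 20/80 sits 0.6 logMAR below the 20/20 reference line.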

  6. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect.

    PubMed

    Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi

    2015-11-01

    Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant, is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except right hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, increased activity around 100 msec which decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was only increased to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. 
Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted, and if successful, McGurk perception occurs and cortical activity in left hemisphere further increases between 170 and 260 msec.

  7. Effect of Microgravity on Several Visual Functions During STS Shuttle Missions: Visual Function Tester-model 1 (VFT-1)

    NASA Technical Reports Server (NTRS)

    Oneal, Melvin R.; Task, H. Lee; Genco, Louis V.

    1992-01-01

    Viewgraphs on the effect of microgravity on several visual functions during STS shuttle missions are presented. The purpose, methods, and results are discussed. The Visual Function Tester-Model 1 (VFT-1) is used.

  8. The Effects of Varying Contextual Demands on Age-related Positive Gaze Preferences

    PubMed Central

    Noh, Soo Rim; Isaacowitz, Derek M.

    2015-01-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether one’s full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy–neutral and fearful–neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise, but was present where there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults’ positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. PMID:26030774

  9. The effects of varying contextual demands on age-related positive gaze preferences.

    PubMed

    Noh, Soo Rim; Isaacowitz, Derek M

    2015-06-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy-neutral and fearful-neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise but was present when there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults' positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. (c) 2015 APA, all rights reserved.

  10. DVA as a Diagnostic Test for Vestibulo-Ocular Reflex Function

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Appelbaum, Meghan

    2010-01-01

The vestibulo-ocular reflex (VOR) stabilizes vision on earth-fixed targets by eliciting eye movements in response to changes in head position. How well the eyes perform this task can be functionally measured by the dynamic visual acuity (DVA) test. We designed a passive, horizontal DVA test to specifically study acuity and reaction time when looking at different target locations. Visual acuity was compared among 12 subjects using a standard Landolt C wall chart, a computerized static (no rotation) acuity test, and a dynamic acuity test while oscillating at 0.8 Hz (+/-60 deg/s). In addition, five trials with yaw oscillation randomly presented a visual target in one of nine different locations, with the size and presentation duration of the visual target varying across trials. The results showed a significant difference between the static and dynamic threshold acuities, as well as a significant difference between visual targets presented in the horizontal plane and those in the vertical plane, when comparing accuracy of vision and reaction time of the response. Visual acuity increased in proportion to the size of the visual target and improved between 150 and 300 msec presentation durations. We conclude that dynamic visual acuity varies with target location, with acuity optimized for targets in the plane of rotation. This DVA test could be used as a functional diagnostic test for visual-vestibular and neuro-cognitive impairments by assessing both accuracy and reaction time to acquire visual targets.

  11. Age-equivalent top-down modulation during cross-modal selective attention.

    PubMed

    Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam

    2014-12-01

    Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.

  12. Frequency-band signatures of visual responses to naturalistic input in ferret primary visual cortex during free viewing.

    PubMed

    Sellers, Kristin K; Bennett, Davis V; Fröhlich, Flavio

    2015-02-19

    Neuronal firing responses in visual cortex reflect the statistics of visual input and emerge from the interaction with endogenous network dynamics. Artificial visual stimuli presented to animals in which the network dynamics were constrained by anesthetic agents or trained behavioral tasks have provided fundamental understanding of how individual neurons in primary visual cortex respond to input. In contrast, very little is known about the mesoscale network dynamics and their relationship to microscopic spiking activity in the awake animal during free viewing of naturalistic visual input. To address this gap in knowledge, we recorded local field potential (LFP) and multiunit activity (MUA) simultaneously in all layers of primary visual cortex (V1) of awake, freely viewing ferrets presented with naturalistic visual input (nature movie clips). We found that naturalistic visual stimuli modulated the entire oscillation spectrum; low frequency oscillations were mostly suppressed whereas higher frequency oscillations were enhanced. In average across all cortical layers, stimulus-induced change in delta and alpha power negatively correlated with the MUA responses, whereas sensory-evoked increases in gamma power positively correlated with MUA responses. The time-course of the band-limited power in these frequency bands provided evidence for a model in which naturalistic visual input switched V1 between two distinct, endogenously present activity states defined by the power of low (delta, alpha) and high (gamma) frequency oscillatory activity. Therefore, the two mesoscale activity states delineated in this study may define the degree of engagement of the circuit with the processing of sensory input. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. [Ventriloquism and audio-visual integration of voice and face].

    PubMed

    Yokosawa, Kazuhiko; Kanaya, Shoko

    2012-07-01

Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. Participants' auditory localization bias, called the ventriloquism effect, has revealed factors affecting the perceptual integration of audio-visual stimuli. However, many studies on audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. These results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study we report the contributions of a cognitive factor, namely the audio-visual congruency of speech, a factor that has often been underestimated in previous ventriloquism research. Thus, we investigated the contribution of speech congruency to the ventriloquism effect using a spoken utterance and two videos of a talking face. The salience of facial movements was also manipulated. As a result, when bilateral visual stimuli were presented in synchrony with a single voice, cross-modal speech congruency was found to have a significant impact on the ventriloquism effect. This result also indicated that more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference to auditory localization. This suggests that a greater flexibility in responding to multi-sensory environments exists than has been previously considered.

  14. Auditory emotional cues enhance visual perception.

    PubMed

    Zeelenberg, René; Bocanegra, Bruno R

    2010-04-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually we replicated the emotion-induced impairment found in other studies. Our results suggest emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.

  15. Supporting Visual Literacy in the School Library Media Center: Developmental, Socio-Cultural, and Experiential Considerations and Scenarios

    ERIC Educational Resources Information Center

    Cooper, Linda Z.

    2008-01-01

    Children are natural visual learners--they have been absorbing information visually since birth. They welcome opportunities to learn via images as well as to generate visual information themselves, and these opportunities present themselves every day. The importance of visual literacy can be conveyed through conversations and the teachable moment,…

  16. Brain processing of visual information during fast eye movements maintains motor performance.

    PubMed

    Panouillères, Muriel; Gaveau, Valérie; Socasau, Camille; Urquizar, Christian; Pélisson, Denis

    2013-01-01

    Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution--by shifting a visual target at saccade onset and blanking it at saccade offset--induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation.

  17. Brief Report: Vision in Children with Autism Spectrum Disorder: What Should Clinicians Expect?

    ERIC Educational Resources Information Center

    Anketell, Pamela M.; Saunders, Kathryn J.; Gallagher, Stephen M.; Bailey, Clare; Little, Julie-Anne

    2015-01-01

    Anomalous visual processing has been described in individuals with autism spectrum disorder (ASD) but relatively few studies have profiled visual acuity (VA) in this population. The present study describes presenting VA in children with ASD (n = 113) compared to typically developing controls (n = 206) and best corrected visual acuity (BCVA) in a…

  18. Effect of Visual Field Presentation on Action Planning (Estimating Reach) in Children

    ERIC Educational Resources Information Center

    Gabbard, Carl; Cordova, Alberto

    2012-01-01

    In this article, the authors examined the effects of target information presented in different visual fields (lower, upper, central) on estimates of reach via use of motor imagery in children (5-11 years old) and young adults. Results indicated an advantage for estimating reach movements for targets placed in lower visual field (LoVF), with all…

  19. Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis

    ERIC Educational Resources Information Center

    Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.

    2010-01-01

    We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…
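The signal detection analysis named in this record's title separates perceptual sensitivity from response bias. As an illustrative sketch only (the trial counts below are invented, not the study's data), sensitivity d′ under the standard equal-variance Gaussian model is the difference between the z-transformed hit rate and false-alarm rate:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian d', with a log-linear correction
    (add 0.5 to each count) so rates of 0 or 1 stay finite."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical observer: 45 hits / 5 misses on target-present trials,
# 10 false alarms / 40 correct rejections on target-absent trials.
print(d_prime(45, 5, 10, 40))
```

A d′ near zero indicates chance-level discrimination; larger values indicate greater perceptual sensitivity independent of the observer's criterion.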

  20. Evaluation of a visual risk communication tool: effects on knowledge and perception of blood transfusion risk.

    PubMed

    Lee, D H; Mehta, M D

    2003-06-01

    Effective risk communication in transfusion medicine is important for health-care consumers, but understanding the numerical magnitude of risks can be difficult. The objective of this study was to determine the effect of a visual risk communication tool on the knowledge and perception of transfusion risk. Laypeople were randomly assigned to receive transfusion risk information with either a written or a visual presentation format for communicating and comparing the probabilities of transfusion risks relative to other hazards. Knowledge of transfusion risk was ascertained with a multiple-choice quiz and risk perception was ascertained by psychometric scaling and principal components analysis. Two-hundred subjects were recruited and randomly assigned. Risk communication with both written and visual presentation formats increased knowledge of transfusion risk and decreased the perceived dread and severity of transfusion risk. Neither format changed the perceived knowledge and control of transfusion risk, nor the perceived benefit of transfusion. No differences in knowledge or risk perception outcomes were detected between the groups randomly assigned to written or visual presentation formats. Risk communication that incorporates risk comparisons in either written or visual presentation formats can improve knowledge and reduce the perception of transfusion risk in laypeople.

  1. Effects of visual span on reading speed and parafoveal processing in eye movements during sentence reading.

    PubMed

    Risse, Sarah

    2014-07-15

The visual span (or "uncrowded window"), which limits the sensory information on each fixation, has been shown to determine reading speed in tasks involving rapid serial visual presentation of single words. The present study investigated whether this is also true for fixation durations during sentence reading, when all words are presented at the same time and parafoveal preview of words prior to fixation typically reduces later word-recognition times. If so, a larger visual span may allow more efficient parafoveal processing and thus faster reading. In order to test this hypothesis, visual span profiles (VSPs) were collected from 60 participants and related to data from an eye-tracking reading experiment. The results confirmed a positive relationship between the readers' VSPs and fixation-based reading speed. However, this relationship was not determined by parafoveal processing. There was no evidence that individual differences in VSPs predicted differences in parafoveal preview benefit. Nevertheless, preview benefit correlated with reading speed, suggesting an independent effect on oculomotor control during reading. In summary, the present results indicate a more complex relationship between the visual span, parafoveal processing, and reading speed than initially assumed. © 2014 ARVO.

  2. Light Video Game Play is Associated with Enhanced Visual Processing of Rapid Serial Visual Presentation Targets.

    PubMed

    Howard, Christina J; Wilding, Robert; Guest, Duncan

    2017-02-01

There is mixed evidence that video game players (VGPs) may demonstrate better performance in perceptual and attentional tasks than non-VGPs (NVGPs). The rapid serial visual presentation task is one such case, where observers respond to two successive targets embedded within a stream of serially presented items. We tested light VGPs (LVGPs) and NVGPs on this task. LVGPs were better at correct identification of second targets whether or not they were also attempting to respond to the first target. This performance benefit seen for LVGPs suggests enhanced visual processing for briefly presented stimuli even with only very moderate game play. Observers were less accurate at discriminating the orientation of a second target within the stream if it occurred shortly after presentation of the first target; that is to say, they were subject to the attentional blink (AB). We find no evidence for any reduction in the AB in LVGPs compared with NVGPs.

  3. Interactive Visualization of Complex Seismic Data and Models Using Bokeh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Chengping; Ammon, Charles J.; Maceira, Monica

Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using the open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach has minimal requirements on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide an effective and efficient approach to exploring large data sets and models.

  4. View-Dependent Streamline Deformation and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tong, Xin; Edwards, John; Chen, Chun-Ming

Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few risks losing important features. We propose a new streamline exploration approach that visually manipulates cluttered streamlines by pulling visible layers apart and revealing the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual clutter when visualizing 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By using a lens metaphor of different shapes to select the transition zone of the targeted area interactively, users can move their focus and examine the vector or tensor field freely.

  5. Impulse processing: A dynamical systems model of incremental eye movements in the visual world paradigm

    PubMed Central

    Kukona, Anuenue; Tabor, Whitney

    2011-01-01

    The visual world paradigm presents listeners with a challenging problem: they must integrate two disparate signals, the spoken language and the visual context, in support of action (e.g., complex movements of the eyes across a scene). We present Impulse Processing, a dynamical systems approach to incremental eye movements in the visual world that suggests a framework for integrating language, vision, and action generally. Our approach assumes that impulses driven by the language and the visual context impinge minutely on a dynamical landscape of attractors corresponding to the potential eye-movement behaviors of the system. We test three unique predictions of our approach in an empirical study in the visual world paradigm, and describe an implementation in an artificial neural network. We discuss the Impulse Processing framework in relation to other models of the visual world paradigm. PMID:21609355

  6. Evaluation of Visualization Software

    NASA Technical Reports Server (NTRS)

    Globus, Al; Uselton, Sam

    1995-01-01

    Visualization software is widely used in scientific and engineering research. But computed visualizations can be very misleading, and the errors are easy to miss. We feel that the software producing the visualizations must be thoroughly evaluated and the evaluation process as well as the results must be made available. Testing and evaluation of visualization software is not a trivial problem. Several methods used in testing other software are helpful, but these methods are (apparently) often not used. When they are used, the description and results are generally not available to the end user. Additional evaluation methods specific to visualization must also be developed. We present several useful approaches to evaluation, ranging from numerical analysis of mathematical portions of algorithms to measurement of human performance while using visualization systems. Along with this brief survey, we present arguments for the importance of evaluations and discussions of appropriate use of some methods.

  7. Interactive Visualization of Complex Seismic Data and Models Using Bokeh

    DOE PAGES

    Chai, Chengping; Ammon, Charles J.; Maceira, Monica; ...

    2018-02-14

Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using the open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach has minimal requirements on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide an effective and efficient approach to exploring large data sets and models.

  8. Helmet-mounted display systems for flight simulation

    NASA Technical Reports Server (NTRS)

    Haworth, Loren A.; Bucher, Nancy M.

    1989-01-01

    Simulation scientists are continually improving simulation technology with the goal of more closely replicating the physical environment of the real world. The presentation or display of visual information is one area in which recent technical improvements have been made that are fundamental to conducting simulated operations close to the terrain. Detailed and appropriate visual information is especially critical for nap-of-the-earth helicopter flight simulation where the pilot maintains an 'eyes-out' orientation to avoid obstructions and terrain. This paper describes visually coupled wide field of view helmet-mounted display (WFOVHMD) system technology as a viable visual presentation system for helicopter simulation. Tradeoffs associated with this mode of presentation as well as research and training applications are discussed.

  9. The ophthalmic natural history of paediatric craniopharyngioma: a long-term review.

    PubMed

    Drimtzias, Evangelos; Falzon, Kevin; Picton, Susan; Jeeva, Irfan; Guy, Danielle; Nelson, Olwyn; Simmons, Ian

    2014-12-01

We present our long-term experience of monitoring visual function in children with craniopharyngioma. Our study analyses all paediatric patients with craniopharyngioma younger than 16 at the time of diagnosis and represents a series of predominantly sub-totally resected tumours. Visual data of multiple modalities were collected for the paediatric patients. Twenty patients were surveyed. Poor prognostic indicators of visual outcome and rate of recurrence were assessed. Severe visual loss and papilledema at the time of diagnosis were more common in children under the age of 6. In our study, visual signs, tumour calcification, and optic disc atrophy at presentation were predictors of poor visual outcome, with the first two applying only in children younger than 6. In contrast with previous reports, preoperative visual field (VF) defects and type of surgery were not documented as prognostic indicators of poor postoperative visual acuity (VA) and VF. Contrary to previous reports, calcification at diagnosis, type of surgery, and preoperative VF defects were not found to be associated with tumour recurrence. Local recurrence is common. Younger age at presentation is associated with a tendency to recur. Magnetic resonance imaging (MRI) remains the recommended means of follow-up in patients with craniopharyngioma.

  10. Is sensorimotor BCI performance influenced differently by mono, stereo, or 3-D auditory feedback?

    PubMed

    McCreadie, Karl A; Coyle, Damien H; Prasad, Girijesh

    2014-05-01

Imagination of movement can be used as a control method for a brain-computer interface (BCI), allowing communication for the physically impaired. Visual feedback within such a closed-loop system excludes those with visual problems, and hence there is a need for alternative sensory feedback pathways. In the context of substituting the visual channel with the auditory channel, this study aims to add to the limited evidence that it is possible to substitute visual feedback with its auditory equivalent, and to assess the impact this has on BCI performance. Secondly, the study aims to determine for the first time whether the type of auditory feedback influences motor imagery performance significantly. Auditory feedback is presented using a stepped approach of single (mono), double (stereo), and multiple (vector base amplitude panning as an audio game) loudspeaker arrangements. Visual feedback involves a ball-basket paradigm and a spaceship game. Each session consists of either auditory or visual feedback only, with runs of each type of feedback presentation method applied in each session. Results from seven subjects across five sessions of each feedback type (visual, auditory) (10 sessions in total) show that auditory feedback is a suitable substitute for the visual equivalent and that there are no statistical differences between the types of auditory feedback presented across five sessions.

  11. Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy

    PubMed Central

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life. PMID:24586651

  12. Visual speech perception in foveal and extrafoveal vision: further implications for divisions in hemispheric projections.

    PubMed

    Jordan, Timothy R; Sheen, Mercedes; Abedipour, Lily; Paterson, Kevin B

    2014-01-01

    When observing a talking face, it has often been argued that visual speech to the left and right of fixation may produce differences in performance due to divided projections to the two cerebral hemispheres. However, while it seems likely that such a division in hemispheric projections exists for areas away from fixation, the nature and existence of a functional division in visual speech perception at the foveal midline remains to be determined. We investigated this issue by presenting visual speech in matched hemiface displays to the left and right of a central fixation point, either exactly abutting the foveal midline or else located away from the midline in extrafoveal vision. The location of displays relative to the foveal midline was controlled precisely using an automated, gaze-contingent eye-tracking procedure. Visual speech perception showed a clear right hemifield advantage when presented in extrafoveal locations but no hemifield advantage (left or right) when presented abutting the foveal midline. Thus, while visual speech observed in extrafoveal vision appears to benefit from unilateral projections to left-hemisphere processes, no evidence was obtained to indicate that a functional division exists when visual speech is observed around the point of fixation. Implications of these findings for understanding visual speech perception and the nature of functional divisions in hemispheric projection are discussed.

  13. Serial and semantic encoding of lists of words in schizophrenia patients with visual hallucinations.

    PubMed

    Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S

    2011-03-30

    Previous research has suggested that visual hallucinations in schizophrenia are associated with abnormal salience of visual mental images. Since visual imagery is used as a mnemonic strategy to learn lists of words, increased visual imagery might impede the other commonly used strategies of serial and semantic encoding. We had previously published data on the serial and semantic strategies implemented by patients when learning lists of concrete words with different levels of semantic organisation (Brébion et al., 2004). In this paper we present a re-analysis of these data, aiming to investigate the associations between learning strategies and visual hallucinations. Results show that the patients with visual hallucinations presented less serial clustering in the non-organisable list than the other patients. In the semantically organisable list with typical instances, they presented both less serial and less semantic clustering than the other patients. Thus, patients with visual hallucinations demonstrate reduced use of serial and semantic encoding in the lists made up of fairly familiar concrete words, which enable the formation of mental images. Although these results are preliminary, we propose that this different processing of the lists stems from the abnormal salience of the mental images such patients experience from the word stimuli. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  14. Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear

    PubMed Central

    Clevis, Krien; Hagoort, Peter

    2011-01-01

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an otherwise neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information. PMID:20530540

  15. Invisible Mars: New Visuals for Communicating MAVEN's Story

    NASA Astrophysics Data System (ADS)

    Shupla, C. B.; Ali, N. A.; Jones, A. P.; Mason, T.; Schneider, N. M.; Brain, D. A.; Blackwell, J.

    2016-12-01

    Invisible Mars tells the story of Mars' evolving atmosphere, through a script and a series of visuals as a live presentation. Created for Science-On-A-Sphere, the presentation has also been made available to planetariums, and is being expanded to other platforms. The script has been updated to include results from the Mars Atmosphere and Volatile Evolution Mission (MAVEN), and additional visuals have been produced. This poster will share the current Invisible Mars resources available and the plans to further disseminate this presentation.

  16. Salient sounds activate human visual cortex automatically

    PubMed Central

    McDonald, John J.; Störmer, Viola S.; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A.

    2013-01-01

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, the present study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2, 3, and 4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of co-localized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task. PMID:23699530

  17. Visualization of regulations to support design and quality control--a long-term study.

    PubMed

    Blomé, Mikael

    2012-01-01

    The aim of the study was to visualize design regulations of furniture by means of interactive technology based on earlier studies and practical examples. The usage of the visualized regulations was evaluated on two occasions: at the start, when the first set of regulations was presented, and after six years of usage of all regulations. The visualized regulations were the result of a design process involving experts and potential users in collaboration with IKEA of Sweden AB. The evaluations by the different users showed a very positive response to using visualized regulations. The participative approach, combining expertise in specific regulations with visualization of guidelines, resulted in clear presentations of important regulations and positive attitudes among the users. These kinds of visualizations have proved to be applicable in a variety of product areas at IKEA, with a potential for further dissemination. It is likely that the approaches to design and visualized regulations in this case study could function in other branches.

  18. Fragile visual short-term memory is an object-based and location-specific store.

    PubMed

    Pinto, Yaïr; Sligte, Ilja G; Shapiro, Kimron L; Lamme, Victor A F

    2013-08-01

    Fragile visual short-term memory (FM) is a recently discovered form of visual short-term memory. Evidence suggests that it provides rich and high-capacity storage, like iconic memory, yet it exists, without interference, almost as long as visual working memory. In the present study, we sought to unveil the functional underpinnings of this memory storage. We found that FM is only completely erased when the new visual scene appears at the same location and consists of the same objects as the to-be-recalled information. This result has two important implications: First, it shows that FM is an object- and location-specific store, and second, it suggests that FM might be used in everyday life when the presentation of visual information is appropriately designed.

  19. The schemes and methods for producing of the visual security features used in the color hologram stereography

    NASA Astrophysics Data System (ADS)

    Lushnikov, D. S.; Zherdev, A. Y.; Odinokov, S. B.; Markin, V. V.; Smirnov, A. V.

    2017-05-01

    This article describes the visual security elements used in color holographic stereograms (three-dimensional colored security holograms) and methods for their production. These elements include color microtext, color-hidden images, and horizontal and vertical flip-flop effects achieved by changing color and image. The article also presents variants of optical systems that allow the visual security elements to be recorded as part of the holographic stereograms, along with methods for solving the optical problems that arise when recording these elements. Perceptual features of the visual security elements relevant to verifying security holograms by means of these elements are also noted. The work was partially funded under the Agreement with the RF Ministry of Education and Science № 14.577.21.0197, grant RFMEFI57715X0197.

  20. Effects of visual attention on chromatic and achromatic detection sensitivities.

    PubMed

    Uchikawa, Keiji; Sato, Masayuki; Kuwamura, Keiko

    2014-05-01

    Visual attention has a significant effect on various visual functions, such as response time, detection and discrimination sensitivity, and color appearance. It has been suggested that visual attention may affect visual functions in the early visual pathways. In this study we examined selective effects of visual attention on the sensitivities of the chromatic and achromatic pathways to clarify whether visual attention modifies responses in the early visual system. We used a dual-task paradigm in which the observer detected a peripheral test stimulus presented at 4 deg eccentricity while concurrently carrying out an attention task in the central visual field. Experiment 1 confirmed that the central attention task reduced peripheral spectral sensitivities more for short and long wavelengths than for middle wavelengths, so that visual attention changed the shape of the spectral sensitivity function. This indicated that visual attention affected the chromatic response more strongly than the achromatic response. Experiment 2 showed that, in the dual-task condition, detection thresholds increased more in the red-green and yellow-blue chromatic directions than in the white-black achromatic direction. In experiment 3 we showed that the peripheral threshold elevations depended on the combination of color directions of the central and peripheral stimuli. Since chromatic and achromatic responses are processed separately in the early visual pathways, the present results provide additional evidence that visual attention affects responses in the early visual pathways.

  1. Default Mode Network (DMN) Deactivation during Odor-Visual Association

    PubMed Central

    Karunanayaka, Prasanna R.; Wilson, Donald A.; Tobia, Michael J.; Martinez, Brittany; Meadowcroft, Mark; Eslinger, Paul J.; Yang, Qing X.

    2017-01-01

    Default mode network (DMN) deactivation has been shown to be functionally relevant for goal-directed cognition. In this study, we investigated the DMN’s role during olfactory processing using two complementary functional magnetic resonance imaging (fMRI) paradigms with identical timing, visual-cue stimulation and response monitoring protocols. Twenty-nine healthy, non-smoking, right-handed adults (mean age = 26±4 yrs., 16 females) completed an odor-visual association fMRI paradigm that had two alternating odor+visual and visual-only trial conditions. During odor+visual trials, a visual cue was presented simultaneously with an odor, while during visual-only trial conditions the same visual cue was presented alone. Eighteen of the 29 participants (mean age = 27.0 ± 6.0 yrs., 11 females) also took part in a control no-odor fMRI paradigm that consisted of visual-only trial conditions which were identical to the visual-only trials in the odor-visual association paradigm. We used Independent Component Analysis (ICA), extended unified structural equation modeling (euSEM), and psychophysiological interaction (PPI) to investigate the interplay between the DMN and olfactory network. In the odor-visual association paradigm, DMN deactivation was evoked by both the odor+visual and visual-only trial conditions. In contrast, the visual-only trials in the no-odor paradigm did not evoke consistent DMN deactivation. In the odor-visual association paradigm, the euSEM and PPI analyses identified a directed connectivity between the DMN and olfactory network which was significantly different between odor+visual and visual-only trial conditions. The results support a strong interaction between the DMN and olfactory network and highlight the DMN’s role in task-evoked brain activity and behavioral responses during olfactory processing. PMID:27785847

  2. Contingent capture of involuntary visual spatial attention does not differ between normally hearing children and proficient cochlear implant users.

    PubMed

    Kamke, Marc R; Van Luyn, Jeanette; Constantinescu, Gabriella; Harris, Jill

    2014-01-01

    Evidence suggests that deafness-induced changes in visual perception, cognition and attention may compensate for a hearing loss. Such alterations, however, may also negatively influence adaptation to a cochlear implant. This study investigated whether involuntary attentional capture by salient visual stimuli is altered in children who use a cochlear implant. Thirteen experienced implant users (aged 8-16 years) and age-matched normally hearing children were presented with a rapid sequence of simultaneous visual and auditory events. Participants were tasked with detecting numbers presented in a specified color and identifying a change in the tonal frequency whilst ignoring irrelevant visual distractors. Compared to visual distractors that did not possess the target-defining characteristic, target-colored distractors were associated with a decrement in visual performance (response time and accuracy), demonstrating a contingent capture of involuntary attention. Visual distractors did not, however, impair auditory task performance. Importantly, detection performance for the visual and auditory targets did not differ between the groups. These results suggest that proficient cochlear implant users demonstrate normal capture of visuospatial attention by stimuli that match top-down control settings.

  3. Sensitivity to timing and order in human visual cortex.

    PubMed

    Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2015-03-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.

  4. Does Seeing Ice Really Feel Cold? Visual-Thermal Interaction under an Illusory Body-Ownership

    PubMed Central

    Kanaya, Shoko; Matsushima, Yuka; Yokosawa, Kazuhiko

    2012-01-01

    Although visual information seems to affect thermal perception (e.g. red color is associated with heat), previous studies have failed to demonstrate the interaction between visual and thermal senses. However, it has been reported that humans feel an illusory thermal sensation in conjunction with an apparently-thermal visual stimulus placed on a prosthetic hand in the rubber hand illusion (RHI), wherein an individual feels that a prosthetic (rubber) hand belongs to him/her. This study tests the possibility that ownership of the body surface on which a visual stimulus is placed enhances the likelihood of a visual-thermal interaction. We orthogonally manipulated three variables: induced hand-ownership, visually-presented thermal information, and tactually-presented physical thermal information. Results indicated that the sight of an apparently-thermal object on a rubber hand that is illusorily perceived as one's own hand affects thermal judgments about the object physically touching this hand. This effect was not observed without the RHI. The importance of ownership of a body part that is touched by the visual object on the visual-thermal interaction is discussed. PMID:23144814

  6. Mobile device geo-localization and object visualization in sensor networks

    NASA Astrophysics Data System (ADS)

    Lemaire, Simon; Bodensteiner, Christoph; Arens, Michael

    2014-10-01

    In this paper we present a method to visualize geo-referenced objects on modern smartphones using a multi-functional application design. The application applies different localization and visualization methods, including use of the smartphone camera image. The presented application copes well with different scenarios. A generic application work flow and augmented reality visualization techniques are described. The feasibility of the approach is experimentally validated using an online desktop selection application in a network with a modern off-the-shelf smartphone. Applications are widespread and include, for instance, crisis and disaster management or military applications.

  7. Integration of visual and motion cues for simulator requirements and ride quality investigation. [computerized simulation of aircraft landing, visual perception of aircraft pilots

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1975-01-01

    Preliminary tests and evaluations of pilot performance during landing (flight paths) using computer-generated images (video tapes) are presented. Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical study abstracts on human response to gravity variations without visual cues, the effects of acceleration stimuli on the semicircular canals, neurons affecting eye movements, and vestibular tests.

  8. Purtscher's retinopathy associated with acute pancreatitis.

    PubMed

    Hamp, Ania M; Chu, Edward; Slagle, William S; Hamp, Robert C; Joy, Jeffrey T; Morris, Robert W

    2014-02-01

    Purtscher's retinopathy is a rare condition associated with complement-activating systemic diseases such as acute pancreatitis. After pancreatic injury or inflammation, proteases such as trypsin activate the complement system and can potentially cause coagulation and leukoembolization of retinal precapillary arterioles. Specifically, intermediate-sized emboli are small enough to pass through larger arteries yet large enough to remain lodged in precapillary arterioles and cause the clinical appearance of Purtscher's retinopathy. This pathology may present with optic nerve edema, impaired visual acuity, visual field loss, as well as retinal findings such as cotton-wool spots, retinal hemorrhage, artery attenuation, venous dilation, and Purtscher flecken. A 57-year-old white man presented with an acute onset of visual field scotomas and decreased visual acuity 1 week after being hospitalized for acute pancreatitis. The retinal examination revealed multiple regions of discrete retinal whitening surrounding the disk, extending through the macula bilaterally, as well as bilateral optic nerve hemorrhages. The patient identified paracentral bilateral visual field defects on Amsler Grid testing, which was confirmed with subsequent Humphrey visual field analysis. Although the patient presented with an atypical underlying etiology, he exhibited classic retinal findings for Purtscher's retinopathy. After 2 months, best corrected visual acuity improved and the retinal whitening was nearly resolved; however, bilateral paracentral visual field defects remained. Purtscher's retinopathy has a distinctive clinical presentation and is typically associated with thoracic trauma but may be a sequela of nontraumatic systemic disease such as acute pancreatitis. Patients diagnosed with acute pancreatitis should have an eye examination to rule out Purtscher's retinopathy. Although visual improvement is possible, patients should be educated that there may be permanent ocular sequelae.

  9. Visual Working Memory Is Independent of the Cortical Spacing Between Memoranda.

    PubMed

    Harrison, William J; Bays, Paul M

    2018-03-21

    The sensory recruitment hypothesis states that visual short-term memory is maintained in the same visual cortical areas that initially encode a stimulus' features. Although it is well established that the distance between features in visual cortex determines their visibility, a limitation known as crowding, it is unknown whether short-term memory is similarly constrained by the cortical spacing of memory items. Here, we investigated whether the cortical spacing between sequentially presented memoranda affects the fidelity of memory in humans (of both sexes). In a first experiment, we varied cortical spacing by taking advantage of the log-scaling of visual cortex with eccentricity, presenting memoranda in peripheral vision sequentially along either the radial or tangential visual axis with respect to the fovea. In a second experiment, we presented memoranda sequentially either within or beyond the critical spacing of visual crowding, a distance within which visual features cannot be perceptually distinguished due to their nearby cortical representations. In both experiments and across multiple measures, we found strong evidence that the ability to maintain visual features in memory is unaffected by cortical spacing. These results indicate that the neural architecture underpinning working memory has properties inconsistent with the known behavior of sensory neurons in visual cortex. Instead, the dissociation between perceptual and memory representations supports a role of higher cortical areas such as posterior parietal or prefrontal regions or may involve an as yet unspecified mechanism in visual cortex in which stimulus features are bound to their temporal order. SIGNIFICANCE STATEMENT Although much is known about the resolution with which we can remember visual objects, the cortical representation of items held in short-term memory remains contentious. A popular hypothesis suggests that memory of visual features is maintained via the recruitment of the same neural architecture in sensory cortex that encodes stimuli. We investigated this claim by manipulating the spacing in visual cortex between sequentially presented memoranda such that some items shared cortical representations more than others while preventing perceptual interference between stimuli. We found clear evidence that short-term memory is independent of the intracortical spacing of memoranda, revealing a dissociation between perceptual and memory representations. Our data indicate that working memory relies on different neural mechanisms from sensory perception. Copyright © 2018 Harrison and Bays.

  10. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    PubMed

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
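
    The adaptive execution module described above dynamically selects between full visual-inertial odometry and a cheaper optical-flow-based visual odometry. As a minimal illustrative sketch only (the thresholds, field names, and decision criteria here are hypothetical, not the authors' implementation), such a per-frame selection policy might look like:

    ```python
    # Sketch of an adaptive execution module: run the cheap optical-flow
    # tracker when motion is gentle and tracking is stable; otherwise fall
    # back to full visual-inertial odometry. All thresholds are assumptions.

    from dataclasses import dataclass

    @dataclass
    class FrameStats:
        mean_flow_px: float    # average optical-flow magnitude (pixels)
        tracked_ratio: float   # fraction of features tracked from last frame
        imu_accel_var: float   # recent accelerometer variance

    def select_odometry(stats: FrameStats,
                        max_flow_px: float = 8.0,
                        min_tracked: float = 0.6,
                        max_accel_var: float = 0.5) -> str:
        """Return which pipeline to run for the current frame."""
        if (stats.mean_flow_px <= max_flow_px
                and stats.tracked_ratio >= min_tracked
                and stats.imu_accel_var <= max_accel_var):
            return "optical_flow_vo"           # fast path
        return "visual_inertial_odometry"      # accurate path

    # Gentle, well-tracked motion -> fast path
    print(select_odometry(FrameStats(2.0, 0.9, 0.1)))   # optical_flow_vo
    # Aggressive motion -> full VIO
    print(select_odometry(FrameStats(15.0, 0.4, 1.2)))  # visual_inertial_odometry
    ```

    The "level-set adaptive policies" in the abstract presumably correspond to different threshold settings in such a selector, trading tracking accuracy against computation time.
    
    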

  11. A sLORETA study for gaze-independent BCI speller.

    PubMed

    Xingwei An; Jinwen Wei; Shuang Liu; Dong Ming

    2017-07-01

    EEG-based BCI (brain-computer interface) spellers, especially gaze-independent BCI spellers, have become a hot topic in recent years. They provide a direct, non-muscular spelling device for people with severe motor impairments and limited gaze movement. In the rapidly presented paradigms used by such BCI speller applications, the brain must conduct both stimulus-driven and stimulus-related attention, yet few researchers have studied the mechanism of the brain's response to such rapid presentation. In this study, we compared the distribution of brain activation in visual, auditory, and audio-visual combined stimulus paradigms using sLORETA (standardized low-resolution brain electromagnetic tomography). Between-group comparisons showed the importance of both visual and auditory stimuli in the audio-visual combined paradigm: both contribute to the activation of brain regions, with visual stimuli being the predominant ones. Visual stimulus-related brain regions were mainly located in the parietal and occipital lobes, whereas responses in the frontal-temporal lobes may have been caused by auditory stimuli. These regions played an important role in audio-visual bimodal paradigms. These new findings are important for future studies of ERP spellers as well as of the mechanisms underlying rapidly presented stimuli.

  12. Beyond Ball-and-Stick: Students' Processing of Novel STEM Visualizations

    ERIC Educational Resources Information Center

    Hinze, Scott R.; Rapp, David N.; Williamson, Vickie M.; Shultz, Mary Jane; Deslongchamps, Ghislain; Williamson, Kenneth C.

    2013-01-01

    Students are frequently presented with novel visualizations introducing scientific concepts and processes normally unobservable to the naked eye. Despite being unfamiliar, students are expected to understand and employ the visualizations to solve problems. Domain experts exhibit more competency than novices when using complex visualizations, but…

  13. The effect of visual and verbal modes of presentation on children's retention of images and words

    NASA Astrophysics Data System (ADS)

    Vasu, Ellen Storey; Howe, Ann C.

    This study tested the hypothesis that the use of two modes of presenting information to children has an additive memory effect for the retention of both images and words. Subjects were 22 first-grade and 22 fourth-grade children randomly assigned to visual and visual-verbal treatment groups. The visual-verbal group heard a description while observing an object; the visual group observed the same object but did not hear a description. Children were tested individually immediately after presentation of stimuli and two weeks later. They were asked to represent the information recalled through a drawing and an oral verbal description. In general, results supported the hypothesis and indicated, in addition, that children represent more information in iconic (pictorial) form than in symbolic (verbal) form. Strategies for using these results to enhance science learning at the elementary school level are discussed.

  14. Computationally Efficient Clustering of Audio-Visual Meeting Data

    NASA Astrophysics Data System (ADS)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion, using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this chapter can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.
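
    The compressed-domain idea described above — reusing the motion vectors the MPEG-4 encoder already computed, rather than decoding and re-analyzing full frames — can be sketched as follows. This is a toy illustration under assumed data layouts (per-macroblock motion vectors and hand-drawn participant regions), not the chapter's actual code:

    ```python
    # Sketch: estimate per-participant visual activity from MPEG-4 motion
    # vectors. The (H, W, 2) layout and the region mapping are assumptions
    # made for illustration.

    import numpy as np

    def visual_activity(motion_vectors: np.ndarray,
                        regions: dict) -> dict:
        """motion_vectors: (H, W, 2) array of per-macroblock (dx, dy) vectors.
        regions: participant name -> (row_slice, col_slice) of their image area.
        Returns the mean motion magnitude within each participant's region."""
        magnitude = np.hypot(motion_vectors[..., 0], motion_vectors[..., 1])
        return {name: float(magnitude[rows, cols].mean())
                for name, (rows, cols) in regions.items()}

    # Toy frame: participant A's half of the image is moving, B's is still.
    mv = np.zeros((4, 8, 2))
    mv[:, :4, 0] = 3.0  # horizontal motion on the left half
    activity = visual_activity(mv, {"A": (slice(None), slice(0, 4)),
                                    "B": (slice(None), slice(4, 8))})
    print(activity)  # A shows activity, B is idle
    ```

    Because the motion vectors come for free from the compressed bitstream, an activity measure like this costs only a few array operations per frame, which is what makes the approach computationally efficient.
    
    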

  15. Right hemispheric dominance and interhemispheric cooperation in gaze-triggered reflexive shift of attention.

    PubMed

    Okada, Takashi; Sato, Wataru; Kubota, Yasutaka; Toichi, Motomi; Murai, Toshiya

    2012-03-01

    The neural substrate for the processing of gaze remains unknown. The aim of the present study was to clarify which hemisphere dominates, and whether the two hemispheres cooperate, in the gaze-triggered reflexive shift of attention. Twenty-eight normal subjects were tested. Non-predictive gaze cues were presented in either unilateral or bilateral visual fields, and the subjects localized the target as quickly as possible. Reaction times (RT) were shorter when gaze cues were directed toward rather than away from targets, whichever visual field they were presented in. RT were shorter for left than for right visual field presentations. RT in mono-directional bilateral presentations were shorter than in either left-only or right-only presentations. When bi-directional bilateral cues were presented, RT were faster when valid cues appeared in the left rather than the right visual field. The right hemisphere thus appears to be dominant, and there is interhemispheric cooperation, in the gaze-triggered reflexive shift of attention. © 2012 The Authors. Psychiatry and Clinical Neurosciences © 2012 Japanese Society of Psychiatry and Neurology.

  16. Infants' Visual Localization of Visual and Auditory Targets.

    ERIC Educational Resources Information Center

    Bechtold, A. Gordon; And Others

    This study is an investigation of 2-month-old infants' abilities to visually localize visual and auditory peripheral stimuli. Each subject (N=40) was presented with 50 trials: 25 visual and 25 auditory. The infant was placed in a semi-upright infant seat positioned 122 cm from the center speaker of an arc formed by five loudspeakers. At…

  17. Does Differential Visual Exploration Contribute to Visual Memory Impairments in 22Q11.2 Microdeletion Syndrome?

    ERIC Educational Resources Information Center

    Bostelmann, M.; Glaser, B.; Zaharia, A.; Eliez, S.; Schneider, M.

    2017-01-01

    Background: Chromosome 22q11.2 microdeletion syndrome (22q11.2DS) is a genetic syndrome characterised by a unique cognitive profile. Individuals with the syndrome present several non-verbal deficits, including visual memory impairments and atypical exploration of visual information. In this study, we seek to understand how visual attention may…

  18. Design of smart home sensor visualizations for older adults.

    PubMed

    Le, Thai; Reeder, Blaine; Chung, Jane; Thompson, Hilaire; Demiris, George

    2014-01-01

    Smart home sensor systems provide a valuable opportunity to continuously and unobtrusively monitor older adult wellness. However, the density of sensor data can be challenging to visualize, especially for an older adult consumer with distinct user needs. We describe the design of sensor visualizations informed by interviews with older adults. The goal of the visualizations is to present sensor activity data to an older adult consumer audience in a way that supports both longitudinal detection of trends and on-demand display of activity details for any chosen day. The design process is grounded through participatory design with older adult interviews during a six-month pilot sensor study. Through a secondary analysis of interviews, we identified the visualization needs of older adults. We incorporated these needs with cognitive perceptual visualization guidelines and the emotional design principles of Norman to develop sensor visualizations. We present a design of sensor visualizations that integrates both temporal and spatial components of information. The visualization supports longitudinal detection of trends while allowing the viewer to inspect activity on a specific day. Appropriately designed visualizations for older adults not only provide insight into health and wellness, but also are a valuable resource to promote engagement within care.

  20. Visual body size norms and the under‐detection of overweight and obesity

    PubMed Central

    Robinson, E.

    2017-01-01

    Summary Objectives The weight status of men with overweight and obesity tends to be visually underestimated, but visual recognition of female overweight and obesity has not been formally examined. The aims of the present studies were to test whether people can accurately recognize both male and female overweight and obesity and to examine a visual norm‐based explanation for why weight status is underestimated. Methods The present studies examine whether both male and female overweight and obesity are visually underestimated (Study 1), whether body size norms predict when underestimation of weight status occurs (Study 2) and whether visual exposure to heavier body weights adjusts visual body size norms and results in underestimation of weight status (Study 3). Results The weight status of men and women with overweight and obesity was consistently visually underestimated (Study 1). Body size norms predicted underestimation of weight status (Study 2) and in part explained why visual exposure to heavier body weights caused underestimation of overweight (Study 3). Conclusions The under‐detection of overweight and obesity may have been in part caused by exposure to larger body sizes resulting in an upwards shift in the range of body sizes that are perceived as being visually ‘normal’. PMID:29479462

  1. Creating a Visualization Powerwall

    NASA Technical Reports Server (NTRS)

    Miller, B. H.; Lambert, J.; Zamora, K.

    1996-01-01

    From the Introduction: This paper presents the issues involved in constructing a Visualization Powerwall. For each hardware component, the requirements, options, and our solution are presented. This is followed by a short description of each pilot project. The summary presents current obstacles and options discovered along the way.

  2. Alpha-band rhythm modulation under the condition of subliminal face presentation: MEG study.

    PubMed

    Sakuraba, Satoshi; Kobayashi, Hana; Sakai, Shinya; Yokosawa, Koichi

    2013-01-01

    The human brain has two streams for processing visual information: a dorsal stream and a ventral stream. The negative potential N170, or its magnetic counterpart M170, is known as the face-specific signal originating from the ventral stream. It is possible to present a visual image unconsciously using continuous flash suppression (CFS), a visual masking technique based on binocular rivalry. In this work, magnetoencephalograms were recorded during presentation of three invisible images: face images, which are processed by the ventral stream; tool images, which could be processed by the dorsal stream; and a blank image. Alpha-band activities detected by sensors sensitive to M170 were compared. The alpha-band rhythm was suppressed more during presentation of face images than during presentation of the blank image (p=.028). The suppression persisted for about 1 s after the presentations ended. However, no significant difference was observed between tool and other images. These results suggest that the alpha-band rhythm can also be modulated by unconscious visual images.

  3. Brain plasticity in the adult: modulation of function in amblyopia with rTMS.

    PubMed

    Thompson, Benjamin; Mansouri, Behzad; Koski, Lisa; Hess, Robert F

    2008-07-22

    Amblyopia is a cortically based visual disorder caused by disruption of vision during a critical early developmental period. It is often thought to be a largely intractable problem in adult patients because of a lack of neuronal plasticity after this critical period [1]; however, recent advances have suggested that plasticity is still present in the adult amblyopic visual cortex [2-6]. Here, we present data showing that repetitive transcranial magnetic stimulation (rTMS) of the visual cortex can temporarily improve contrast sensitivity in the amblyopic visual cortex. The results indicate continued plasticity of the amblyopic visual system in adulthood and open the way for a potential new therapeutic approach to the treatment of amblyopia.

  4. A web-based solution for 3D medical image visualization

    NASA Astrophysics Data System (ADS)

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    We present a web-based 3D medical image visualization solution that enables interactive processing and visualization of large medical image data over the web. To improve efficiency, we adopt GPU-accelerated techniques to process images on the server side while rapidly transferring them to an HTML5-capable web browser on the client side. Compared to a traditional local visualization solution, ours does not require users to install extra software or to download the whole volume dataset from the PACS server. With this web-based design, users can access the 3D medical image visualization service wherever the internet is available.
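    A common pattern behind such client-server viewers (a sketch under assumptions, not the authors' system) is to reduce the data on the server and ship a compact, compressed payload that the HTML5 client decodes for progressive display. The function names and the flat grayscale byte format below are illustrative.

```python
import base64
import zlib

def prepare_slice(pixels, width, step=2):
    """Server side: downsample a flat grayscale slice (row-major list of
    0-255 ints) by keeping every `step`-th pixel in both axes, then
    compress and base64-encode it for transfer to a web client."""
    height = len(pixels) // width
    small = bytes(
        pixels[y * width + x]
        for y in range(0, height, step)
        for x in range(0, width, step)
    )
    return base64.b64encode(zlib.compress(small)).decode("ascii")

def decode_slice(payload):
    """Client-side counterpart: recover the raw downsampled bytes."""
    return zlib.decompress(base64.b64decode(payload))

# A 4x4 slice downsampled to 2x2 survives the round trip.
slice4 = list(range(16))
payload = prepare_slice(slice4, width=4, step=2)
assert decode_slice(payload) == bytes([0, 2, 8, 10])
```

    Serving a coarse slice first and refining on demand is what lets the browser stay interactive without downloading the full volume.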

  6. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    PubMed

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. 
Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices. Copyright © 2017 the authors.

  7. Astronomy for the Blind and Visually Impaired

    NASA Astrophysics Data System (ADS)

    Kraus, S.

    2016-12-01

    This article presents a number of ways of communicating astronomy topics, ranging from classical astronomy to modern astrophysics, to the blind and visually impaired. A major aim of these projects is to provide access which goes beyond the use of the tactile sense to improve knowledge transfer for blind and visually impaired students. The models presented here are especially suitable for young people of secondary school age.

  8. A STUDY OF THE EFFECTS OF PRESENTING INFORMATIVE SPEECHES WITH AND WITHOUT THE USE OF VISUAL AIDS TO VOLUNTARY ADULT AUDIENCES.

    ERIC Educational Resources Information Center

    BODENHAMER, SCHELL H.

    To determine the comparative amount of learning that occurred and the audience reaction to meeting effectiveness, a 20-minute informative speech, "The Weather," was presented with visual aids to 23 and without visual aids to 23 informal, voluntary, adult audiences. The audiences were randomly divided, and controls were used to assure identical…

  9. Implications of differences of echoic and iconic memory for the design of multimodal displays

    NASA Astrophysics Data System (ADS)

    Glaser, Daniel Shields

    It has been well documented that dual-task performance is more accurate when each task is based on a different sensory modality. It is also well documented that the memory for each sense has unequal durations, particularly visual (iconic) and auditory (echoic) sensory memory. In this dissertation I address whether differences in sensory memory (e.g. iconic vs. echoic) duration have implications for the design of a multimodal display. Since echoic memory persists for seconds, in contrast to iconic memory which persists only for milliseconds, one of my hypotheses was that in a visual-auditory dual task condition, performance will be better if the visual task is completed before the auditory task than vice versa. In Experiment 1 I investigated whether the ability to recall multi-modal stimuli is affected by recall order, with each mode being responded to separately. In Experiment 2, I investigated the effects of stimulus order and recall order on the ability to recall information from a multi-modal presentation. In Experiment 3 I investigated the effect of presentation order using a more realistic task. In Experiment 4 I investigated whether manipulating the presentation order of stimuli of different modalities improves humans' ability to combine the information from the two modalities in order to make decisions based on pre-learned rules. As hypothesized, accuracy was greater when visual stimuli were responded to first and auditory stimuli second. Also as hypothesized, performance was improved by not presenting both sequences at the same time, limiting the perceptual load. Contrary to my expectations, overall performance was better when a visual sequence was presented before the audio sequence. Though presenting a visual sequence prior to an auditory sequence lengthens the visual retention interval, it also provides time for visual information to be recoded into a more robust form without disruption. 
Experiment 4 demonstrated that decision making requiring the integration of visual and auditory information is enhanced by reducing workload and promoting a strategic use of echoic memory. A framework for predicting Experiment 1-4 results is proposed and evaluated.

  10. Interference, aging, and visuospatial working memory: the role of similarity.

    PubMed

    Rowe, Gillian; Hasher, Lynn; Turcotte, Josée

    2010-11-01

    Older adults' performance on working memory (WM) span tasks is known to be negatively affected by the buildup of proactive interference (PI) across trials. In verbal tasks, PI has been reduced and performance improved by presenting distinctive items across trials. In addition, reversing the order of trial presentation (i.e., starting with the longest sets first) has been shown to reduce PI in both verbal and visuospatial WM span tasks. We considered whether making each trial visually distinct would improve older adults' visuospatial WM performance, and whether combining the 2 PI-reducing manipulations, distinct trials and reversed order of presentation, would prove additive, thus providing even greater benefit. Forty-eight healthy older adults (age range = 60-77 years) completed 1 of 3 versions of a computerized Corsi block test. For 2 versions of the task, trials were either all visually similar or all visually distinct, and were presented in the standard ascending format (shortest set size first). In the third version, visually distinct trials were presented in a reverse order of presentation (longest set size first). Span scores were reliably higher in the ascending version for visually distinct compared with visually similar trials, F(1, 30) = 4.96, p = .03, η² = .14. However, combining distinct trials and a descending format proved no more beneficial than administering the descending format alone. Our findings suggest that a more accurate measurement of the visuospatial WM span scores of older adults (and possibly neuropsychological patients) might be obtained by reducing within-test interference.

  11. Designing Instructional Visuals; Theory, Composition, Implementation.

    ERIC Educational Resources Information Center

    Linker, Jerry Mac

    The use of visual media in the classroom contributes to the improvement of teaching and learning. The purpose of this handbook is to present a practical discussion of the principles involved in designing visuals that teach. The author first describes the essentials of communication applied to instructional visuals. He then analyzes the physical…

  12. Auditory and Visual Capture during Focused Visual Attention

    ERIC Educational Resources Information Center

    Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan

    2009-01-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets…

  13. Optimizing Visually-Assisted Listening Comprehension

    ERIC Educational Resources Information Center

    Kashani, Ahmad Sabouri; Sajjadi, Samad; Sohrabi, Mohammad Reza; Younespour, Shima

    2011-01-01

    The fact that visual aids such as pictures or graphs can lead to greater comprehension by language learners has been well established. Nonetheless, the order in which visuals are presented to listeners has received little attention. This study examined listening comprehension under a strategy of introducing visual information either prior to or during an audio…

  14. Visual Scripting.

    ERIC Educational Resources Information Center

    Halas, John

    Visual scripting is the coordination of words with pictures in sequence. This book presents the methods and viewpoints on visual scripting of fourteen film makers, from nine countries, who are involved in animated cinema; it contains concise examples of how a storybook and preproduction script can be prepared in visual terms; and it includes a…

  15. Newsmagazine Visuals and the 1988 Presidential Election.

    ERIC Educational Resources Information Center

    Moriarty, Sandra; Popovich, Mark

    A study examined newsmagazines' visual coverage of the 1988 election to determine if patterns of difference in the visual presentation of candidates existed. A content analysis examined all the visuals (photographs and illustrations) of the presidential and vice-presidential candidates printed in three national weekly newsmagazines--"U.S.…

  16. The Ecological Approach to Text Visualization.

    ERIC Educational Resources Information Center

    Wise, James A.

    1999-01-01

    Presents both theoretical and technical bases on which to build a "science of text visualization." The Spatial Paradigm for Information Retrieval and Exploration (SPIRE) text-visualization system, which images information from free-text documents as natural terrains, serves as an example of the "ecological approach" in its visual metaphor, its…

  17. Teaching the Visual Learner: The Use of Visual Summaries in Marketing Education

    ERIC Educational Resources Information Center

    Clarke, Irvine, III.; Flaherty, Theresa B.; Yankey, Michael

    2006-01-01

    Approximately 40% of college students are visual learners, preferring to be taught through pictures, diagrams, flow charts, timelines, films, and demonstrations. Yet marketing instruction remains heavily reliant on presenting content primarily through verbal cues such as written or spoken words. Without visual instruction, some students may be…

  18. A Bilateral Advantage for Storage in Visual Working Memory

    ERIC Educational Resources Information Center

    Umemoto, Akina; Drew, Trafton; Ester, Edward F.; Awh, Edward

    2010-01-01

    Various studies have demonstrated enhanced visual processing when information is presented across both visual hemifields rather than in a single hemifield (the "bilateral advantage"). For example, Alvarez and Cavanagh (2005) reported that observers were able to track twice as many moving visual stimuli when the tracked items were presented…

  19. Using Visual Organizers to Enhance EFL Instruction

    ERIC Educational Resources Information Center

    Kang, Shumin

    2004-01-01

    Visual organizers are visual frameworks such as figures, diagrams, charts, etc. used to present structural knowledge spatially in a given area with the intention of enhancing comprehension and learning. Visual organizers are effective in terms of helping to elicit, explain, and communicate information because they can clarify complex concepts into…

  20. How Temporal and Spatial Aspects of Presenting Visualizations Affect Learning about Locomotion Patterns

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter

    2012-01-01

    Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…

  1. Suggested Activities to Use With Children Who Present Symptoms of Visual Perception Problems, Elementary Level.

    ERIC Educational Resources Information Center

    Washington County Public Schools, Washington, PA.

    Symptoms displayed by primary age children with learning disabilities are listed; perceptual handicaps are explained. Activities are suggested for developing visual perception and perception involving motor activities. Also suggested are activities to develop body concept, visual discrimination and attentiveness, visual memory, and figure ground…

  2. Physical Models that Provide Guidance in Visualization Deconstruction in an Inorganic Context

    ERIC Educational Resources Information Center

    Schiltz, Holly K.; Oliver-Hoyo, Maria T.

    2012-01-01

    Three physical model systems have been developed to help students deconstruct the visualization needed when learning symmetry and group theory. The systems provide students with physical and visual frames of reference to facilitate the complex visualization involved in symmetry concepts. The permanent reflection plane demonstration presents an…

  3. Visual Stress in Adults with and without Dyslexia

    ERIC Educational Resources Information Center

    Singleton, Chris; Trotter, Susannah

    2005-01-01

    The relationship between dyslexia and visual stress (sometimes known as Meares-Irlen syndrome) is uncertain. While some theorists have hypothesised an aetiological link between the two conditions, mediated by the magnocellular visual system, at the present time the predominant theories of dyslexia and visual stress see them as distinct, unrelated…

  4. Evidence for perceptual deficits in associative visual (prosop)agnosia: a single-case study.

    PubMed

    Delvenne, Jean François; Seron, Xavier; Coyette, Françoise; Rossion, Bruno

    2004-01-01

    Associative visual agnosia is classically defined as normal visual perception stripped of its meaning [Archiv für Psychiatrie und Nervenkrankheiten 21 (1890) 22/English translation: Cognitive Neuropsychol. 5 (1988) 155]: these patients cannot access their stored visual memories to categorize objects that are nonetheless perceived correctly. However, according to an influential theory of visual agnosia [Farah, Visual Agnosia: Disorders of Object Recognition and What They Tell Us about Normal Vision, MIT Press, Cambridge, MA, 1990], visual associative agnosics necessarily present perceptual deficits that are the cause of their impairment in object recognition. Here we report a detailed investigation of a patient (NS) with bilateral occipito-temporal lesions who is strongly impaired at object and face recognition. NS presents normal drawing copy and normal performance at object and face matching tasks as used in classical neuropsychological tests. However, when tested with several computer tasks using carefully controlled visual stimuli and taking both his accuracy rate and response times into account, NS was found to have abnormal performance at high-level visual processing of objects and faces. Although NS presents a different pattern of deficits than previously described integrative agnosic patients such as HJA and LH, his deficits were characterized by an inability to integrate individual parts into a whole percept, as suggested by his failure at processing structurally impossible three-dimensional (3D) objects, an absence of face inversion effects, and an advantage at detecting and matching single parts. Taken together, these observations question the idea of separate visual representations for object/face perception and object/face knowledge derived from investigations of visual associative (prosop)agnosia, and they raise some methodological issues in the analysis of single-case studies of (prosop)agnosic patients.

  5. Numerosity underestimation with item similarity in dynamic visual display.

    PubMed

    Au, Ricky K C; Watanabe, Katsumi

    2013-01-01

    The estimation of the numerosity of a large number of objects in a static visual display is possible even at short durations. Such coarse approximations of numerosity are distinct from subitizing, in which the number of objects can be reported with high precision when a small number of objects are presented simultaneously. The present study examined numerosity estimation of visual objects in dynamic displays and the effect of object similarity on numerosity estimation. In the basic paradigm (Experiment 1), two streams of dots were presented and observers were asked to indicate which of the two streams contained more dots. Streams consisting of dots that were identical in color were judged as containing fewer dots than streams where the dots were different colors. This underestimation effect for identical visual items disappeared when the presentation rate was slower (Experiment 1) or the visual display was static (Experiment 2). In Experiments 3 and 4, in addition to the numerosity judgment task, observers performed an attention-demanding task at fixation. Task difficulty influenced observers' precision in the numerosity judgment task, but the underestimation effect remained evident irrespective of task difficulty. These results suggest that identical or similar visual objects presented in succession might induce substitution among themselves, leading to an illusion that there are fewer items overall, and that exploiting attentional resources does not eliminate the underestimation effect.

  6. Bimodal emotion congruency is critical to preverbal infants' abstract rule learning.

    PubMed

    Tsui, Angeline Sin Mei; Ma, Yuen Ki; Ho, Anna; Chow, Hiu Mei; Tseng, Chia-huei

    2016-05-01

    Extracting general rules from specific examples is important, as the same underlying challenge may be encountered in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when it was presented through visual shapes alone (circle-triangle-circle) or auditory syllables alone (la-ba-la). However, the mechanisms and constraints of this bimodal learning facilitation are still unknown. In this study, we used audio-visual relational congruency between bimodal stimulation to disentangle possible sources of the facilitation. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in the audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when the audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning depends not only on better statistical probability and redundant sensory information, but also on the relational congruency of the audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ. © 2015 John Wiley & Sons Ltd.

  7. Early, but not late visual distractors affect movement synchronization to a temporal-spatial visual cue.

    PubMed

    Booth, Ashley J; Elliott, Mark T

    2015-01-01

    The ease of synchronizing movements to a rhythmic cue is dependent on the modality of the cue presentation: timing accuracy is much higher when synchronizing with discrete auditory rhythms than an equivalent visual stimulus presented through flashes. However, timing accuracy is improved if the visual cue presents spatial as well as temporal information (e.g., a dot following an oscillatory trajectory). Similarly, when synchronizing with an auditory target metronome in the presence of a second visual distracting metronome, the distraction is stronger when the visual cue contains spatial-temporal information rather than temporal only. The present study investigates individuals' ability to synchronize movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli impacted on maintaining synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centered on a large projection screen. The target dot was surrounded by 2, 8, or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100, or 200 ms. We found participants' timing performance was only affected in the phase-lead conditions and when there were large numbers of distractors present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronize movements. Subsequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.

  8. Teaching Effectively with Visual Effect in an Image-Processing Class.

    ERIC Educational Resources Information Center

    Ng, G. S.

    1997-01-01

    Describes a course teaching the use of computers in emulating human visual capability and image processing and proposes an interactive presentation using multimedia technology to capture and sustain student attention. Describes the three-phase presentation: introduction of image processing equipment, presentation of lecture material, and…

  9. From Visual Exploration to Storytelling and Back Again.

    PubMed

    Gratzl, S; Lex, A; Gehlenborg, N; Cosgrove, N; Streit, M

    2016-06-01

    The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author "Vistories", visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract).

  10. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    PubMed Central

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

    In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  11. From Visual Exploration to Storytelling and Back Again

    PubMed Central

    Gratzl, S.; Lex, A.; Gehlenborg, N.; Cosgrove, N.; Streit, M.

    2016-01-01

    The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author “Vistories”, visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract) PMID:27942091

  12. Lightness computation by the human visual system

    NASA Astrophysics Data System (ADS)

    Rudd, Michael E.

    2017-05-01

    A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental, and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and spatial integration attentional windowing. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2) while spatial integration windowing occurs in cortical area V4 or beyond.

  13. Auditory and visual capture during focused visual attention.

    PubMed

    Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan

    2009-10-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, the authors modulated the degree of attentional focus by presenting endogenous cues with varying reliability and by displaying placeholders indicating the precise areas where the target stimuli could occur. By using not only valid and invalid exogenous cues but also neutral cues that provide temporal but no spatial information, they found performance benefits as well as costs when attention is not strongly focused. The benefits disappear when the attentional focus is increased. These results indicate that there is bottom-up capture of visual attention by irrelevant auditory and visual stimuli that cannot be suppressed by top-down attentional control. PsycINFO Database Record (c) 2009 APA, all rights reserved.

  14. Decoding visual object categories in early somatosensory cortex.

    PubMed

    Smith, Fraser W; Goodale, Melvyn A

    2015-04-01

    Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. © The Author 2013. Published by Oxford University Press.
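
    The multivariate pattern analysis referred to above classifies the pattern of activity across many voxels jointly, rather than testing any single voxel's response. A minimal sketch of the idea, using a nearest-centroid classifier with leave-one-out cross-validation on synthetic patterns (the study's actual classifier and parameters are not specified here):

```python
import random

def nearest_centroid_loocv(patterns, labels):
    """Leave-one-out decoding accuracy with a nearest-centroid classifier,
    a deliberately simple stand-in for an MVPA decoding pipeline."""
    correct = 0
    for i, (test_pat, true_lab) in enumerate(zip(patterns, labels)):
        centroids = {}
        for lab in set(labels):
            train = [p for j, (p, l) in enumerate(zip(patterns, labels))
                     if j != i and l == lab]
            # Mean training pattern (centroid) for this category.
            centroids[lab] = [sum(col) / len(train) for col in zip(*train)]
        # Assign the held-out pattern to the nearest centroid.
        guess = min(centroids, key=lambda lab: sum(
            (a - b) ** 2 for a, b in zip(test_pat, centroids[lab])))
        correct += guess == true_lab
    return correct / len(patterns)

# Hypothetical "voxel" patterns: two categories whose mean activity differs.
rng = random.Random(1)
patterns, labels = [], []
for lab, mean in (("tools", 0.0), ("animals", 0.8)):
    for _ in range(10):
        patterns.append([rng.gauss(mean, 1.0) for _ in range(50)])
        labels.append(lab)

accuracy = nearest_centroid_loocv(patterns, labels)  # well above 0.5 chance
```

    Decoding is deemed reliable when cross-validated accuracy exceeds the chance level (0.5 for two categories) consistently across subjects.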

  15. Decoding Visual Object Categories in Early Somatosensory Cortex

    PubMed Central

    Smith, Fraser W.; Goodale, Melvyn A.

    2015-01-01

    Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. PMID:24122136

  16. Sequence Diversity Diagram for comparative analysis of multiple sequence alignments.

    PubMed

    Sakai, Ryo; Aerts, Jan

    2014-01-01

    The sequence logo is a graphical representation of a set of aligned sequences, commonly used to depict conservation of amino acid or nucleotide sequences. Although it effectively communicates the amount of information present at every position, this visual representation falls short when the domain task is to compare between two or more sets of aligned sequences. We present a new visual presentation called a Sequence Diversity Diagram and validate our design choices with a case study. Our software was developed using the open-source program called Processing. It loads multiple sequence alignment FASTA files and a configuration file, which can be modified as needed to change the visualization. The redesigned figure improves on the visual comparison of two or more sets, and it additionally encodes information on sequential position conservation. In our case study of the adenylate kinase lid domain, the Sequence Diversity Diagram reveals unexpected patterns and new insights, for example the identification of subgroups within the protein subfamily. Our future work will integrate this visual encoding into interactive visualization tools to support higher level data exploration tasks.
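
    Both a sequence logo and the Sequence Diversity Diagram are driven by column-wise residue statistics over the alignment. As a minimal sketch of that underlying computation (in Python rather than the authors' Processing code, and with a hypothetical toy alignment in place of a FASTA file), per-column Shannon entropy distinguishes conserved from diverse positions:

```python
import math
from collections import Counter

def column_entropies(aligned_seqs):
    """Shannon entropy (bits) per alignment column.

    0 bits = fully conserved column; higher values = more diverse column.
    Assumes the sequences are aligned, i.e. all have the same length.
    """
    entropies = []
    for i in range(len(aligned_seqs[0])):
        counts = Counter(seq[i] for seq in aligned_seqs)
        total = sum(counts.values())
        entropies.append(sum((n / total) * math.log2(total / n)
                             for n in counts.values()))
    return entropies

# Hypothetical toy alignment: columns 0, 1, and 3 are conserved,
# while column 2 has a different residue in every sequence.
alignment = ["ACGT", "ACCT", "ACTT", "ACAT"]
print(column_entropies(alignment))  # [0.0, 0.0, 2.0, 0.0]
```

    To compare two or more sets of aligned sequences, as the diagram does, one would compute such statistics per group and visualize them side by side.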

  17. Cerebral Visual Impairment in Children: A Longitudinal Case Study of Functional Outcomes beyond the Visual Acuities

    ERIC Educational Resources Information Center

    Lam, Fook Chang; Lovett, Fiona; Dutton, Gordon N.

    2010-01-01

    Damage to the areas of the brain that are responsible for higher visual processing can lead to severe cerebral visual impairment (CVI). The prognosis for higher cognitive visual functions in children with CVI is not well described. We therefore present our six-year follow-up of a boy with CVI and highlight intervention approaches that have proved…

  18. Holography: Use in Training and Testing Drivers on the Road in Accident Avoidance.

    ERIC Educational Resources Information Center

    Frey, Allan H.; Frey, Donnalyn

    1979-01-01

    Defines holography, identifies visual factors in driving and the techniques used in on-road visual presentations, and presents the design and testing of a holographic system for driver training. (RAO)

  19. Forever young: Visual representations of gender and age in online dating sites for older adults.

    PubMed

    Gewirtz-Meydan, Ateret; Ayalon, Liat

    2017-06-13

    Online dating has become increasingly popular among older adults following broader social media adoption patterns. The current study examined the visual representations of people on 39 dating sites intended for the older population, with a particular focus on the visualization of the intersection between age and gender. All 39 dating sites for older adults were located through the Google search engine. Visual thematic analysis was performed with reference to general, non-age-related signs (e.g., facial expression, skin color), signs of aging (e.g., perceived age, wrinkles), relational features (e.g., proximity between individuals), and additional features such as number of people presented. The visual analysis in the present study revealed a clear intersection between ageism and sexism in the presentation of older adults. The majority of men and women were smiling and had a fair complexion, with light eye color and perceived age of younger than 60. Older women were presented as younger and wore more cosmetics as compared with older men. The present study stresses the social regulation of sexuality, as only heterosexual couples were presented. The narrow representation of older adults and the anti-aging messages portrayed in the pictures convey that love, intimacy, and sexual activity are for older adults who are "forever young."

  20. Stimulus modality and working memory performance in Greek children with reading disabilities: additional evidence for the pictorial superiority hypothesis.

    PubMed

    Constantinidou, Fofi; Evripidou, Christiana

    2012-01-01

    This study investigated the effects of stimulus presentation modality on working memory performance in children with reading disabilities (RD) and in typically developing children (TDC), all native speakers of Greek. It was hypothesized that the visual presentation of common objects would result in improved learning and recall performance as compared to the auditory presentation of stimuli. Twenty children, ages 10-12, diagnosed with RD were matched to 20 TDC age peers. The experimental tasks implemented a multitrial verbal learning paradigm incorporating three modalities: auditory, visual, and auditory plus visual. Significant group differences were noted on language, verbal and nonverbal memory, and measures of executive abilities. A mixed-model MANOVA indicated that children with RD had a slower learning curve and recalled fewer words than TDC across experimental modalities. Both groups of participants benefited from the visual presentation of objects; however, children with RD showed the greatest gains during this condition. In conclusion, working memory for common verbal items is impaired in children with RD; however, performance can be facilitated, and learning efficiency maximized, when information is presented visually. The results provide further evidence for the pictorial superiority hypothesis and the theory that pictorial presentation of verbal stimuli is adequate for dual coding.

  1. Tracking Learners' Visual Attention during a Multimedia Presentation in a Real Classroom

    ERIC Educational Resources Information Center

    Yang, Fang-Ying; Chang, Chun-Yen; Chien, Wan-Ru; Chien, Yu-Ta; Tseng, Yuen-Hsien

    2013-01-01

    The purpose of the study was to investigate university learners' visual attention during a PowerPoint (PPT) presentation on the topic of "Dinosaurs" in a real classroom. The presentation, which lasted for about 12-15 min, consisted of 12 slides with various text and graphic formats. An instructor gave the presentation to 21 students…

  2. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    PubMed

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

    A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. The perceptual echo is a long-lasting reverberation between a rapidly changing visual input and evoked neural activity, apparent in cross-correlations between occipital EEG and stimulus sequences, peaking in the alpha (∼10 Hz) range. We indeed found that perceptual echo is enhanced by repeatedly presenting the same visual sequence, indicating that the human visual system can rapidly and automatically learn regularities embedded within fast-changing dynamic sequences. These results point to a previously undiscovered regularity learning mechanism, operating at a rate defined by the alpha frequency. Copyright © 2017 the authors 0270-6474/17/378486-12$15.00/0.
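
    The echo is defined operationally as a stimulus-to-EEG cross-correlation at a range of lags. A minimal sketch of that computation in pure Python, with a hypothetical sampling rate and a simulated "response" that is just the delayed stimulus plus noise, not real EEG:

```python
import math
import random

def cross_correlation(stimulus, response, lag):
    """Pearson correlation between the stimulus and the response
    shifted back by `lag` samples."""
    x = stimulus[:len(stimulus) - lag]
    y = response[lag:]
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical demo: a simulated "response" that is the luminance sequence
# delayed by 16 samples (about 100 ms at an assumed 160 Hz sampling rate)
# plus a little noise, standing in for occipital EEG.
rng = random.Random(0)
luminance = [rng.random() for _ in range(1000)]
delay = 16
response = [0.0] * delay + [v + 0.1 * rng.random() for v in luminance[:-delay]]

peaks = {lag: cross_correlation(luminance, response, lag) for lag in (0, 8, 16, 24)}
# The cross-correlation peaks at the true 16-sample delay.
```

    In the actual phenomenon, this stimulus-response correlation does not simply peak at one delay but reverberates periodically at ∼10 Hz, which is what makes the echo a distinct neural signature.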

  3. The interaction between the cognitive style of field dependence and visual presentations in color, monochrome, and line drawings

    NASA Astrophysics Data System (ADS)

    Myers, Robert Gardner

    1997-12-01

    The purpose of this study was to determine whether there is a correlation between the cognitive style of field dependence and the type of visual presentation format used in a computer-based tutorial (color, black and white, or line drawings) when subjects are asked to identify human tissue samples. Two hundred-four college students enrolled in human anatomy and physiology classes at Westmoreland County Community College participated. They were first administered the Group Embedded Figures Test (GEFT) and then were divided into three groups: field-independent (score, 15-18), field-neutral (score, 11-14), and field-dependent (score, 0-10). Subjects were randomly assigned to one of the three treatment groups. Instruction was delivered by means of a computer-aided tutorial consisting of text and visuals of human tissue samples. The pretest and posttest consisted of 15 tissue samples, five from each treatment, that were imported into the HyperCard™ stack and were played using QuickTime™ movie extensions. A two-way analysis of covariance (ANCOVA) using pretest and posttest scores was used to investigate whether there is a relationship between field dependence and each of the three visual presentation formats. No significant interaction was found between individual subjects' relative degree of field dependence and any of the different visual presentation formats used in the computer-aided tutorial module, F(4,194) = 1.78, p = .1335. There was a significant difference between the students' levels of field dependence in terms of their ability to identify human tissue samples, F(2,194) = 5.83, p = .0035. Field-independent subjects scored significantly higher (M = 10.59) on the posttest than subjects who were field-dependent (M = 9.04). There was also a significant difference among the various visual presentation formats, F(2,194) = 3.78, p = .0245. Subjects assigned to the group that received the color visual presentation format scored significantly higher (M = 10.38) on the posttest measure than did those assigned to the group that received the line drawing visual presentation format (M = 8.99).

  4. Hemispheric specialization in quantification processes.

    PubMed

    Pasini, M; Tessari, A

    2001-01-01

    Three experiments were carried out to study hemispheric specialization for subitizing (the rapid enumeration of small patterns) and counting (the serial quantification process based on some formal principles). The experiments involved identifying the numerosity of dot patterns presented in one visual field (with a tachistoscopic technique, or with eye movements monitored through glasses) and comparing centrally presented dot patterns with lateralized, tachistoscopically presented digits. Our experiments showed a left visual field advantage in the identification and comparison tasks in the subitizing range, whereas a right visual field advantage was found in the comparison task for the counting range.

  5. The effect of visual representation style in problem-solving: a perspective from cognitive processes.

    PubMed

    Nyamsuren, Enkhbold; Taatgen, Niels A

    2013-01-01

    Using results from a controlled experiment and simulations based on cognitive models, we show that visual presentation style can have a significant impact on performance in a complex problem-solving task. We compared subject performances in two isomorphic, but visually different, tasks based on a card game of SET. Although subjects used the same strategy in both tasks, the difference in presentation style resulted in radically different reaction times and significant deviations in scanpath patterns in the two tasks. Results from our study indicate that low-level subconscious visual processes, such as differential acuity in peripheral vision and low-level iconic memory, can have indirect, but significant effects on decision making during a problem-solving task. We have developed two ACT-R models that employ the same basic strategy but deal with different presentation styles. Our ACT-R models confirm that changes in low-level visual processes triggered by changes in presentation style can propagate to higher-level cognitive processes. Such a domino effect can significantly affect reaction times and eye movements, without affecting the overall strategy of problem solving.

  6. The Effect of Visual Representation Style in Problem-Solving: A Perspective from Cognitive Processes

    PubMed Central

    Nyamsuren, Enkhbold; Taatgen, Niels A.

    2013-01-01

    Using results from a controlled experiment and simulations based on cognitive models, we show that visual presentation style can have a significant impact on performance in a complex problem-solving task. We compared subject performances in two isomorphic, but visually different, tasks based on a card game of SET. Although subjects used the same strategy in both tasks, the difference in presentation style resulted in radically different reaction times and significant deviations in scanpath patterns in the two tasks. Results from our study indicate that low-level subconscious visual processes, such as differential acuity in peripheral vision and low-level iconic memory, can have indirect, but significant effects on decision making during a problem-solving task. We have developed two ACT-R models that employ the same basic strategy but deal with different presentation styles. Our ACT-R models confirm that changes in low-level visual processes triggered by changes in presentation style can propagate to higher-level cognitive processes. Such a domino effect can significantly affect reaction times and eye movements, without affecting the overall strategy of problem solving. PMID:24260415

  7. Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension.

    PubMed

    Drijvers, Linda; Özyürek, Asli

    2017-01-01

    This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture). Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions. When perceiving degraded speech in a visual context, listeners benefit more from having both visual articulators present compared with 1. This benefit was larger at 6-band than 2-band noise-vocoding, where listeners can benefit from both phonological cues from visible speech and semantic cues from iconic gestures to disambiguate speech.

  8. Anisotropies in the perceived spatial displacement of motion-defined contours: opposite biases in the upper-left and lower-right visual quadrants.

    PubMed

    Fan, Zhao; Harris, John

    2010-10-12

    In a recent study (Fan, Z., & Harris, J. (2008). Perceived spatial displacement of motion-defined contours in peripheral vision. Vision Research, 48(28), 2793-2804), we demonstrated that virtual contours defined by two regions of dots moving in opposite directions were displaced perceptually in the direction of motion of the dots in the more eccentric region when the contours were viewed in the right visual field. Here, we show that the magnitude and/or direction of these displacements varies in different quadrants of the visual field. When contours were presented in the lower visual field, the direction of perceived contour displacement was consistent with that when both contours were presented in the right visual field. However, this illusory motion-induced spatial displacement disappeared when both contours were presented in the upper visual field. Also, perceived contour displacement in the direction of the more eccentric dots was larger in the right than in the left visual field, perhaps because of a hemispheric asymmetry in attentional allocation. Quadrant-based analyses suggest that the pattern of results arises from opposite directions of perceived contour displacement in the upper-left and lower-right visual quadrants, which depend on the relative strengths of two effects: a greater sensitivity to centripetal motion, and an asymmetry in the allocation of spatial attention. Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. KARL: A Knowledge-Assisted Retrieval Language. Presentation visuals. M.S. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros

    1985-01-01

    A collection of presentation visuals associated with the companion report entitled KARL: A Knowledge-Assisted Retrieval Language is presented. Information is given on data retrieval, natural language database front ends, generic design objectives, processing capabilities and the query processing cycle.

  10. Comparing Visual Representations of DNA in Two Multimedia Presentations

    ERIC Educational Resources Information Center

    Cook, Michelle; Wiebe, Eric; Carter, Glenda

    2011-01-01

    This study is part of an ongoing research project examining middle school girls' attention to and interpretation of visual representations of DNA replication. Specifically, this research examined differences between two different versions of a multimedia presentation on DNA, where the second version of the presentation was redesigned as a result…

  11. Manipulating Color and Other Visual Information Influences Picture Naming at Different Levels of Processing: Evidence from Alzheimer Subjects and Normal Controls

    ERIC Educational Resources Information Center

    Zannino, Gian Daniele; Perri, Roberta; Salamone, Giovanna; Di Lorenzo, Concetta; Caltagirone, Carlo; Carlesimo, Giovanni A.

    2010-01-01

    There is now a large body of evidence suggesting that color and photographic detail exert an effect on recognition of visually presented familiar objects. However, an unresolved issue is whether these factors act at the visual, the semantic or lexical level of the recognition process. In the present study, we investigated this issue by having…

  12. Comparison of Congruence Judgment and Auditory Localization Tasks for Assessing the Spatial Limits of Visual Capture

    PubMed Central

    Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.

    2016-01-01

    Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur were narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
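    The Bayesian inference model used in this study is not specified in the record itself; a minimal sketch of one common formulation of audio-visual causal inference (Körding-style, with purely hypothetical noise and prior parameters) might look like:

```python
import math

def p_common(xv, xa, sv, sa, sp, prior_c=0.5):
    """Posterior probability that a visual measurement xv and an auditory
    measurement xa share a common cause, given sensory noise SDs sv and sa
    and a zero-mean spatial prior with SD sp (all parameter values here
    are hypothetical, not taken from the study)."""
    # Likelihood of the pair of measurements under one shared source
    var1 = sv**2 * sa**2 + sv**2 * sp**2 + sa**2 * sp**2
    like_c1 = math.exp(-0.5 * ((xv - xa)**2 * sp**2
                               + xv**2 * sa**2
                               + xa**2 * sv**2) / var1) \
              / (2 * math.pi * math.sqrt(var1))
    # Likelihood under two independent sources
    var_v = sv**2 + sp**2
    var_a = sa**2 + sp**2
    like_c2 = math.exp(-0.5 * (xv**2 / var_v + xa**2 / var_a)) \
              / (2 * math.pi * math.sqrt(var_v * var_a))
    return like_c1 * prior_c / (like_c1 * prior_c + like_c2 * (1 - prior_c))
```

    In such a model, visual capture is likely when p_common is high (small audio-visual disparity) and unlikely when it is low (large disparity); the task-dependent prior expectation the authors describe corresponds to adjusting prior_c between sessions.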

  13. The impact of visual impairment on self-reported visual functioning in Latinos: The Los Angeles Latino Eye Study.

    PubMed

    Globe, Denise R; Wu, Joanne; Azen, Stanley P; Varma, Rohit

    2004-06-01

    To assess the association between presenting binocular visual acuity (VA) and self-reported visual function as measured by the 25-item National Eye Institute Visual Function Questionnaire (NEI-VFQ-25). A population-based, prevalence study of eye disease in Latinos 40 years and older residing in La Puente, California (Los Angeles Latino Eye Study [LALES]). Six thousand three hundred fifty-seven Latinos 40 years and older from 6 census tracts in La Puente. All participants completed a standardized interview, including the NEI-VFQ-25 to measure visual functioning, and a detailed eye examination. Two definitions of visual impairment were used: (1) presenting binocular distance VA of 20/40 or worse and (2) presenting binocular distance VA worse than 20/40. Analysis of variance was used to determine any systematic differences in mean NEI-VFQ-25 scores by visual impairment. Regression analyses were completed (1) to determine the association of age, gender, number of systemic comorbidities, depression, and VA with self-reported visual function and (2) to estimate a visual impairment-related difference for each subscale based on differences in VA. The main outcome measure was the NEI-VFQ-25 scores in persons with visual impairment. Of the 5287 LALES participants with complete NEI-VFQ-25 data, 6.3% (including 20/40) and 4.2% (excluding 20/40) were visually impaired. In the visually impaired participants, the NEI-VFQ-25 subscale scores ranged from 46.2 (General Health) to 93.8 (Color Vision). In the regression model, only VA, depression, and number of comorbidities were significantly associated with all subscale scores (R(2) ranged from 0.09 for Ocular Pain to 0.33 for the composite score). For 9 of 11 subscales, a 5-point change was equivalent to a 1- or 2-line difference in VA. Relationships were similar regardless of the definition of visual impairment. In this population-based study of Latinos, the NEI-VFQ-25 was sensitive to differences in VA. A 5-point difference on the NEI-VFQ-25 seems to be a minimal criterion for a visual impairment-related difference. Self-reported visual function is essentially unchanged whether the definition of visual impairment includes or excludes a VA of 20/40.

  14. Resources for Designing, Selecting and Teaching with Visualizations in the Geoscience Classroom

    NASA Astrophysics Data System (ADS)

    Kirk, K. B.; Manduca, C. A.; Ormand, C. J.; McDaris, J. R.

    2009-12-01

    Geoscience is a highly visual field, and effective use of visualizations can enhance student learning, appeal to students’ emotions and help them acquire skills for interpreting visual information. The On the Cutting Edge website, “Teaching Geoscience with Visualizations” presents information of interest to faculty who are teaching with visualizations, as well as those who are designing visualizations. The website contains best practices for effective visualizations, drawn from the educational literature and from experts in the field. For example, a case is made for careful selection of visualizations so that faculty can align the correct visualization with their teaching goals and audience level. Appropriate visualizations will contain the desired geoscience content without adding extraneous information that may distract or confuse students. Features such as labels, arrows and contextual information can help guide students through imagery and help to explain the relevant concepts. Because students learn by constructing their own mental image of processes, it is helpful to select visualizations that reflect the same type of mental picture that students should create. A host of recommended readings and presentations from the On the Cutting Edge visualization workshops can provide further grounding for the educational uses of visualizations. Several different collections of visualizations, datasets with visualizations and visualization tools are available on the website. Examples include animations of tsunamis, El Nino conditions, braided stream formation and mountain uplift. These collections are grouped by topic and range from simple animations to interactive models. A series of example activities that incorporate visualizations into classroom and laboratory activities illustrate various tactics for using these materials in different types of settings. 
    Activities cover topics such as ocean circulation, land use changes, earthquake simulations and the use of Google Earth to explore geologic processes. These materials can be found at http://serc.carleton.edu/NAGTWorkshops/visualization. Faculty and developers of visualization tools are encouraged to submit teaching activities, references or visualizations to the collections.

  15. Audio-Visual Stimulation in Conjunction with Functional Electrical Stimulation to Address Upper Limb and Lower Limb Movement Disorder.

    PubMed

    Kumar, Deepesh; Verma, Sunny; Bhattacharya, Sutapa; Lahiri, Uttama

    2016-06-13

    Neurological disorders often manifest themselves in the form of movement deficits on the part of the patient. The conventional rehabilitation exercises used to address these deficits, though powerful, are often monotonous in nature, and adequate audio-visual stimulation can prove motivational. In the research presented here we demonstrate the applicability of audio-visual stimulation to rehabilitation exercises that address at least some of the movement deficits of the upper and lower limbs. In addition to the audio-visual stimulation, we also use Functional Electrical Stimulation (FES), and we show the applicability of FES in conjunction with audio-visual stimulation, delivered through a VR-based platform, to the grasping skills of patients with movement disorders.

  16. Ipsilateral visual illusion after unilateral posterior cerebral artery infarction: a report of two cases.

    PubMed

    Hong, Yoon Hee; Lim, Tae-Sung; Yong, Suk Woo; Moon, So Young

    2010-08-15

    In cases of unilateral posterior cerebral artery (PCA) infarction, abnormal visual perception in the ipsilateral visual field, which is usually believed to be intact, is encountered infrequently and may confuse doctors during evaluation. Recently, we observed two patients who presented with contralateral hemianopsia accompanied by ipsilateral visual illusions after acute unilateral PCA infarctions. Their visual illusions were characterized by zooming in, macropsia or micropsia. These symptoms appeared to be related to deficits in size constancy. The lesions of both patients involved the ipsilateral forceps major. The consistent presentation observed in these two patients suggests that dominance of size constancy can be located in the left hemisphere in some individuals. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  17. Complete scanpaths analysis toolbox.

    PubMed

    Augustyniak, Piotr; Mikrut, Zbigniew

    2006-01-01

    This paper presents a complete open software environment for the control, data processing and assessment of visual experiments. Visual experiments are widely used in research on human perception physiology, and the results are applicable to various visual information-based man-machine interfaces, human-emulated automatic visual systems and scanpath-based learning of perceptual habits. The toolbox is designed for the Matlab platform and supports an infrared reflection-based eye tracker in calibration and scanpath analysis modes. Toolbox procedures are organized in three layers: the lower one communicating with the eye tracker output file, the middle one detecting scanpath events on a physiological background, and the upper one consisting of experiment schedule scripts, statistics and summaries. Several examples of visual experiments carried out with the presented toolbox complete the paper.
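    The record does not say how the middle layer detects scanpath events; a common approach in such toolboxes is velocity-threshold identification (I-VT), sketched below in Python rather than the toolbox's Matlab (the 30 deg/s threshold and minimum fixation length are hypothetical defaults):

```python
def detect_fixations(samples, dt, v_thresh=30.0):
    """Classify gaze samples into fixations with a simple velocity-threshold
    (I-VT) rule. samples is a list of (x, y) gaze positions in degrees of
    visual angle, dt the sampling interval in seconds, and v_thresh a
    hypothetical saccade-velocity threshold in deg/s."""
    events, start = [], 0
    for i in range(1, len(samples)):
        dx = samples[i][0] - samples[i - 1][0]
        dy = samples[i][1] - samples[i - 1][1]
        velocity = (dx**2 + dy**2) ** 0.5 / dt
        if velocity > v_thresh:
            # Saccadic sample: close any fixation of at least 2 samples
            if i - start >= 2:
                events.append(("fixation", start, i - 1))
            start = i
    if len(samples) - start >= 2:
        events.append(("fixation", start, len(samples) - 1))
    return events
```

    The event list (fixation onsets and offsets) is the kind of intermediate output an upper scripting layer could then summarize into scanpath statistics.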

  18. Executive and Perceptual Distraction in Visual Working Memory

    PubMed Central

    2017-01-01

    The contents of visual working memory are likely to reflect the influence of both executive control resources and information present in the environment. We investigated whether executive attention is critical in the ability to exclude unwanted stimuli by introducing concurrent potentially distracting irrelevant items to a visual working memory paradigm, and manipulating executive load using simple or more demanding secondary verbal tasks. Across 7 experiments varying in presentation format, timing, stimulus set, and distractor number, we observed clear disruptive effects of executive load and visual distraction, but relatively minimal evidence supporting an interactive relationship between these factors. These findings are in line with recent evidence using delay-based interference, and suggest that different forms of attentional selection operate relatively independently in visual working memory. PMID:28414499

  19. Space shuttle visual simulation system design study

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A recommendation and a specification for the visual simulation system design for the space shuttle mission simulator are presented. A recommended visual system is described which most nearly meets the visual design requirements. The cost analysis of the recommended system covering design, development, manufacturing, and installation is reported. Four alternate systems are analyzed.

  20. A Neural Theory of Visual Attention: Bridging Cognition and Neurophysiology

    ERIC Educational Resources Information Center

    Bundesen, Claus; Habekost, Thomas; Kyllingsbaek, Soren

    2005-01-01

    A neural theory of visual attention (NTVA) is presented. NTVA is a neural interpretation of C. Bundesen's (1990) theory of visual attention (TVA). In NTVA, visual processing capacity is distributed across stimuli by dynamic remapping of receptive fields of cortical cells such that more processing resources (cells) are devoted to behaviorally…

  1. The Development of a Visual-Perceptual Chemistry Specific (VPCS) Assessment Tool

    ERIC Educational Resources Information Center

    Oliver-Hoyo, Maria; Sloan, Caroline

    2014-01-01

    The development of the Visual-Perceptual Chemistry Specific (VPCS) assessment tool is based on items that align to eight visual-perceptual skills considered as needed by chemistry students. This tool includes a comprehensive range of visual operations and presents items within a chemistry context without requiring content knowledge to solve…

  2. The Role of the Human Extrastriate Visual Cortex in Mirror Symmetry Discrimination: A TMS-Adaptation Study

    ERIC Educational Resources Information Center

    Cattaneo, Zaira; Mattavelli, Giulia; Papagno, Costanza; Herbert, Andrew; Silvanto, Juha

    2011-01-01

    The human visual system is able to efficiently extract symmetry information from the visual environment. Prior neuroimaging evidence has revealed symmetry-preferring neuronal representations in the dorsolateral extrastriate visual cortex; the objective of the present study was to investigate the necessity of these representations in symmetry…

  3. Method Matters: Systematic Effects of Testing Procedure on Visual Working Memory Sensitivity

    ERIC Educational Resources Information Center

    Makovski, Tal; Watson, Leah M.; Koutstaal, Wilma; Jiang, Yuhong V.

    2010-01-01

    Visual working memory (WM) is traditionally considered a robust form of visual representation that survives changes in object motion, observer's position, and other visual transients. This article presents data that are inconsistent with the traditional view. We show that memory sensitivity is dramatically influenced by small variations in the…

  4. M-Stream Deficits and Reading-Related Visual Processes in Developmental Dyslexia

    ERIC Educational Resources Information Center

    Boden, Catherine; Giaschi, Deborah

    2007-01-01

    Some visual processing deficits in developmental dyslexia have been attributed to abnormalities in the subcortical M stream and/or the cortical dorsal stream of the visual pathways. The nature of the relationship between these visual deficits and reading is unknown. The purpose of the present article was to characterize reading-related perceptual…

  5. Visualizing a High Recall Search Strategy Output for Undergraduates in an Exploration Stage of Researching a Term Paper.

    ERIC Educational Resources Information Center

    Cole, Charles; Mandelblatt, Bertie; Stevenson, John

    2002-01-01

    Discusses high recall search strategies for undergraduates and how to overcome information overload that results. Highlights include word-based versus visual-based schemes; five summarization and visualization schemes for presenting information retrieval citation output; and results of a study that recommend visualization schemes geared toward…

  6. On the possible roles of microsaccades and drifts in visual perception.

    PubMed

    Ahissar, Ehud; Arieli, Amos; Fried, Moshe; Bonneh, Yoram

    2016-01-01

    During natural viewing, large saccades shift the visual gaze from one target to another every few hundred milliseconds. The role of microsaccades (MSs), small saccades that show up during long fixations, is still debated. A major debate is whether MSs are used to redirect the visual gaze to a new location or to encode visual information through their movement. We argue that these two functions cannot be optimized simultaneously and present several pieces of evidence suggesting that MSs redirect the visual gaze and that the visual details are sampled and encoded by ocular drifts. We show that drift movements are indeed suitable for visual encoding. Yet, it is not clear to what extent drift movements are controlled by the visual system, and to what extent they interact with saccadic movements. We analyze several possible control schemes for saccadic and drift movements and propose experiments that can discriminate between them. We present the results of preliminary analyses of existing data as a sanity check on the testability of our predictions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Task-dependent engagements of the primary visual cortex during kinesthetic and visual motor imagery.

    PubMed

    Mizuguchi, Nobuaki; Nakamura, Maiko; Kanosue, Kazuyuki

    2017-01-01

    Motor imagery can be divided into kinesthetic and visual aspects. In the present study, we investigated excitability in the corticospinal tract and primary visual cortex (V1) during kinesthetic and visual motor imagery. To accomplish this, we measured motor evoked potentials (MEPs) and probability of phosphene occurrence during the two types of motor imageries of finger tapping. The MEPs and phosphenes were induced by transcranial magnetic stimulation to the primary motor cortex and V1, respectively. The amplitudes of MEPs and probability of phosphene occurrence during motor imagery were normalized based on the values obtained at rest. Corticospinal excitability increased during both kinesthetic and visual motor imagery, while excitability in V1 was increased only during visual motor imagery. These results imply that modulation of cortical excitability during kinesthetic and visual motor imagery is task dependent. The present finding aids in the understanding of the neural mechanisms underlying motor imagery and provides useful information for the use of motor imagery in rehabilitation or motor imagery training. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Visual masking and the dynamics of human perception, cognition, and consciousness: A century of progress, a contemporary synthesis, and future directions.

    PubMed

    Ansorge, Ulrich; Francis, Gregory; Herzog, Michael H; Oğmen, Haluk

    2008-07-15

    The 1990s, the "decade of the brain," witnessed major advances in the study of visual perception, cognition, and consciousness. Impressive techniques in neurophysiology, neuroanatomy, neuropsychology, electrophysiology, psychophysics and brain-imaging were developed to address how the nervous system transforms and represents visual inputs. Many of these advances have dealt with the steady-state properties of processing. To complement this "steady-state approach," more recent research emphasized the importance of dynamic aspects of visual processing. Visual masking has been a paradigm of choice for more than a century when it comes to the study of dynamic vision. A recent workshop (http://lpsy.epfl.ch/VMworkshop/), held in Delmenhorst, Germany, brought together an international group of researchers to present state-of-the-art research on dynamic visual processing with a focus on visual masking. This special issue presents peer-reviewed contributions by the workshop participants and provides a contemporary synthesis of how visual masking can inform the dynamics of human perception, cognition, and consciousness.

  9. Visual masking and the dynamics of human perception, cognition, and consciousness: A century of progress, a contemporary synthesis, and future directions

    PubMed Central

    Ansorge, Ulrich; Francis, Gregory; Herzog, Michael H.; Öğmen, Haluk

    2008-01-01

    The 1990s, the “decade of the brain,” witnessed major advances in the study of visual perception, cognition, and consciousness. Impressive techniques in neurophysiology, neuroanatomy, neuropsychology, electrophysiology, psychophysics and brain-imaging were developed to address how the nervous system transforms and represents visual inputs. Many of these advances have dealt with the steady-state properties of processing. To complement this “steady-state approach,” more recent research emphasized the importance of dynamic aspects of visual processing. Visual masking has been a paradigm of choice for more than a century when it comes to the study of dynamic vision. A recent workshop (http://lpsy.epfl.ch/VMworkshop/), held in Delmenhorst, Germany, brought together an international group of researchers to present state-of-the-art research on dynamic visual processing with a focus on visual masking. This special issue presents peer-reviewed contributions by the workshop participants and provides a contemporary synthesis of how visual masking can inform the dynamics of human perception, cognition, and consciousness. PMID:20517493

  10. Burden, etiology and predictors of visual impairment among children attending Mulago National Referral Hospital eye clinic, Uganda.

    PubMed

    Kinengyere, Patience; Kizito, Samuel; Kiggundu, John Baptist; Ampaire, Anne; Wabulembo, Geoffrey

    2017-09-01

    Childhood visual impairment (CVI) has not been given due attention, and knowledge of CVI is important in planning preventive measures. The aim of this study was to determine the prevalence, etiology and factors associated with childhood visual impairment among children attending the eye clinic in Mulago National Referral Hospital. This was a cross-sectional, hospital-based study of 318 children attending the Mulago Hospital eye clinic between January and March 2015. Ocular and general histories were taken and patient examinations done. The data generated were entered in EpiData and analyzed with STATA 12. The prevalence of CVI was 42.14% (134 patients), with 49 patients (15.41%) having moderate visual impairment, 45 (14.15%) having severe visual impairment and 40 (12.58%) presenting with blindness. Significant predictors included increasing age, delayed developmental milestones and abnormal corneal, refractive and fundus findings. There is a high burden of visual impairment among children in Uganda, and it is vital to screen all children presenting to hospital for visual impairment. The majority of the causes of visual impairment are preventable.

  11. Evaluation of angiogram visualization methods for fast and reliable aneurysm diagnosis

    NASA Astrophysics Data System (ADS)

    Lesar, Žiga; Bohak, Ciril; Marolt, Matija

    2015-03-01

    In this paper we present the results of an evaluation of different visualization methods for angiogram volumetric data: ray casting, marching cubes, and multi-level partition of unity implicits. Several options are available with ray casting: isosurface extraction, maximum intensity projection and alpha compositing, each producing fundamentally different results. Different visualization methods are suitable for different needs, so this choice is crucial in diagnosis and decision-making processes. We also evaluate visual effects such as ambient occlusion, screen-space ambient occlusion, and depth of field. Some visualization methods include transparency, so we address the question of the relevance of this additional visual information. We employ transfer functions to map data values to color and transparency, allowing us to view or hide particular tissues. All the methods presented in this paper were developed using OpenCL, striving for real-time rendering and quality interaction. An evaluation was conducted to assess the suitability of the visualization methods. Results show the superiority of isosurface extraction with ambient occlusion effects. Visual effects may positively or negatively affect the perception of depth, motion, and relative positions in space.
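    The record contrasts maximum intensity projection with alpha compositing but does not spell out the difference; for a single ray through the volume, the two reductions can be sketched as follows (the transfer function below is a hypothetical scalar stand-in, not the one used in the paper):

```python
def ray_mip(samples):
    """Maximum intensity projection: keep only the brightest sample
    encountered along the ray."""
    return max(samples)

def ray_composite(samples, transfer):
    """Front-to-back alpha compositing along one ray. transfer maps a
    sample value to (color, alpha); both are scalars here for brevity."""
    color, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer(s)
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:  # early ray termination once nearly opaque
            break
    return color

def transfer(v):
    """Hypothetical transfer function: high-density (vessel) voxels are
    bright and opaque, soft tissue dim and translucent."""
    return (v, 0.8) if v > 0.5 else (v * 0.2, 0.05)
```

    MIP discards depth ordering entirely, while compositing accumulates color weighted by remaining transparency, which is why the two produce fundamentally different images from the same angiogram data.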

  12. Visual field tunneling in aviators induced by memory demands.

    PubMed

    Williams, L J

    1995-04-01

    Aviators are required to rapidly and accurately process enormous amounts of visual information located foveally and peripherally. The present study, expanding upon an earlier study (Williams, 1988), required young aviators to process, within the framework of a single eye fixation, a briefly displayed, foveally presented memory load while simultaneously trying to identify common peripheral targets presented on the same display at locations up to 4.5 degrees of visual angle from the fixation point. This task, as well as a character classification task (Williams, 1985, 1988), has been shown to be very difficult for nonaviators: it results in a tendency toward tunnel vision. Limited preliminary measurements of peripheral accuracy suggested that aviators might be less susceptible than nonaviators to this visual tunneling. The present study demonstrated moderate susceptibility to cognitively induced tunneling in aviators when the foveal task was sufficiently difficult and reaction time was the principal dependent measure.

  13. Crossmodal attention switching: auditory dominance in temporal discrimination tasks.

    PubMed

    Lukas, Sarah; Philipp, Andrea M; Koch, Iring

    2014-11-01

    Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Optimization of Visual Information Presentation for Visual Prosthesis.

    PubMed

    Guo, Fei; Yang, Yuan; Gao, Yong

    2018-01-01

    Visual prosthesis applying electrical stimulation to restore visual function for the blind has promising prospects. However, due to the low resolution, limited visual field, and the low dynamic range of the visual perception, huge loss of information occurred when presenting daily scenes. The ability of object recognition in real-life scenarios is severely restricted for prosthetic users. To overcome the limitations, optimizing the visual information in the simulated prosthetic vision has been the focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable the prosthetic implants to focus on the object of interest and suppress the background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming technique, the two processing strategies significantly improve the recognition accuracy of objects. We can conclude that the visual prosthesis using our proposed strategy can assist the blind to improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prosthesis.
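    The two strategies in this record both funnel a scene into a coarse phosphene grid; the core operations (cropping to a detected object, then averaging down to the grid's resolution) can be sketched as below. The grid sizes are hypothetical, and the bounding box is assumed to come from a salient object detector that is not reimplemented here:

```python
def downsample(image, out_h, out_w):
    """Average-pool a grayscale image (list of rows of numbers) down to a
    coarse out_h x out_w phosphene grid, mimicking the limited resolution
    of simulated prosthetic vision."""
    in_h, in_w = len(image), len(image[0])
    grid = []
    for r in range(out_h):
        row = []
        for c in range(out_w):
            r0, r1 = r * in_h // out_h, (r + 1) * in_h // out_h
            c0, c1 = c * in_w // out_w, (c + 1) * in_w // out_w
            block = [image[i][j] for i in range(r0, r1) for j in range(c0, c1)]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid

def crop_to_foreground(image, box):
    """Foreground zooming: crop to an object's bounding box
    (box = (top, left, bottom, right), assumed to be supplied by a
    salient object detector) before downsampling, so the object of
    interest fills the phosphene grid and background clutter is removed."""
    top, left, bottom, right = box
    return [row[left:right] for row in image[top:bottom]]
```

    Composing the two, downsample(crop_to_foreground(img, box), 32, 32) is the kind of pipeline the zooming strategy describes; the edge-detection strategy would instead threshold local gradients before the same downsampling step.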

  15. Optimization of Visual Information Presentation for Visual Prosthesis

    PubMed Central

    Gao, Yong

    2018-01-01

    Visual prosthesis applying electrical stimulation to restore visual function for the blind has promising prospects. However, due to the low resolution, limited visual field, and the low dynamic range of the visual perception, huge loss of information occurred when presenting daily scenes. The ability of object recognition in real-life scenarios is severely restricted for prosthetic users. To overcome the limitations, optimizing the visual information in the simulated prosthetic vision has been the focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable the prosthetic implants to focus on the object of interest and suppress the background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming technique, the two processing strategies significantly improve the recognition accuracy of objects. We can conclude that the visual prosthesis using our proposed strategy can assist the blind to improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prosthesis. PMID:29731769

  16. The influence of selective attention to auditory and visual speech on the integration of audiovisual speech information.

    PubMed

    Buchan, Julie N; Munhall, Kevin G

    2011-01-01

    Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.

  17. Investigation of the sensitivity of a cross-polarized light visualization system to detect subclinical erythema and dryness in women with vulvovaginitis.

    PubMed

    Farage, Miranda A; Singh, Mukul; Ledger, William J

    2009-07-01

    An enhanced visualization technique using polarized light (Syris v600 enhanced visualization system; Syris Scientific LLC, Gray, ME) detects surface and subsurface (approximately 1 mm depth) inflammation. We sought to compare the Syris v600 system with unaided visual inspection and colposcopy of the female genitalia. Erythema and dryness of the vulva, introitus, vagina, and cervix were visualized and scored by each method in patients with and without vulvitis. Subsurface visualization was more sensitive in detecting genital erythema and dryness at all sites whether or not symptoms were present. Subsurface inflammation of the introitus, vagina, and cervix only was detected uniquely in women with vulvar vestibulitis syndrome (VVS). A subset of women presenting with VVS exhibited subclinical inflammation of the vulvar vestibule and vagina (designated VVS/lichen sclerosus subgroup). Enhanced visualization of the genital epithelial subsurface with cross-polarized light may assist in diagnosing subclinical inflammation in vulvar conditions heretofore characterized as sensory syndromes.

  18. Modulation of visually evoked movement responses in moving virtual environments.

    PubMed

    Reed-Jones, Rebecca J; Vallis, Lori Ann

    2009-01-01

    Virtual-reality technology is being increasingly used to understand how humans perceive and act in the moving world around them. What is currently not clear is how virtual reality technology is perceived by human participants and what virtual scenes are effective in evoking movement responses to visual stimuli. We investigated the effect of virtual-scene context on human responses to a virtual visual perturbation. We hypothesised that exposure to a natural scene that matched the visual expectancies of the natural world would create a perceptual set towards presence, and thus visual guidance of body movement in a subsequently presented virtual scene. Results supported this hypothesis; responses to a virtual visual perturbation presented in an ambiguous virtual scene were increased when participants first viewed a scene that consisted of natural landmarks which provided 'real-world' visual motion cues. Further research in this area will provide a basis of knowledge for the effective use of this technology in the study of human movement responses.

  19. Video content parsing based on combined audio and visual information

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-08-01

    While previous research on audiovisual data segmentation and indexing has focused primarily on the pictorial part, significant clues contained in the accompanying audio stream are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. In this research, by investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Each audio scene is then categorized and indexed as one of the basic audio types, while each visual shot is represented by keyframes and associated image features. An index table is then generated automatically for each video clip based on the integration of outputs from the audio and visual analysis. It is shown that the proposed system provides satisfactory video indexing results.
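    The visual-shot segmentation step, detecting abrupt changes in visual features, can be sketched with one classic cue: thresholding the histogram difference between consecutive frames. This is a generic illustration of the idea, not the specific features or thresholds used in this system.

```python
import numpy as np

def shot_boundaries(frames, bins=16, thresh=0.5):
    """Detect abrupt visual changes by thresholding the normalized
    histogram difference between consecutive frames."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0.0, 1.0))
        hists.append(h / h.sum())
    cuts = []
    for i in range(1, len(hists)):
        d = 0.5 * np.abs(hists[i] - hists[i - 1]).sum()  # distance in [0, 1]
        if d > thresh:
            cuts.append(i)
    return cuts

# toy clip: 10 dark frames, then 10 bright frames -> one cut at frame 10
rng = np.random.default_rng(0)
dark = [rng.uniform(0.0, 0.3, (32, 32)) for _ in range(10)]
bright = [rng.uniform(0.7, 1.0, (32, 32)) for _ in range(10)]
print(shot_boundaries(dark + bright))  # → [10]
```

    The audio-scene segmentation described in the abstract follows the same pattern, applied to audio features (e.g. energy or spectral statistics) instead of pixel histograms.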

  20. JPL Earth Science Center Visualization Multitouch Table

    NASA Astrophysics Data System (ADS)

    Kim, R.; Dodge, K.; Malhotra, S.; Chang, G.

    2014-12-01

    The JPL Earth Science Center Visualization Table is a specialized software and hardware system that supports multitouch, multiuser, and remote display control to create a seamlessly integrated experience for visualizing JPL missions and their remote sensing data. The software is fully GIS capable through time-aware OGC WMTS, using the Lunar Mapping and Modeling Portal as the GIS backend to continuously ingest and retrieve real-time remote sensing data and satellite location data. The 55-inch and 82-inch multitouch displays, which support an unlimited number of simultaneous touches, allow multiple users to explore JPL Earth missions and visualize remote sensing data through an intuitive, interactive touch interface. To improve the integrated experience, the Earth Science Center Visualization Table team developed network streaming, which allows the table software to stream its data visualization to nearby remote displays over a computer network. The tool is intended not only to support Earth science operations but also, by design, to serve education and public outreach, and will contribute significantly to STEM. Our presentation will include an overview of the software and hardware and a showcase of the system.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean

    A new field of research, visual analytics, has recently been introduced. It has been defined as "the science of analytical reasoning facilitated by visual interfaces." Visual analytic environments, therefore, support analytical reasoning using visual representations and interactions, with data representations and transformation capabilities, to support production, presentation and dissemination. As researchers begin to develop visual analytic environments, it will be advantageous to develop metrics and methodologies to help researchers measure the progress of their work and understand the impact their work will have on the users who will work in such environments. This paper presents five areas or aspects of visual analytic environments that should be considered as metrics and methodologies for evaluation are developed. Evaluation aspects need to include usability, but it is necessary to go beyond basic usability. The areas of situation awareness, collaboration, interaction, creativity, and utility are proposed as areas for initial consideration. The steps that need to be undertaken to develop systematic evaluation methodologies and metrics for visual analytic environments are outlined.

  2. Emotion Separation Is Completed Early and It Depends on Visual Field Presentation

    PubMed Central

    Liu, Lichan; Ioannides, Andreas A.

    2010-01-01

    It is now apparent that the visual system reacts to stimuli very fast, with many brain areas activated within 100 ms. It is, however, unclear how much detail is extracted about stimulus properties in the early stages of visual processing. Here, using magnetoencephalography we show that the visual system separates different facial expressions of emotion well within 100 ms after image onset, and that this separation is processed differently depending on where in the visual field the stimulus is presented. Seven right-handed males participated in a face affect recognition experiment in which they viewed happy, fearful and neutral faces. Blocks of images were shown either at the center or in one of the four quadrants of the visual field. For centrally presented faces, the emotions were separated fast, first in the right superior temporal sulcus (STS; 35–48 ms), followed by the right amygdala (57–64 ms) and medial pre-frontal cortex (83–96 ms). For faces presented in the periphery, the emotions were separated first in the ipsilateral amygdala and contralateral STS. We conclude that amygdala and STS likely play a different role in early visual processing, recruiting distinct neural networks for action: the amygdala alerts sub-cortical centers for appropriate autonomic system response for fight or flight decisions, while the STS facilitates more cognitive appraisal of situations and links appropriate cortical sites together. It is then likely that different problems may arise when either network fails to initiate or function properly. PMID:20339549

  3. Drawing Connections Across Conceptually Related Visual Representations in Science

    NASA Astrophysics Data System (ADS)

    Hansen, Janice

    This dissertation explored beliefs about learning from multiple related visual representations in science, and compared those beliefs to learning outcomes. Three research questions were explored: 1) What beliefs do pre-service teachers, non-educators and children have about learning from visual representations? 2) What format of presenting those representations is most effective for learning? And, 3) Can children's ability to process conceptually related science diagrams be enhanced with added support? Three groups of participants, 89 pre-service teachers, 211 adult non-educators, and 385 middle school children, were surveyed about whether they felt related visual representations presented serially or simultaneously would lead to better learning outcomes. Two experiments, one with adults and one with child participants, explored the validity of these beliefs. Pre-service teachers did not endorse either serial or simultaneous related visual representations for their own learning. They were, however, significantly more likely to indicate that children would learn better from serially presented diagrams. In direct contrast to the educators, middle school students believed they would learn better from related visual representations presented simultaneously. Experimental data indicated that the beliefs adult non-educators held about their own learning needs matched learning outcomes: these participants endorsed simultaneous presentation of related diagrams for their own learning, and comparisons of learning from related diagrams presented simultaneously versus serially indicated that those in the simultaneous condition were able to create more complex mental models. A second experiment compared children's learning from related diagrams across four randomly-assigned conditions: serial, simultaneous, simultaneous with signaling, and simultaneous with structure mapping support.
Providing middle school students with simultaneous related diagrams with support for structure mapping led to a lessened reliance on surface features, and a better understanding of the science concepts presented. These findings suggest that presenting diagrams serially in an effort to reduce cognitive load may not be preferable for learning if making connections across representations, and by extension across science concepts, is desired. Instead, providing simultaneous diagrams with structure mapping support may result in greater attention to the salient relationships between related visual representations as well as between the representations and the science concepts they depict.

  4. [Design and optimization of wireless power and data transmission for visual prosthesis].

    PubMed

    Lei, Xuping; Wu, Kaijie; Zhao, Lei; Chai, Xinyu

    2013-11-01

    Boosting the spatial resolution of visual prostheses is an effective way to improve implant recipients' visual perception. However, the power consumption of visual implants rises greatly with the increasing number of implanted electrodes. In response to this trend, visual prostheses need high-efficiency wireless power transmission and high-speed data transmission. This paper reviews current research progress on wireless power and data transmission for visual prostheses, analyzes the relevant principles and requirements, and introduces design methods for power and data transmission.

  5. Cranial Nerve II

    PubMed Central

    Gillig, Paulette Marie; Sanders, Richard D.

    2009-01-01

    This article contains a brief review of the anatomy of the visual system, a survey of diseases of the retina, optic nerve and lesions of the optic chiasm, and other visual field defects of special interest to the psychiatrist. It also includes a presentation of the corticothalamic mechanisms, differential diagnosis, and various manifestations of visual illusions, and simple and complex visual hallucinations, as well as the differential diagnoses of these various visual phenomena. PMID:19855858

  6. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap.

    PubMed

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  8. Grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents.

    PubMed

    Li, Wenjing; Li, Jianhong; Wang, Zhenchang; Li, Yong; Liu, Zhaohui; Yan, Fei; Xian, Junfang; He, Huiguang

    2015-01-01

    Previous studies have shown brain reorganization after early deprivation of auditory input. However, changes in grey matter connectivity have not yet been investigated in prelingually deaf adolescents. In the present study, we aimed to investigate changes of grey matter connectivity within and between the auditory, language and visual systems in prelingually deaf adolescents. We recruited 16 prelingually deaf adolescents and 16 age- and gender-matched normal controls, and extracted grey matter volume as the structural characteristic from 14 regions of interest involved in auditory, language or visual processing. Sparse inverse covariance estimation (SICE) was utilized to construct grey matter connectivity between these brain regions. The results show that prelingually deaf adolescents exhibit weaker grey matter connectivity within the auditory and visual systems, as well as declined connectivity between the language and visual systems. Notably, significantly increased connectivity was found between the auditory and visual systems in prelingually deaf adolescents. Our results indicate "cross-modal" plasticity after deprivation of the auditory input, especially between the auditory and visual systems. In addition, auditory deprivation and visual deficits might affect the connectivity pattern within the language and visual systems in prelingually deaf adolescents.
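    For illustration, connectivity between regional grey matter volumes can be sketched via a regularized inverse covariance (precision) matrix, whose off-diagonal entries give partial correlations between regions. Note the hedge: this Tikhonov-regularized stand-in is not the study's SICE, whose L1 penalty additionally drives weak connections exactly to zero; the toy data and ridge value are our own.

```python
import numpy as np

def partial_correlations(X, ridge=0.1):
    """Regularized inverse covariance -> partial-correlation matrix.
    (A ridge-regularized stand-in for sparse inverse covariance
    estimation; SICE would use an L1-penalized likelihood instead.)"""
    S = np.cov(X, rowvar=False)
    P = np.linalg.inv(S + ridge * np.eye(S.shape[0]))  # precision matrix
    d = np.sqrt(np.diag(P))
    R = -P / np.outer(d, d)      # rho_ij = -P_ij / sqrt(P_ii * P_jj)
    np.fill_diagonal(R, 1.0)
    return R

# toy "grey matter volumes": 3 regions, region 2 closely tracks region 0
rng = np.random.default_rng(1)
a = rng.normal(size=200)
X = np.column_stack([a, rng.normal(size=200), a + 0.1 * rng.normal(size=200)])
R = partial_correlations(X)
print(R[0, 2] > R[0, 1])  # regions 0 and 2 show the stronger connection
```

    In the study, the rows of X would be subjects and the columns the 14 regions of interest, with the estimation run separately for the deaf and control groups before comparing connectivity patterns.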

  9. Disentangling fine motor skills' relations to academic achievement: the relative contributions of visual-spatial integration and visual-motor coordination.

    PubMed

    Carlson, Abby G; Rowe, Ellen; Curby, Timothy W

    2013-01-01

    Recent research has established a connection between children's fine motor skills and their academic performance. Previous research has focused on fine motor skills measured prior to elementary school, while the present sample included children ages 5-18 years old, making it possible to examine whether this link remains relevant throughout childhood and adolescence. Furthermore, the majority of research linking fine motor skills and academic achievement has not determined which specific components of fine motor skill are driving this relation. The few studies that have looked at associations of separate fine motor tasks with achievement suggest that copying tasks that tap visual-spatial integration skills are most closely related to achievement. The present study examined two separate elements of fine motor skills--visual-motor coordination and visual-spatial integration--and their associations with various measures of academic achievement. Visual-motor coordination was measured using tracing tasks, while visual-spatial integration was measured using copy-a-figure tasks. After controlling for gender, socioeconomic status, IQ, and visual-motor coordination, visual-spatial integration explained significant variance in children's math and written expression achievement. Knowing that visual-spatial integration skills are associated with these two achievement domains suggests potential avenues for targeted math and writing interventions for children of all ages.

  10. Retinotopic Maps, Spatial Tuning, and Locations of Human Visual Areas in Surface Coordinates Characterized with Multifocal and Blocked fMRI Designs

    PubMed Central

    Henriksson, Linda; Karvonen, Juha; Salminen-Vaparanta, Niina; Railo, Henry; Vanni, Simo

    2012-01-01

    The localization of visual areas in the human cortex is typically based on mapping the retinotopic organization with functional magnetic resonance imaging (fMRI). The most common approach is to encode the response phase for a slowly moving visual stimulus and to present the result on an individual's reconstructed cortical surface. The main aims of this study were to develop complementary general linear model (GLM)-based retinotopic mapping methods and to characterize the inter-individual variability of the visual area positions on the cortical surface. We studied 15 subjects with two methods: a 24-region multifocal checkerboard stimulus and a blocked presentation of object stimuli at different visual field locations. The retinotopic maps were based on weighted averaging of the GLM parameter estimates for the stimulus regions. In addition to localizing visual areas, both methods could be used to localize multiple retinotopic regions-of-interest. The two methods yielded consistent retinotopic maps in the visual areas V1, V2, V3, hV4, and V3AB. In the higher-level areas IPS0, VO1, LO1, LO2, TO1, and TO2, retinotopy could only be mapped with the blocked stimulus presentation. The gradual widening of spatial tuning and an increase in the responses to stimuli in the ipsilateral visual field along the hierarchy of visual areas likely reflected the increase in the average receptive field size. Finally, after registration to Freesurfer's surface-based atlas of the human cerebral cortex, we calculated the mean and variability of the visual area positions in the spherical surface-based coordinate system and generated probability maps of the visual areas on the average cortical surface. The inter-individual variability in the area locations decreased when the midpoints were calculated along the spherical cortical surface compared with volumetric coordinates. 
These results can facilitate both analysis of individual functional anatomy and comparisons of visual cortex topology across studies. PMID:22590626
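    The core of the GLM-based mapping described above, weighted averaging of parameter estimates over stimulus regions, can be sketched as follows. The region layout and beta values are hypothetical numbers for illustration, not data from the study.

```python
import numpy as np

def preferred_position(betas, region_centers):
    """A voxel's retinotopic preference as the beta-weighted average of
    the stimulus-region centers (only positive estimates contribute)."""
    w = np.clip(betas, 0, None)
    return (w[:, None] * region_centers).sum(axis=0) / w.sum()

# 4 stimulus regions at the corners of the visual field (x, y in degrees)
centers = np.array([[-4, 4], [4, 4], [-4, -4], [4, -4]], dtype=float)
betas = np.array([0.1, 3.0, 0.0, 1.0])   # voxel responds mostly upper-right
print(preferred_position(betas, centers))
```

    Repeating this for every voxel and projecting the result onto the reconstructed cortical surface yields the kind of retinotopic map the study uses to delineate visual areas; the actual multifocal design used 24 regions rather than 4.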

  11. Neurons in the pigeon caudolateral nidopallium differentiate Pavlovian conditioned stimuli but not their associated reward value in a sign-tracking paradigm

    PubMed Central

    Kasties, Nils; Starosta, Sarah; Güntürkün, Onur; Stüttgen, Maik C.

    2016-01-01

    Animals exploit visual information to identify objects, form stimulus-reward associations, and prepare appropriate behavioral responses. The nidopallium caudolaterale (NCL), an associative region of the avian endbrain, contains neurons exhibiting prominent response modulation during presentation of reward-predicting visual stimuli, but it is unclear whether neural activity represents valuation signals, stimulus properties, or sensorimotor contingencies. To test the hypothesis that NCL neurons represent stimulus value, we subjected pigeons to a Pavlovian sign-tracking paradigm in which visual cues predicted rewards differing in magnitude (large vs. small) and delay to presentation (short vs. long). Subjects’ strength of conditioned responding to visual cues reliably differentiated between predicted reward types and thus indexed valuation. The majority of NCL neurons discriminated between visual cues, with discriminability peaking shortly after stimulus onset and being maintained at lower levels throughout the stimulus presentation period. However, while some cells’ firing rates correlated with reward value, such neurons were not more frequent than expected by chance. Instead, neurons formed discernible clusters which differed in their preferred visual cue. We propose that this activity pattern constitutes a prerequisite for using visual information in more complex situations e.g. requiring value-based choices. PMID:27762287

  12. Word learning and the cerebral hemispheres: from serial to parallel processing of written words

    PubMed Central

    Ellis, Andrew W.; Ferreira, Roberto; Cathles-Hagan, Polly; Holt, Kathryn; Jarvis, Lisa; Barca, Laura

    2009-01-01

    Reading familiar words differs from reading unfamiliar non-words in two ways. First, word reading is faster and more accurate than reading of unfamiliar non-words. Second, effects of letter length are reduced for words, particularly when they are presented in the right visual field in familiar formats. Two experiments are reported in which right-handed participants read aloud non-words presented briefly in their left and right visual fields before and after training on those items. The non-words were interleaved with familiar words in the naming tests. Before training, naming was slow and error prone, with marked effects of length in both visual fields. After training, fewer errors were made, naming was faster, and the effect of length was much reduced in the right visual field compared with the left. We propose that word learning creates orthographic word forms in the mid-fusiform gyrus of the left cerebral hemisphere. Those word forms allow words to access their phonological and semantic representations on a lexical basis. But orthographic word forms also interact with more posterior letter recognition systems in the middle/inferior occipital gyri, inducing more parallel processing of right visual field words than is possible for any left visual field stimulus, or for unfamiliar non-words presented in the right visual field. PMID:19933140

  13. [Visual Texture Agnosia in Humans].

    PubMed

    Suzuki, Kyoko

    2015-06-01

    Visual object recognition requires the processing of both geometric and surface properties. Patients with occipital lesions may have visual agnosia, which is impairment in the recognition and identification of visually presented objects primarily through their geometric features. An analogous condition involving the failure to recognize an object by its texture may exist, which can be called visual texture agnosia. Here we present two cases with visual texture agnosia. Case 1 had left homonymous hemianopia and right upper quadrantanopia, along with achromatopsia, prosopagnosia, and texture agnosia, because of damage to his left ventromedial occipitotemporal cortex and right lateral occipito-temporo-parietal cortex due to multiple cerebral embolisms. Although he showed difficulty matching and naming textures of real materials, he could readily name visually presented objects by their contours. Case 2 had right lower quadrantanopia, along with impairment in stereopsis and recognition of texture in 2D images, because of subcortical hemorrhage in the left occipitotemporal region. He failed to recognize shapes based on texture information, whereas shape recognition based on contours was well preserved. Our findings, along with those of three reported cases with texture agnosia, indicate that there are separate channels for processing texture, color, and geometric features, and that the regions around the left collateral sulcus are crucial for texture processing.

  14. Spontaneous Resolution of Long-Standing Macular Detachment due to Optic Disc Pit with Significant Visual Improvement.

    PubMed

    Parikakis, Efstratios A; Chatziralli, Irini P; Peponis, Vasileios G; Karagiannis, Dimitrios; Stratos, Aimilianos; Tsiotra, Vasileia A; Mitropoulos, Panagiotis G

    2014-01-01

    To report a case of spontaneous resolution of a long-standing serous macular detachment associated with an optic disc pit, leading to significant visual improvement. A 63-year-old female presented with a 6-month history of blurred vision and micropsia in her left eye. Her best-corrected visual acuity was 6/24 in the left eye, and fundoscopy revealed serous macular detachment associated with optic disc pit, which was confirmed by optical coherence tomography (OCT). The patient was offered vitrectomy as a treatment alternative, but she preferred to be reviewed conservatively. Three years after initial presentation, neither macular detachment nor subretinal fluid was evident in OCT, while the inner segment/outer segment (IS/OS) junction line was intact. Her visual acuity was improved from 6/24 to 6/12 in her left eye, remaining stable at the 6-month follow-up after resolution. We present a case of spontaneous resolution of a long-standing macular detachment associated with an optic disc pit with significant visual improvement, postulating that the integrity of the IS/OS junction line may be a prognostic factor for final visual acuity and suggesting OCT as an indicator of visual prognosis and the probable necessity of a surgical management.

  15. Quantum mechanical wavefunction: visualization at undergraduate level

    NASA Astrophysics Data System (ADS)

    Chhabra, Mahima; Das, Ritwick

    2017-01-01

    Quantum mechanics (QM) forms the most crucial ingredient of modern-era physical science curricula at undergraduate level. The abstract ideas involved in QM related concepts pose a challenge towards appropriate visualization as a consequence of their counter-intuitive nature and lack of experiment-assisted visualization tools. At the heart of the quantum mechanical formulation lies the concept of ‘wavefunction’, which forms the basis for understanding the behavior of physical systems. At undergraduate level, the concept of ‘wavefunction’ is introduced in an abstract framework using mathematical tools and therefore opens up an enormous scope for alternative conceptions and erroneous visualization. The present work is an attempt towards exploring the visualization models constructed by undergraduate students for appreciating the concept of ‘wavefunction’. We present a qualitative analysis of the data obtained from administering a questionnaire containing four visualization based questions on the topic of ‘wavefunction’ to a group of ten undergraduate-level students at an institute in India which excels in teaching and research of basic sciences. Based on the written responses, all ten students were interviewed in detail to unravel the exact areas of difficulty in visualization of ‘wavefunction’. The outcome of present study not only reveals the gray areas in students’ conceptualization, but also provides a plausible route to address the issues at the pedagogical level within the classroom.

  16. Visualization case studies : a summary of three transportation applications of visualization

    DOT National Transportation Integrated Search

    2007-11-30

    The three case studies presented in "Visualization Case Studies" are intended to be helpful to transportation agencies in identifying effective techniques for enhancing and streamlining the project development process, including public outreach activ...

  17. Teaching Technology Education to Visually Impaired Students.

    ERIC Educational Resources Information Center

    Mann, Rene

    1987-01-01

    Discusses various types of visual impairments and how the learning environment can be adapted to limit their effect. Presents suggestions for adapting industrial arts laboratory activities to maintain safety standards while allowing the visually impaired to participate. (CH)

  18. Creative Visualization Activities.

    ERIC Educational Resources Information Center

    Fugitt, Eva D.

    1986-01-01

    Presents a series of classroom exercises and activities that stimulate children's creativity through the use of visualization. Discusses procedures for guided imagery and offers some examples of "trips" to imaginary places. Proposes visualization as a warm-up exercise before art lessons. (DR)

  19. View-Dependent Streamline Deformation and Exploration

    PubMed Central

    Tong, Xin; Edwards, John; Chen, Chun-Ming; Shen, Han-Wei; Johnson, Chris R.; Wong, Pak Chung

    2016-01-01

    Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few streams risks losing important features. We propose a new streamline exploration approach by visually manipulating the cluttered streamlines by pulling visible layers apart and revealing the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual clutter in 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By using a lens metaphor of different shapes to select the transition zone of the targeted area interactively, the users can move their focus and examine the vector or tensor field freely. PMID:26600061

  20. Voxel Datacubes for 3D Visualization in Blender

    NASA Astrophysics Data System (ADS)

    Gárate, Matías

    2017-05-01

    The growth of computational astrophysics and the complexity of multi-dimensional data sets evidence the need for new versatile visualization tools for both the analysis and presentation of the data. In this work, we show how to use the open-source software Blender as a three-dimensional (3D) visualization tool to study and visualize numerical simulation results, focusing on astrophysical hydrodynamic experiments. With a datacube as input, the software can generate a volume rendering of the 3D data, show the evolution of a simulation in time, and perform a fly-around camera animation to highlight the points of interest. We explain the process to import simulation outputs into Blender using the voxel data format, and how to set up a visualization scene in the software interface. This method allows scientists to perform a complementary visual analysis of their data and display their results in an appealing way, both for outreach and science presentations.
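
    The datacube-to-voxel conversion described in this record can be sketched in a few lines. The sketch below is hypothetical and not taken from the paper; it assumes the community .bvox convention (a header of four little-endian int32 values nx, ny, nz, nframes, followed by float32 voxel values normalized to [0, 1]), which is one common way to feed volumetric data to Blender's voxel texture.

```python
import struct
import numpy as np

def datacube_to_bvox(cube, path):
    """Write a 3D NumPy array as a .bvox voxel file for Blender.

    Assumed layout (community convention, not an official Blender spec):
    four little-endian int32 values (nx, ny, nz, nframes) followed by
    the voxel values as float32 in x-fastest order.
    """
    cube = np.asarray(cube, dtype=np.float32)
    nz, ny, nx = cube.shape
    # Normalize densities to [0, 1] so the voxel texture maps them directly.
    lo, hi = cube.min(), cube.max()
    if hi > lo:
        cube = (cube - lo) / (hi - lo)
    with open(path, "wb") as f:
        f.write(struct.pack("<4i", nx, ny, nz, 1))  # single time frame
        f.write(cube.tobytes())  # C order: x varies fastest within each slice

# Example: a small Gaussian blob as a stand-in for a simulation output
z, y, x = np.mgrid[-1:1:32j, -1:1:32j, -1:1:32j]
datacube_to_bvox(np.exp(-(x**2 + y**2 + z**2) / 0.1), "blob.bvox")
```

    A file written this way can then be loaded in Blender as a voxel data texture on a volume material; a time series of frames would simply increase nframes and append the extra cubes.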

  1. View-Dependent Streamline Deformation and Exploration.

    PubMed

    Tong, Xin; Edwards, John; Chen, Chun-Ming; Shen, Han-Wei; Johnson, Chris R; Wong, Pak Chung

    2016-07-01

    Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few streamlines risks losing important features. We propose a new streamline exploration approach that visually manipulates cluttered streamlines, pulling visible layers apart to reveal the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual clutter in 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By interactively selecting the transition zone of the targeted area with a lens metaphor of different shapes, users can move their focus and examine the vector or tensor field freely.

  2. An Experimental Analysis of Memory Processing

    PubMed Central

    Wright, Anthony A

    2007-01-01

    Rhesus monkeys were trained and tested in visual and auditory list-memory tasks with sequences of four travel pictures or four natural/environmental sounds followed by single test items. Acquisitions of the visual list-memory task are presented. Visual recency (last item) memory diminished with retention delay, and primacy (first item) memory strengthened. Capuchin monkeys, pigeons, and humans showed similar visual-memory changes. Rhesus learned an auditory memory task and showed octave generalization for some lists of notes—tonal, but not atonal, musical passages. In contrast with visual list memory, auditory primacy memory diminished with delay and auditory recency memory strengthened. Manipulations of interitem intervals, list length, and item presentation frequency revealed proactive and retroactive inhibition among items of individual auditory lists. Repeating visual items from prior lists produced interference (on nonmatching tests) revealing how far back memory extended. The possibility of using the interference function to separate familiarity vs. recollective memory processing is discussed. PMID:18047230

  3. Altered saccadic targets when processing facial expressions under different attentional and stimulus conditions.

    PubMed

    Boutsen, Frank A; Dvorak, Justin D; Pulusu, Vinay K; Ross, Elliott D

    2017-04-01

    Depending on a subject's attentional bias, robust changes in emotional perception occur when facial blends (different emotions expressed on upper/lower face) are presented tachistoscopically. If no instructions are given, subjects overwhelmingly identify the lower facial expression when blends are presented to either visual field. If asked to attend to the upper face, subjects overwhelmingly identify the upper facial expression in the left visual field but remain slightly biased to the lower facial expression in the right visual field. The current investigation sought to determine whether differences in initial saccadic targets could help explain the perceptual biases described above. Ten subjects were presented with full and blend facial expressions under different attentional conditions. No saccadic differences were found for left versus right visual field presentations or for full facial versus blend stimuli. When asked to identify the presented emotion, saccades were directed to the lower face. When asked to attend to the upper face, saccades were directed to the upper face. When asked to attend to the upper face and try to identify the emotion, saccades were directed to the upper face but to a lesser degree. Thus, saccadic behavior supports the concept that there are cognitive-attentional pre-attunements when subjects visually process facial expressions. However, these pre-attunements do not fully explain the perceptual superiority of the left visual field for identifying the upper facial expression when facial blends are presented tachistoscopically. Hence other perceptual factors must be in play, such as the phenomenon of virtual scanning. Published by Elsevier Ltd.

  4. Comparison of the clinical presentation and visual outcome in open globe injuries in adults and children over 30 months.

    PubMed

    Gupta, Arvind; Srinivasan, Renuka; Babu, K Ramesh; Setia, Sajita

    2010-01-01

    To compare the clinical presentation and final visual outcome of open globe injuries in children and adults in a referral hospital over a 30-month period. This is an institution-based prospective study of open globe injury cases presenting in the emergency department between July 2003 and December 2005. Patients were divided into 2 groups: group 1, children (2-15 years), and group 2, adults (>15 years). All the patients were admitted and emergency surgical interventions were undertaken. The clinical features at presentation and the final visual acuity are compared. Chi-square and Fisher exact tests were used for statistical analysis. Ninety and 84 patients were included in group 1 and group 2, respectively. The most common places of injuries were home or while playing outdoor games in group 1 (67%) and the workplace in group 2 (53.5%). The presenting features were significantly more grave in group 2. These included poor presenting visual acuity (p=0.012), vitreous prolapse (p=0.002), presence of relative afferent pupillary defect (p=0.001), and incidence of endophthalmitis (p=0.004). The time interval between injury and surgical intervention was shorter in group 2 (p=0.018). Other features, such as presence of hyphema, uveal tissue prolapse, cataract, intraocular foreign body, and length or location of laceration, were similar in both groups. The final visual outcome was similar in the groups (p = 0.21), with approximately half of the patients achieving vision of 20/60 or better in each group. The majority of injuries in children and adults occurred in their homes or workplaces, respectively. Although the clinical presentations of open globe injuries were significantly more grave in adults than in children, the final visual outcomes were similar.

  5. Evaluation of a Public Child Eye Health Tertiary Facility for Pediatric Cataract in Southern Nigeria I: Visual Acuity Outcome

    PubMed Central

    Duke, Roseline E.; Adio, Adedayo; Oparah, Sidney K.; Odey, Friday; Eyo, Okon A.

    2016-01-01

    Purpose: A retrospective study of the outcome of congenital and developmental cataract surgery was conducted in a public child eye health tertiary facility in children <16 years of age in Southern Nigeria, as part of an evaluation. Materials and Method: Manual Small Incision Cataract Surgery with or without anterior vitrectomy was performed. The outcome measures were visual acuity (VA) and change (gain) in visual acuity. The age of the child at onset, duration of delay in presentation, ocular co-morbidity, non-ocular co-morbidity, gender, and pre-operative visual acuity were matched with post-operative visual acuity. A total of 66 children were studied for a period of six weeks following surgery. Results: Forty-eight (72.7%) children had bilateral congenital cataracts and 18 (27.3%) children had bilateral developmental cataracts. There were 38 (57.6%) males and 28 (42.4%) females in the study. Thirty-five (53%) children had a good visual outcome (normal vision range 6/6-6/18) post-operatively. The number of children with blindness (vision <3/60) decreased from 61 (92.4%) pre-operatively to 4 (6.1%) post-operatively. Post-operative complications occurred in 6.8% of cases six weeks after surgery. Delayed presentation had an inverse relationship with change (gain) in visual acuity (r = -0.342; p = 0.005). Pre-operative visual acuity had a positive relationship with post-operative change (gain) in visual acuity (r = 0.618; p < 0.001). Conclusion: The predictors of change (gain) in visual acuity in our study were delayed presentation and pre-operative VA. Cataract surgery in children showed clinical benefit. PMID:27347247

  6. Investigating the role of visual and auditory search in reading and developmental dyslexia

    PubMed Central

    Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane

    2013-01-01

    It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements is linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a “serial” search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d′) strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggests that some visual mechanisms engaged in “serial” search contribute to reading and orthographic knowledge via VA Span skills regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills. PMID:24093014

  7. Investigating the role of visual and auditory search in reading and developmental dyslexia.

    PubMed

    Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane

    2013-01-01

    It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements is linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a "serial" search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d') strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggests that some visual mechanisms engaged in "serial" search contribute to reading and orthographic knowledge via VA Span skills regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills.

  8. Functional and visual acuity outcomes of cataract surgery in Timor-Leste (East Timor).

    PubMed

    Naidu, Girish; Correia, Marcelino; Nirmalan, Praveen; Verma, Nitin; Thomas, Ravi

    2014-12-01

    To report functional outcomes following cataract surgery in Timor-Leste. Pre- and post-intervention study measuring visual function improvement following cataract surgery. Presenting visual acuity (VA) was measured and visual function documented using the Indian vision function questionnaire (IND-VFQ). All 174 persons undergoing cataract surgery from November 2009 to January 2011 in Timor-Leste were included. Mean age was 65.4 years; 113 (64.9%) were male, 143 (82.1%) were from a rural background and 151 (86.8%) were illiterate. Pre-operatively, 77 of 174 patients (44.3%, 95% confidence interval, CI, 37.0-51.7%) were blind (VA ≤3/60), 77 (44.3%, 95% CI 37.0-51.7%) were visually impaired (VA <6/18 to >3/60), while 20 (11.5%, 95% CI 7.4-16.9%) had presenting acuity ≥6/18 in the better eye. Following surgery, significant improvement in visual function was demonstrated by effect sizes of 2.8, 3.7 and 3.9 in the domains of general functioning, psychosocial impact and visual symptoms, respectively. Four weeks following surgery, 85 patients (48.9%, 95% CI 41.5-66.3%) had a presenting VA ≥6/18, 74 (42.5%, 95% CI 35.3-45.9%) were visually impaired and 15 (8.6%, 95% CI 5.0-13.6%) were blind. IND-VFQ improvement occurred even in patients remaining visually impaired or blind following surgery. In this setting, cataract surgery led to a significant improvement in visual function but the VA results did not meet World Health Organization quality criteria. IND-VFQ results, although complementary to clinical VA outcomes, did not, in isolation, reflect the need to improve program quality.

  9. Creating Visual Aids with Graphic Organisers on an Infinite Canvas--The Impact on the Presenter

    ERIC Educational Resources Information Center

    Casteleyn, Jordi; Mottart, Andre; Valcke, Martin

    2015-01-01

    Instead of the traditional set of slides, the visual aids of a presentation can now be graphic organisers (concept maps, knowledge maps, mind maps) on an infinite canvas. Constructing graphic organisers has a beneficial impact on learning, but this topic has not been studied in the context of giving a presentation. The present study examined this…

  10. Development of visual category selectivity in ventral visual cortex does not require visual experience

    PubMed Central

    van den Hurk, Job; Van Baelen, Marc; Op de Beeck, Hans P.

    2017-01-01

    To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls. These findings were confirmed in a subset of blind participants born without eyes and thus deprived of all light perception since conception. The sounds also could be decoded in primary visual and primary auditory cortex, but these regions did not sustain generalization across modalities. Surprisingly, although not as strong as visual responses, selectivity for auditory stimulation in visual cortex was stronger in blind individuals than in controls. The opposite was observed in primary auditory cortex. Overall, we demonstrated a striking similarity in the cortical response layout of VTC in blind individuals and sighted controls, demonstrating that the overall category-selective map in extrastriate cortex develops independently from visual experience. PMID:28507127

  11. Hemispheric asymmetry of liking for representational and abstract paintings.

    PubMed

    Nadal, Marcos; Schiavi, Susanna; Cattaneo, Zaira

    2017-10-13

    Although the neural correlates of the appreciation of aesthetic qualities have been the target of much research in the past decade, few experiments have explored the hemispheric asymmetries in underlying processes. In this study, we used a divided visual field paradigm to test for hemispheric asymmetries in men's and women's preference for abstract and representational artworks. Both male and female participants liked representational paintings more when presented in the right visual field, whereas preference for abstract paintings was unaffected by presentation hemifield. We hypothesize that this result reflects a facilitation of the sort of visual processing relevant to laypeople's liking for art, specifically the local processing of highly informative object features, when artworks are presented in the right visual field, given the left hemisphere's advantage in processing such features.

  12. Outcome of endoscopic trans-ethmosphenoid optic canal decompression for indirect traumatic optic neuropathy in children.

    PubMed

    Yu, Bo; Chen, Yingbai; Ma, Yingjie; Tu, Yunhai; Wu, Wencan

    2018-06-26

    To evaluate the safety and outcomes of endoscopic trans-ethmosphenoid optic canal decompression (ETOCD) for children with indirect traumatic optic neuropathy (ITON). From July 1st, 2008 to July 1st, 2015, 62 children diagnosed with ITON who underwent ETOCD were reviewed. The main outcome measure was improvement in visual acuity after treatment. Altogether 62 children (62 eyes) with a mean age of 11.26 ± 4.14 years were included. Thirty-three (53.2%) of them had residual vision before surgery while 29 (46.8%) had no light perception (NLP). The overall visual acuity improvement rate after surgery was 54.84%. The improvement rate of patients with residual vision (69.70%) was significantly higher than that of patients with NLP (37.9%) (P = 0.012). However, no significant difference was shown among patients with different levels of residual vision (P = 0.630). The presence of orbital and/or optic canal fracture and hemorrhage within the post-ethmoid and/or sphenoid sinus resulted in poor postoperative visual acuity, whereas the duration of presenting complaints did not affect the final visual outcome. Three patients developed cerebrospinal fluid rhinorrhea and one encountered cavernous sinus hemorrhage during surgery. No other severe complications were observed. Children with residual vision had a better postoperative visual prognosis and benefited more from ETOCD than children with NLP. Intervention performed in children presenting even after 7 days from the injury did not influence the final visual outcome, although this needs to be reassessed in children presenting long after the injury. Treatment should therefore still be recommended even in cases of delayed presentation to hospital.

  13. Detecting delay in visual feedback of an action as a monitor of self recognition.

    PubMed

    Hoover, Adria E N; Harris, Laurence R

    2012-10-01

    How do we distinguish "self" from "other"? The correlation between willing an action and seeing it occur is an important cue. We exploited the fact that this correlation needs to occur within a restricted temporal window in order to obtain a quantitative assessment of when a body part is identified as "self". We measured the threshold and sensitivity (d') for detecting a delay between movements of the finger (of both the dominant and non-dominant hands) and visual feedback as seen from four visual perspectives (the natural view, and mirror-reversed and/or inverted views). Each trial consisted of one presentation with minimum delay and another with a delay of between 33 and 150 ms. Participants indicated which presentation contained the delayed view. We varied the amount of efference copy available for this task by comparing performances for discrete movements and continuous movements. Discrete movements are associated with a stronger efference copy. Sensitivity to detect asynchrony between visual and proprioceptive information was significantly higher when movements were viewed from a "plausible" self perspective compared with when the view was reversed or inverted. Further, we found differences in performance between dominant and non-dominant hand finger movements across the continuous and single movements. Performance varied with the viewpoint from which the visual feedback was presented and on the efferent component such that optimal performance was obtained when the presentation was in the normal natural orientation and clear efferent information was available. Variations in sensitivity to visual/non-visual temporal incongruence with the viewpoint in which a movement is seen may help determine the arrangement of the underlying visual representation of the body.
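
    The sensitivity index (d') reported in this record is a standard signal-detection measure and can be computed in a few lines. The sketch below is illustrative only: the trial counts are invented, and the log-linear correction shown is one common convention for avoiding infinite z-scores, not necessarily the authors' choice.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps the inverse
    normal transform finite when a raw rate would be 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical delay-detection observer:
# 40 hits / 10 misses on delayed views, 15 false alarms / 35 correct rejections
sensitivity = d_prime(40, 10, 15, 35)
```

    Higher d' values in the "plausible self perspective" conditions would correspond to the better asynchrony detection the abstract describes.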

  14. The Brain and Learning: Examining the Connection between Brain Activity, Spatial Intelligence, and Learning Outcomes in Online Visual Instruction

    ERIC Educational Resources Information Center

    Lee, Hyangsook

    2013-01-01

    The purpose of the study was to compare 2D and 3D visual presentation styles, both still frame and animation, on subjects' brain activity measured by the amplitude of EEG alpha wave and on their recall to see if alpha power and recall differ significantly by depth and movement of visual presentation style and by spatial intelligence. In addition,…

  15. Effect of Varied Computer Based Presentation Sequences on Facilitating Student Achievement.

    ERIC Educational Resources Information Center

    Noonen, Ann; Dwyer, Francis M.

    1994-01-01

    Examines the effectiveness of visual illustrations in computer-based education, the effect of order of visual presentation, and whether screen design affects students' use of graphics and text. Results indicate that order of presentation and choice of review did not influence student achievement; however, when given a choice, students selected the…

  16. Visual Communication in PowerPoint Presentations in Applied Linguistics

    ERIC Educational Resources Information Center

    Kmalvand, Ayad

    2014-01-01

    PowerPoint knowledge presentation as a digital genre has established itself as the main software by which the findings of theses are disseminated in the academic settings. Although the importance of PowerPoint presentations is typically realized in academic settings like lectures, conferences, and seminars, the study of the visual features of…

  17. Effects of Presentation Mode on Veridical and False Memory in Individuals with Intellectual Disability

    ERIC Educational Resources Information Center

    Carlin, Michael; Toglia, Michael P.; Belmonte, Colleen; DiMeglio, Chiara

    2012-01-01

    In the present study the effects of visual, auditory, and audio-visual presentation formats on memory for thematically constructed lists were assessed in individuals with intellectual disability and mental age-matched children. The auditory recognition test included target items, unrelated foils, and two types of semantic lures: critical related…

  18. When apperceptive agnosia is explained by a deficit of primary visual processing.

    PubMed

    Serino, Andrea; Cecere, Roberto; Dundon, Neil; Bertini, Caterina; Sanchez-Castaneda, Cristina; Làdavas, Elisabetta

    2014-03-01

    Visual agnosia is a deficit in shape perception, affecting figure, object, face and letter recognition. Agnosia is usually attributed to lesions to high-order modules of the visual system, which combine visual cues to represent the shape of objects. However, most previously reported agnosia cases presented with visual field (VF) defects and poor primary visual processing. The present case study aims to verify whether form agnosia could be explained by a deficit in basic visual functions, rather than by a deficit in high-order shape recognition. Patient SDV suffered a bilateral lesion of the occipital cortex due to anoxia. When tested, he could navigate, interact with others, and was autonomous in daily life activities. However, he could not recognize objects from drawings and figures, read or recognize familiar faces. He was able to recognize objects by touch and people from their voice. Assessments of visual functions showed blindness at the centre of the VF, up to almost 5°, bilaterally, with better stimulus detection in the periphery. Colour and motion perception was preserved. Psychophysical experiments showed that SDV's visual recognition deficits were not explained by poor spatial acuity or by the crowding effect. Rather, a severe deficit in line orientation processing might be a key mechanism explaining SDV's agnosia. Line orientation processing is a basic function of primary visual cortex neurons, necessary for detecting "edges" of visual stimuli to build up a "primal sketch" for object recognition. We propose, therefore, that some forms of visual agnosia may be explained by deficits in basic visual functions due to widespread lesions of the primary visual areas, affecting primary levels of visual processing. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Intraorbital foreign body projectile as a consideration for unilateral pupillary defect

    PubMed Central

    2012-01-01

    Intraorbital foreign bodies are frequently the result of high-velocity injuries with varying clinical presentations. The resultant diagnosis, management, and outcome depend on the type of foreign body present, anatomical location, tissue disruption, and symptomatology. We report a patient who presented to the Emergency Department with a large intraorbital foreign body projectile that was not evident clinically but was found incidentally on computed tomography and subsequent plain films. The emergency room physician needs to be aware of the differential diagnosis of a unilateral irregular pupil with or without visual acuity changes. The differential diagnosis for any trauma patient with an irregular pupil and significant visual loss must include intraorbital foreign body and associated injury to the optic nerve, either directly or via orbital compartment syndrome secondary to hemorrhage and/or edema. Patients with significantly decreased visual acuity may benefit from emergent surgical intervention. In patients with intact visual acuity, the patient must be monitored closely for any visual changes, as these may require emergent surgical intervention. PMID:22390406

  20. Sensor-Based Assistive Devices for Visually-Impaired People: Current Status, Challenges, and Future Directions

    PubMed Central

    Elmannai, Wafa; Elleithy, Khaled

    2017-01-01

    The World Health Organization (WHO) reported that there are 285 million visually-impaired people worldwide. Among these individuals, there are 39 million who are totally blind. There have been several systems designed to support visually-impaired people and to improve the quality of their lives. Unfortunately, most of these systems are limited in their capabilities. In this paper, we present a comparative survey of the wearable and portable assistive devices for visually-impaired people in order to show the progress in assistive technology for this group of people. Thus, the contribution of this literature survey is to discuss in detail the most significant devices that are presented in the literature to assist this population and highlight the improvements, advantages, disadvantages, and accuracy. Our aim is to address and present most of the issues of these systems to pave the way for other researchers to design devices that ensure safety and independent mobility to visually-impaired people. PMID:28287451

  1. Implicit recognition based on lateralized perceptual fluency.

    PubMed

    Vargas, Iliana M; Voss, Joel L; Paller, Ken A

    2012-02-06

    In some circumstances, accurate recognition of repeated images in an explicit memory test is driven by implicit memory. We propose that this "implicit recognition" results from perceptual fluency that influences responding without awareness of memory retrieval. Here we examined whether recognition would vary if images appeared in the same or different visual hemifield during learning and testing. Kaleidoscope images were briefly presented left or right of fixation during divided-attention encoding. Presentation in the same visual hemifield at test produced higher recognition accuracy than presentation in the opposite visual hemifield, but only for guess responses. These correct guesses likely reflect a contribution from implicit recognition, given that when the stimulated visual hemifield was the same at study and test, recognition accuracy was higher for guess responses than for responses with any level of confidence. The dramatic difference in guessing accuracy as a function of lateralized perceptual overlap between study and test suggests that implicit recognition arises from memory storage in visual cortical networks that mediate repetition-induced fluency increments.

  2. Visualizing Dynamic Bitcoin Transaction Patterns.

    PubMed

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J

    2016-06-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network.
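
    The force-directed graph visualization this record relies on follows a well-known general scheme. The sketch below is a minimal Fruchterman-Reingold-style layout, not the authors' large-scale real-time system; the toy "transaction graph" and all parameter values are invented for illustration.

```python
import math
import random

def force_layout(nodes, edges, iterations=200, k=1.0, seed=0):
    """Minimal force-directed (Fruchterman-Reingold-style) layout.

    Every pair of nodes repels with force k^2/d; nodes joined by an
    edge attract with force d^2/k. A cooling "temperature" caps the
    per-step displacement so the layout settles.
    """
    rng = random.Random(seed)
    pos = {n: [rng.uniform(-1, 1), rng.uniform(-1, 1)] for n in nodes}
    t = 0.1
    for _ in range(iterations):
        disp = {n: [0.0, 0.0] for n in nodes}
        for a in nodes:                     # pairwise repulsion
            for b in nodes:
                if a == b:
                    continue
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d = max(math.hypot(dx, dy), 1e-9)
                f = k * k / d
                disp[a][0] += dx / d * f
                disp[a][1] += dy / d * f
        for a, b in edges:                  # attraction along edges
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = max(math.hypot(dx, dy), 1e-9)
            f = d * d / k
            disp[a][0] -= dx / d * f
            disp[a][1] -= dy / d * f
            disp[b][0] += dx / d * f
            disp[b][1] += dy / d * f
        for n in nodes:                     # move, capped by temperature
            dx, dy = disp[n]
            d = max(math.hypot(dx, dy), 1e-9)
            pos[n][0] += dx / d * min(d, t)
            pos[n][1] += dy / d * min(d, t)
        t *= 0.99                           # cool
    return pos

# Toy graph: two tightly connected clusters joined by one bridge edge,
# a crude stand-in for two interacting groups of transaction addresses.
nodes = list(range(6))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
layout = force_layout(nodes, edges)
```

    In a pattern-discovery setting like the one described, repeated structural motifs (e.g. fan-in/fan-out laundering chains) become visually salient because the same local force balance produces the same local geometry.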

  3. Visualizing Dynamic Bitcoin Transaction Patterns

    PubMed Central

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J.

    2016-01-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data, so as to preserve a fuller understanding of activity within the network, remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network. PMID:27441715

  4. ICASE/LaRC Symposium on Visualizing Time-Varying Data

    NASA Technical Reports Server (NTRS)

    Banks, D. C. (Editor); Crockett, T. W. (Editor); Stacy, K. (Editor)

    1996-01-01

    Time-varying datasets present difficult problems for both analysis and visualization. For example, the data may be terabytes in size, distributed across mass storage systems at several sites, with time scales ranging from femtoseconds to eons. In response to these challenges, ICASE and NASA Langley Research Center, in cooperation with ACM SIGGRAPH, organized the first symposium on visualizing time-varying data. The purpose was to bring the producers of time-varying data together with visualization specialists to assess open issues in the field, present new solutions, and encourage collaborative problem-solving. These proceedings contain the peer-reviewed papers which were presented at the symposium. They cover a broad range of topics, from methods for modeling and compressing data to systems for visualizing CFD simulations and World Wide Web traffic. Because the subject matter is inherently dynamic, a paper proceedings cannot adequately convey all aspects of the work. The accompanying video proceedings provide additional context for several of the papers.

  5. Clinical implications of parallel visual pathways.

    PubMed

    Bassi, C J; Lehmkuhle, S

    1990-02-01

    Visual information travels from the retina to visual cortical areas along at least two parallel pathways. In this paper, anatomical and physiological evidence is presented to demonstrate the existence of, and trace, these two pathways throughout the visual systems of the cat, primate, and human. Physiological and behavioral experiments are discussed which establish that these two pathways are differentially sensitive to stimuli that vary in spatial and temporal frequency. One pathway (M-pathway) is more sensitive to coarse visual form that is modulated or moving at fast rates, whereas the other pathway (P-pathway) is more sensitive to spatial detail that is stationary or moving at slow rates. This difference between the M- and P-pathways is related to some spatial and temporal effects observed in humans. Furthermore, evidence is presented that certain diseases selectively compromise the functioning of the M- or P-pathways (e.g., glaucoma, Alzheimer's disease, and anisometropic amblyopia), and some of the spatial and temporal deficits observed in these patients are presented within the context of the dysfunction of the M- or P-pathway.

  6. Shades of yellow: interactive effects of visual and odour cues in a pest beetle

    PubMed Central

    Stevenson, Philip C.; Belmain, Steven R.

    2016-01-01

    Background: The visual ecology of pest insects is poorly studied compared to the role of odour cues in determining their behaviour. Furthermore, the combined effects of both odour and vision on insect orientation are frequently ignored, but could impact behavioural responses. Methods: A locomotion compensator was used to evaluate use of different visual stimuli by a major coleopteran pest of stored grains (Sitophilus zeamais), with and without the presence of host odours (known to be attractive to this species), in an open-loop setup. Results: Some visual stimuli—in particular, one shade of yellow, solid black and high-contrast black-against-white stimuli—elicited positive orientation behaviour from the beetles in the absence of odour stimuli. When host odours were also present, at 90° to the source of the visual stimulus, the beetles presented with yellow and vertical black-on-white grating patterns changed their walking course and typically adopted a path intermediate between the two stimuli. The beetles presented with a solid black-on-white target continued to orient more strongly towards the visual than the odour stimulus. Discussion: Visual stimuli can strongly influence orientation behaviour, even in species where use of visual cues is sometimes assumed to be unimportant, while the outcomes from exposure to multimodal stimuli are unpredictable and need to be determined under differing conditions. The importance of the two modalities of stimulus (visual and olfactory) in food location is likely to depend upon relative stimulus intensity and motivational state of the insect. PMID:27478707

  7. High resolution renderings and interactive visualization of the 2006 Huntington Beach experiment

    NASA Astrophysics Data System (ADS)

    Im, T.; Nayak, A.; Keen, C.; Samilo, D.; Matthews, J.

    2006-12-01

    The Visualization Center at the Scripps Institution of Oceanography investigates innovative ways to graphically represent interactive 3D virtual landscapes and to produce high-resolution, high-quality renderings of Earth sciences data and of the sensors and instruments used to collect the data. Among the Visualization Center's most recent work is the visualization of the Huntington Beach experiment, a study launched in July 2006 by the Southern California Ocean Observing System (http://www.sccoos.org/) to record and synthesize data of the Huntington Beach coastal region. Researchers and students at the Visualization Center created visual presentations that combine bathymetric data provided by SCCOOS with USGS aerial photography and with 3D polygonal models of sensors created in Maya into an interactive 3D scene using the Fledermaus suite of visualization tools (http://www.ivs3d.com). In addition, the Visualization Center has produced high definition (HD) animations of SCCOOS sensor instruments (e.g. REMUS, drifters, spray glider, nearshore mooring, OCSD/USGS mooring and CDIP mooring) using the Maya modeling and animation software, rendered over multiple nodes of the OptIPuter Visualization Cluster at Scripps. These visualizations aim to provide researchers with a broader context of sensor locations relative to geologic characteristics, to serve as an educational resource for informal education settings and for increasing public awareness, and to aid researchers' proposals and presentations. These visualizations are available for download on the Visualization Center website at http://siovizcenter.ucsd.edu/sccoos/hb2006.php.

  8. Do Dyslexic Individuals Present a Reduced Visual Attention Span? Evidence from Visual Recognition Tasks of Non-Verbal Multi-Character Arrays

    ERIC Educational Resources Information Center

    Yeari, Menahem; Isser, Michal; Schiff, Rachel

    2017-01-01

    A controversy has recently developed regarding the hypothesis that developmental dyslexia may be caused, in some cases, by a reduced visual attention span (VAS). To examine this hypothesis, independent of phonological abilities, researchers tested the ability of dyslexic participants to recognize arrays of unfamiliar visual characters. Employing…

  9. Using the PyMOL Application to Reinforce Visual Understanding of Protein Structure

    ERIC Educational Resources Information Center

    Rigsby, Rachel E.; Parker, Alison B.

    2016-01-01

    Visualization of chemical concepts can be challenging for many students. This is arguably a critical skill for beginning students of biochemistry to develop, since new information is often presented visually in the form of textbook figures. It is recommended that visual literacy be explicitly taught in the classroom rather than assuming that…

  10. The Reliability of the CVI Range: A Functional Vision Assessment for Children with Cortical Visual Impairment

    ERIC Educational Resources Information Center

    Newcomb, Sandra

    2010-01-01

    Children who are identified as visually impaired frequently have a functional vision assessment as one way to determine how their visual impairment affects their educational performance. The CVI Range is a functional vision assessment for children with cortical visual impairment. The purpose of the study presented here was to examine the…

  11. An Interactive Approach to Learning and Teaching in Visual Arts Education

    ERIC Educational Resources Information Center

    Tomljenovic, Zlata

    2015-01-01

    The present research focuses on modernising the approach to learning and teaching the visual arts in teaching practice, as well as examining the performance of an interactive approach to learning and teaching in visual arts classes with the use of a combination of general and specific (visual arts) teaching methods. The study uses quantitative…

  12. The Effects of Visual Stimuli on the Spoken Narrative Performance of School-Age African American Children

    ERIC Educational Resources Information Center

    Mills, Monique T.

    2015-01-01

    Purpose: This study investigated the fictional narrative performance of school-age African American children across 3 elicitation contexts that differed in the type of visual stimulus presented. Method: A total of 54 children in Grades 2 through 5 produced narratives across 3 different visual conditions: no visual, picture sequence, and single…

  13. Automatic Guidance of Visual Attention from Verbal Working Memory

    ERIC Educational Resources Information Center

    Soto, David; Humphreys, Glyn W.

    2007-01-01

    Previous studies have shown that visual attention can be captured by stimuli matching the contents of working memory (WM). Here, the authors assessed the nature of the representation that mediates the guidance of visual attention from WM. Observers were presented with either verbal or visual primes (to hold in memory, Experiment 1; to verbalize,…

  14. Visual Environments for CFD Research

    NASA Technical Reports Server (NTRS)

    Watson, Val; George, Michael W. (Technical Monitor)

    1994-01-01

    This viewgraph presentation gives an overview of visual environments for computational fluid dynamics (CFD) research. It includes details on critical needs for the future computing environment, features needed to attain this environment, prospects for changes in the human-computer interface and the impact of the visualization revolution on it, human processing capabilities, and the limits of the personal environment and its extension with computers. Information is given on the need for more 'visual' thinking (including instances of visual thinking), an evaluation of the alternate approaches for and levels of interactive computer graphics, a visual analysis of computational fluid dynamics, and an analysis of visualization software.

  15. Designing Media for Visually-Impaired Users of Refreshable Touch Displays: Possibilities and Pitfalls.

    PubMed

    O'Modhrain, Sile; Giudice, Nicholas A; Gardner, John A; Legge, Gordon E

    2015-01-01

    This paper discusses issues of importance to designers of media for visually impaired users. The paper considers the influence of human factors on the effectiveness of presentation as well as the strengths and weaknesses of tactile, vibrotactile, haptic, and multimodal methods of rendering maps, graphs, and models. The authors, all of whom are visually impaired researchers in this domain, present findings from their own work and work of many others who have contributed to the current understanding of how to prepare and render images for both hard-copy and technology-mediated presentation of Braille and tangible graphics.

  16. The Effects of Mirror Feedback during Target Directed Movements on Ipsilateral Corticospinal Excitability

    PubMed Central

    Yarossi, Mathew; Manuweera, Thushini; Adamovich, Sergei V.; Tunik, Eugene

    2017-01-01

    Mirror visual feedback (MVF) training is a promising technique to promote activation in the lesioned hemisphere following stroke, and aid recovery. However, current outcomes of MVF training are mixed, in part, due to variability in the task undertaken during MVF. The present study investigated the hypothesis that movements directed toward visual targets may enhance MVF modulation of motor cortex (M1) excitability ipsilateral to the trained hand compared to movements without visual targets. Ten healthy subjects participated in a 2 × 2 factorial design in which feedback (veridical, mirror) and presence of a visual target (target present, target absent) for a right index-finger flexion task were systematically manipulated in a virtual environment. To measure M1 excitability, transcranial magnetic stimulation (TMS) was applied to the hemisphere ipsilateral to the trained hand to elicit motor evoked potentials (MEPs) in the untrained first dorsal interosseous (FDI) and abductor digiti minimi (ADM) muscles at rest prior to and following each of four 2-min blocks of 30 movements (B1–B4). Targeted movement kinematics without visual feedback was measured before and after training to assess learning and transfer. FDI MEPs were decreased in B1 and B2 when movements were made with veridical feedback and visual targets were absent. FDI MEPs were decreased in B2 and B3 when movements were made with mirror feedback and visual targets were absent. FDI MEPs were increased in B3 when movements were made with mirror feedback and visual targets were present. Significant MEP changes were not present for the uninvolved ADM, suggesting a task-specific effect. Analysis of kinematics revealed learning occurred in visual target-directed conditions, but transfer was not sensitive to mirror feedback. Results are discussed with respect to current theoretical mechanisms underlying MVF-induced changes in ipsilateral excitability. PMID:28553218

  17. Representations of temporal information in short-term memory: Are they modality-specific?

    PubMed

    Bratzke, Daniel; Quinn, Katrina R; Ulrich, Rolf; Bausenhart, Karin M

    2016-10-01

    Rattat and Picard (2012) reported that the coding of temporal information in short-term memory is modality-specific, that is, temporal information received via the visual (auditory) modality is stored as a visual (auditory) code. This conclusion was supported by modality-specific interference effects on visual and auditory duration discrimination, which were induced by secondary tasks (visual tracking or articulatory suppression), presented during a retention interval. The present study assessed the stability of these modality-specific interference effects. Our study did not replicate the selective interference pattern but rather indicated that articulatory suppression not only impairs short-term memory for auditory but also for visual durations. This result pattern supports a crossmodal or an abstract view of temporal encoding. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Object representation in the bottlenose dolphin (Tursiops truncatus): integration of visual and echoic information.

    PubMed

    Harley, H E; Roitblat, H L; Nachtigall, P E

    1996-04-01

    A dolphin performed a 3-alternative matching-to-sample task in different modality conditions (visual/echoic, both vision and echolocation; visual, vision only; echoic, echolocation only). In Experiment 1, training occurred in the dual-modality (visual/echoic) condition. Choice accuracy in tests of all conditions was above chance without further training. In Experiment 2, unfamiliar objects with complementary similarity relations in vision and echolocation were presented in single-modality conditions until accuracy was about 70%. When tested in the visual/echoic condition, accuracy immediately rose (95%), suggesting integration across modalities. In Experiment 3, conditions varied between presentation of sample and alternatives. The dolphin successfully matched familiar objects in the cross-modal conditions. These data suggest that the dolphin has an object-based representational system.

  19. Earth Science Multimedia Theater

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.

    1998-01-01

    The presentation will begin with the latest 1998 NASA Earth Science Vision for the next 25 years. A compilation of the 10 days of animations of Hurricane Georges that NASA supplied daily to network television will be shown, along with NASA's visualizations of Hurricane Bonnie which appeared in the Sept 7, 1998 issue of TIME magazine. Highlights will be shown from the NASA hurricane visualization resource video tape that has been used repeatedly this season on network TV. Results will be presented from a new paper on automatic wind measurements in Hurricane Luis from 1-min GOES images that will appear in the October BAMS. The visualizations are produced by the Goddard Visualization & Analysis Laboratory and Scientific Visualization Studio, as well as other Goddard and NASA groups, using NASA, NOAA, ESA, and NASDA Earth science datasets. Visualizations will be shown from the "Digital-HyperRes-Panorama" Earth Science ETheater'98 recently presented in Tokyo, Paris and Phoenix. The presentation in Paris used an SGI/CRAY Onyx Infinite Reality Super Graphics Workstation at 2560 x 1024 resolution with dual synchronized Epson 7100 video projectors on a 20-ft-wide screen. Earth Science Electronic Theater '99 is being prepared for a December 1st showing at NASA HQ in Washington and a January presentation at the AMS meetings in Dallas. The 1999 version of the Etheater will be triple wide, with a resolution of 3840 x 1024 on a 60-ft-wide screen. Visualizations will also be featured from the new Earth Today Exhibit which was opened by Vice President Gore on July 2, 1998 at the Smithsonian Air & Space Museum in Washington, as well as those presented for possible use at the American Museum of Natural History (NYC), Disney EPCOT, and other venues. New methods are demonstrated for visualizing, interpreting, comparing, organizing and analyzing immense Hyperimage remote sensing datasets and three-dimensional numerical model results.
We call the data from many new Earth sensing satellites, Hyperimage datasets, because they have such high resolution in the spectral, temporal, spatial, and dynamic range domains. The traditional numerical spreadsheet paradigm has been extended to develop a scientific visualization approach for processing Hyperimage datasets and 3D model results interactively. The advantages of extending the powerful spreadsheet style of computation to multiple sets of images and organizing image processing were demonstrated using the Distributed Image SpreadSheet (DISS).
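
    The image-spreadsheet idea described above, where cells hold whole images and a formula cell applies its operation pixel-by-pixel across other cells, can be sketched minimally. This toy is an assumed illustration of the paradigm, not the actual DISS software; the cell names and data are invented.

```python
# A toy "image spreadsheet": each cell holds a whole image (a 2D list),
# and a formula cell applies its operation pixel-by-pixel to other cells.
def pixelwise(op, *images):
    """Apply op across corresponding pixels of equally sized images."""
    rows, cols = len(images[0]), len(images[0][0])
    return [[op(*(img[r][c] for img in images)) for c in range(cols)]
            for r in range(rows)]

sheet = {
    "A1": [[10, 20], [30, 40]],   # e.g. one channel's values at time t
    "A2": [[ 1,  2], [ 3,  4]],   # the same channel at time t+1
}
# B1 = A1 - A2 : a difference image highlighting temporal change.
sheet["B1"] = pixelwise(lambda x, y: x - y, sheet["A1"], sheet["A2"])
print(sheet["B1"])  # [[9, 18], [27, 36]]
```

    Extending the familiar cell-formula style to whole images is what lets the same organizing metaphor scale from scalars to Hyperimage datasets.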

  20. Subconscious Visual Cues during Movement Execution Allow Correct Online Choice Reactions

    PubMed Central

    Leukel, Christian; Lundbye-Jensen, Jesper; Christensen, Mark Schram; Gollhofer, Albert; Nielsen, Jens Bo; Taube, Wolfgang

    2012-01-01

    Part of the sensory information is processed by our central nervous system without conscious perception. Subconscious processing has been shown to be capable of triggering motor reactions. In the present study, we asked whether visual information that is not consciously perceived could influence decision-making in a choice reaction task. Ten healthy subjects (28±5 years) executed two different experimental protocols. In the Motor reaction protocol, a visual target cue was shown on a computer screen. Depending on the displayed cue, subjects either had to complete a reaching movement (go-condition) or had to abort the movement (stop-condition). The cue was presented with different display durations (20–160 ms). In the second Verbalization protocol, subjects verbalized what they experienced on the screen. Again, the cue was presented with different display durations. This second protocol tested for conscious perception of the visual cue. The results of this study show that subjects achieved significantly more correct responses in the Motor reaction protocol than in the Verbalization protocol. This difference was only observed at the very short display durations of the visual cue. Since correct responses in the Verbalization protocol required conscious perception of the visual information, our findings imply that the subjects performed correct motor responses to visual cues of which they were not conscious. It is therefore concluded that humans may reach decisions based on subconscious visual information in a choice reaction task. PMID:23049749

  1. Audio-Visual Situational Awareness for General Aviation Pilots

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Lodha, Suresh K.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Weather is one of the major causes of general aviation accidents. Researchers are addressing this problem from various perspectives including improving meteorological forecasting techniques, collecting additional weather data automatically via on-board sensors and "flight" modems, and improving weather data dissemination and presentation. We approach the problem from the improved presentation perspective and propose weather visualization and interaction methods tailored for general aviation pilots. Our system, Aviation Weather Data Visualization Environment (AWE), utilizes information visualization techniques, a direct manipulation graphical interface, and a speech-based interface to improve a pilot's situational awareness of relevant weather data. The system design is based on a user study and feedback from pilots.

  2. Factors affecting the visual outcome in hyphema management in Guinness Eye Center Onitsha.

    PubMed

    Onyekwe, L O

    2008-12-01

    This study aims to determine the complications and outcomes of hyphema treatment and to recommend ways of enhancing good visual outcome. The records of all cases of hyphema seen from 1st January 2001 to 31st December 2005 were reviewed retrospectively. The variables analyzed were the biodata of all the patients, the agents causing hyphema, associated injuries, and complications. Visual acuity at presentation, discharge and last visit was analyzed. Seventy-four patients with hyphema were reviewed. The male:female ratio was 3.5:1. Trauma was the predominant cause of hyphema. The common agents of injury included whip (23.2%) and fist (18.8%). The common complications were secondary glaucoma (52.5%), corneal siderosis (30.0%) and rebleeding (10%). Visual outcome is related to time of presentation, complications and treatment. Significant improvement was achieved following treatment. Hyphema is a common complication of eye injuries. It is commonly associated with other eye injuries such as vitreous haemorrhage and cataract. Common complications include secondary glaucoma, corneal siderosis and rebleeding. Visual outcome is dependent on time of presentation and the severity and nature of complications. Visual outcome can be improved by early presentation, early detection of complications, and appropriate treatment.

  3. Visual selective attention and reading efficiency are related in children.

    PubMed

    Casco, C; Tressoldi, P E; Dellantonio, A

    1998-09-01

    We investigated the relationship between visual selective attention and linguistic performance. Subjects were classified into four categories according to their accuracy in a letter cancellation task involving selective attention. The task consisted of searching for a target letter in a set of background letters, and accuracy was measured as a function of set size. We found that children with the lowest performance in the cancellation task presented a significantly slower reading rate and a higher number of visual reading errors than children with the highest performance. Results also show that these groups of searchers differ significantly in a lexical search task, whereas their performance did not differ in lexical decision and syllable control tasks. The relationship between letter search and reading, as well as the finding that poor readers-searchers also perform poorly on lexical search tasks involving selective attention, suggests that the relationship between letter search and reading difficulty may reflect a deficit in a visual selective attention mechanism that is involved in all these tasks. A deficit in visual attention can be linked to the problems that disabled readers present in the function of the magnocellular stream, which culminates in posterior parietal cortex, an area that plays an important role in guiding visual attention.

  4. Relative Spatial Frequency Processing Drives Hemispheric Asymmetry in Conscious Awareness

    PubMed Central

    Piazza, Elise A.; Silver, Michael A.

    2017-01-01

    Visual stimuli with different spatial frequencies (SFs) are processed asymmetrically in the two cerebral hemispheres. Specifically, low SFs are processed relatively more efficiently in the right hemisphere than the left hemisphere, whereas high SFs show the opposite pattern. In this study, we ask whether these differences between the two hemispheres reflect a low-level division that is based on absolute SF values or a flexible comparison of the SFs in the visual environment at any given time. In a recent study, we showed that conscious awareness of SF information (i.e., visual perceptual selection from multiple SFs simultaneously present in the environment) differs between the two hemispheres. Building upon that result, here we employed binocular rivalry to test whether this hemispheric asymmetry is due to absolute or relative SF processing. In each trial, participants viewed a pair of rivalrous orthogonal gratings of different SFs, presented either to the left or right of central fixation, and continuously reported which grating they perceived. We found that the hemispheric asymmetry in perception is significantly influenced by relative processing of the SFs of the simultaneously presented stimuli. For example, when a medium SF grating and a higher SF grating were presented as a rivalry pair, subjects were more likely to report that they initially perceived the medium SF grating when the rivalry pair was presented in the left visual hemifield (right hemisphere), compared to the right hemifield. However, this same medium SF grating, when it was paired in rivalry with a lower SF grating, was more likely to be perceptually selected when it was in the right visual hemifield (left hemisphere). 
Thus, the visual system’s classification of a given SF as “low” or “high” (and therefore, which hemisphere preferentially processes that SF) depends on the other SFs that are present, demonstrating that relative SF processing contributes to hemispheric differences in visual perceptual selection. PMID:28469585

  5. Detecting and Remembering Simultaneous Pictures in a Rapid Serial Visual Presentation

    ERIC Educational Resources Information Center

    Potter, Mary C.; Fox, Laura F.

    2009-01-01

    Viewers can easily spot a target picture in a rapid serial visual presentation (RSVP), but can they do so if more than 1 picture is presented simultaneously? Up to 4 pictures were presented on each RSVP frame, for 240 to 720 ms/frame. In a detection task, the target was verbally specified before each trial (e.g., "man with violin"); in a…

  6. Data mining and visualization from planetary missions: the VESPA-Europlanet2020 activity

    NASA Astrophysics Data System (ADS)

    Longobardo, Andrea; Capria, Maria Teresa; Zinzi, Angelo; Ivanovski, Stavro; Giardino, Marco; di Persio, Giuseppe; Fonte, Sergio; Palomba, Ernesto; Antonelli, Lucio Angelo; Giommi, Paolo; Europlanet VESPA 2020 Team

    2017-06-01

    This paper presents the VESPA (Virtual European Solar and Planetary Access) activity, developed in the context of the Europlanet 2020 project under Horizon 2020, aimed at providing tools for the analysis and visualization of planetary data returned by space missions. In particular, the activity is focused on minor bodies of the Solar System. The structure of the computation node, the algorithms developed for the analysis of planetary surfaces and cometary comae, and the tools for data visualization are presented.

  7. Visual Data Exploration and Analysis - Report on the Visualization Breakout Session of the SCaLeS Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes; Frank, Randy; Fulcomer, Sam

    Scientific visualization is the transformation of abstract information into images, and it plays an integral role in the scientific process by facilitating insight into observed or simulated phenomena. Visualization as a discipline spans many research areas, from computer science and cognitive psychology to art. Yet the most successful visualization applications are created when close synergistic interactions with domain scientists are part of the algorithmic design and implementation process, leading to visual representations with clear scientific meaning. Visualization is used to explore, to debug, to gain understanding, and as an analysis tool. Visualization is literally everywhere--images are present in this report, on television, on the web, in books and magazines--and the common theme is the ability to present information visually so that it is rapidly assimilated by human observers and transformed into understanding or insight. As an indispensable part of a modern science laboratory, visualization is akin to the biologist's microscope or the electrical engineer's oscilloscope. Whereas the microscope is limited to small specimens or the use of optics to focus light, the power of scientific visualization is virtually limitless: visualization provides the means to examine data at galactic or atomic scales, or at any size in between. Unlike the traditional scientific tools for visual inspection, visualization offers the means to "see the unseeable." Trends in demographics or changes in levels of atmospheric CO2 as a function of greenhouse gas emissions are familiar examples of such unseeable phenomena. Over time, visualization techniques evolve in response to scientific need. Each scientific discipline has its "own language," verbal and visual, used for communication. The visual language for depicting electrical circuits is much different than the visual language for depicting theoretical molecules or trends in the stock market.
    There is no "one visualization tool" that can serve as a panacea for all science disciplines. Instead, visualization researchers work hand in hand with domain scientists as part of the scientific research process to define, create, adapt and refine software that "speaks the visual language" of each scientific domain.

  8. Learning of goal-relevant and -irrelevant complex visual sequences in human V1.

    PubMed

    Rosenthal, Clive R; Mallik, Indira; Caballero-Gaudes, Cesar; Sereno, Martin I; Soto, David

    2018-06-12

    Learning and memory are supported by a network involving the medial temporal lobe and linked neocortical regions. Emerging evidence indicates that primary visual cortex (i.e., V1) may contribute to recognition memory, but this has been tested only with a single visuospatial sequence as the target memorandum. The present study used functional magnetic resonance imaging to investigate whether human V1 can support the learning of multiple, concurrent complex visual sequences involving discontinuous (second-order) associations. Two peripheral, goal-irrelevant but structured sequences of orientated gratings appeared simultaneously in fixed locations of the right and left visual fields alongside a central, goal-relevant sequence that was in the focus of spatial attention. Pseudorandom sequences were introduced at multiple intervals during the presentation of the three structured visual sequences to provide an online measure of sequence-specific knowledge at each retinotopic location. We found that a network involving the precuneus and V1 was involved in learning the structured sequence presented at central fixation, whereas right V1 was modulated by repeated exposure to the concurrent structured sequence presented in the left visual field. The same result was not found in left V1. These results indicate for the first time that human V1 can support the learning of multiple concurrent sequences involving complex discontinuous inter-item associations, even peripheral sequences that are goal-irrelevant. Copyright © 2018. Published by Elsevier Inc.

  9. Front-Presented Looming Sound Selectively Alters the Perceived Size of a Visual Looming Object.

    PubMed

    Yamasaki, Daiki; Miyoshi, Kiyofumi; Altmann, Christian F; Ashida, Hiroshi

    2018-07-01

    Despite accumulating evidence for the spatial rule, whereby cross-modal interaction depends on the spatial consistency of stimuli, it is still unclear whether 3D spatial consistency (i.e., front/rear of the body) of stimuli also regulates audiovisual interaction. We investigated how sounds with increasing/decreasing intensity (looming/receding sound) presented from the front and rear space of the body impact the size perception of a dynamic visual object. Participants performed a size-matching task (Experiments 1 and 2) and a size adjustment task (Experiment 3) of visual stimuli with increasing/decreasing diameter, while being exposed to a front- or rear-presented sound with increasing/decreasing intensity. Throughout these experiments, we demonstrated that only the front-presented looming sound caused overestimation of the size of the spatially consistent looming visual stimulus, but not of the spatially inconsistent or the receding visual stimulus. The receding sound had no significant effect on vision. Our results revealed that looming sound alters dynamic visual size perception depending on the consistency in the approaching quality and the front-rear spatial location of audiovisual stimuli, suggesting that the human brain processes audiovisual inputs differently based on their 3D spatial consistency. This selective interaction between looming signals should contribute to faster detection of approaching threats. Our findings extend the spatial rule governing audiovisual interaction into 3D space.

  10. Upright face-preferential high-gamma responses in lower-order visual areas: evidence from intracranial recordings in children

    PubMed Central

    Matsuzaki, Naoyuki; Schwarzlose, Rebecca F.; Nishida, Masaaki; Ofen, Noa; Asano, Eishi

    2015-01-01

    Behavioral studies demonstrate that a face presented in the upright orientation attracts attention more rapidly than an inverted face. Saccades toward an upright face take place in 100-140 ms following presentation. The present study using electrocorticography determined whether upright face-preferential neural activation, as reflected by augmentation of high-gamma activity at 80-150 Hz, involved the lower-order visual cortex within the first 100 ms post-stimulus presentation. Sampled lower-order visual areas were verified by the induction of phosphenes upon electrical stimulation. These areas resided in the lateral-occipital, lingual, and cuneus gyri along the calcarine sulcus, roughly corresponding to V1 and V2. Measurement of high-gamma augmentation during central (circular) and peripheral (annular) checkerboard reversal pattern stimulation indicated that central-field stimuli were processed by the more polar surface whereas peripheral-field stimuli by the more anterior medial surface. Upright face stimuli, compared to inverted ones, elicited up to 23% larger augmentation of high-gamma activity in the lower-order visual regions at 40-90 ms. Upright face-preferential high-gamma augmentation was more highly correlated with high-gamma augmentation for central than peripheral stimuli. Our observations are consistent with the hypothesis that lower-order visual regions, especially those for the central field, are involved in visual cues for rapid detection of upright face stimuli. PMID:25579446

  11. Visual Compositions and Language Development.

    ERIC Educational Resources Information Center

    Sinatra, Richard

    1981-01-01

    Presents an approach for improving verbal development by using organized slide shows to produce visual/verbal interaction in the classroom. Suggests that the strength of visual involvement is that it provides a procedure for language discovery while achieving cooperation between right and left brain processing. (Author/BK)

  12. Smelling directions: Olfaction modulates ambiguous visual motion perception

    PubMed Central

    Kuang, Shenbing; Zhang, Tao

    2014-01-01

    Smells are often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance with the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known whether olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perception. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162

  13. Visual acuity and refractive errors in a suburban Danish population: Inter99 Eye Study.

    PubMed

    Kessel, Line; Hougaard, Jesper Leth; Mortensen, Claus; Jørgensen, Torben; Lund-Andersen, Henrik; Larsen, Michael

    2004-02-01

    The present study was performed as part of an epidemiological study, the Inter99 Eye Study. The aim of the study was to describe refractive errors and visual acuity (VA) in a suburban Danish population. The Inter99 Eye Study comprised 970 subjects aged 30-60 years and included a random control group as well as groups at high risk for ischaemic heart disease and diabetes mellitus. The present study presents VAs and refractive data from the control group (n = 502). All subjects completed a detailed questionnaire and underwent a standardized general physical and ophthalmic examination, including determination of best corrected VA and subjective refraction. Visual acuity…

  14. Focused and divided attention abilities in the acute phase of recovery from moderate to severe traumatic brain injury.

    PubMed

    Robertson, Kayela; Schmitter-Edgecombe, Maureen

    2017-01-01

    Impairments in attention following traumatic brain injury (TBI) can significantly impact recovery and rehabilitation effectiveness. This study investigated the multi-faceted construct of selective attention following TBI, highlighting the differences between visual nonsearch (focused attention) and search (divided attention) tasks. Participants were 30 individuals with moderate to severe TBI who were tested acutely (i.e. following emergence from post-traumatic amnesia) and 30 age- and education-matched controls. Participants were presented with visual displays that contained either two or eight items. In the focused attention, nonsearch condition, the location of the target (if present) was cued with a peripheral arrow prior to presentation of the visual displays. In the divided attention, search condition, no spatial cue was provided prior to presentation of the visual displays. The results revealed intact focused, nonsearch, attention abilities in the acute phase of TBI recovery. In contrast, when no spatial cue was provided (divided attention condition), participants with TBI demonstrated slower visual search compared to the control group. The results of this study suggest that capitalizing on intact focused attention abilities by allocating attention during cognitively demanding tasks may help to reduce mental workload and improve rehabilitation effectiveness.

  15. The contents of visual working memory reduce uncertainty during visual search.

    PubMed

    Cosman, Joshua D; Vecera, Shaun P

    2011-05-01

    Information held in visual working memory (VWM) influences the allocation of attention during visual search, with targets matching the contents of VWM receiving processing benefits over those that do not. Such an effect could arise from multiple mechanisms: First, it is possible that the contents of working memory enhance the perceptual representation of the target. Alternatively, it is possible that when a target is presented among distractor items, the contents of working memory operate postperceptually to reduce uncertainty about the location of the target. In both cases, a match between the contents of VWM and the target should lead to facilitated processing. However, each effect makes distinct predictions regarding set-size manipulations; whereas perceptual enhancement accounts predict processing benefits regardless of set size, uncertainty reduction accounts predict benefits only with set sizes larger than 1, when there is uncertainty regarding the target location. In the present study, in which briefly presented, masked targets were presented in isolation, there was a negligible effect of the information held in VWM on target discrimination. However, in displays containing multiple masked items, information held in VWM strongly affected target discrimination. These results argue that working memory representations act at a postperceptual level to reduce uncertainty during visual search.

  16. Branch retinal arterial occlusion.

    PubMed

    Subedi, S; Shrestha, C

    2010-01-01

    Retinal arterial occlusion is an ocular emergency in which visual prognosis is poor, mostly due to late presentation of the patient and macular involvement. The case described in this report is an incidence of Branch Retinal Arterial Occlusion in a 22-year-old female with grade II Mitral Regurgitation. The patient presented with a complaint of painless diminution of vision in the right eye. She also reported perception of a black shadow in the superior visual field of the same eye for five days. There was no significant systemic or personal history. Her visual acuity at presentation was 6/60 and 6/6 in the right and left eyes, respectively, which did not improve with glasses or pin-hole. The anterior segment, including pupillary reaction, was normal in both eyes, while fundus examination of the right eye revealed retinal whitening inside the inferotemporal vascular arcade that was encroaching on the foveolar avascular zone. A visual field defect was detected superonasally inside the arcade, but Fundus Fluorescence Angiography was normal. An echocardiograph revealed grade II Mitral Regurgitation. The patient was kept on observation, and after two days of follow-up, vision in the right eye had improved to 6/6 unaided but the visual field defect remained the same.

  17. A review of uncertainty visualization within the IPCC reports

    NASA Astrophysics Data System (ADS)

    Nocke, Thomas; Reusser, Dominik; Wrobel, Markus

    2015-04-01

    Results derived from climate model simulations confront non-expert users with a variety of uncertainties. This gives rise to the challenge that the scientific information must be communicated such that it can be easily understood while the complexity of the underlying science is still conveyed. With respect to the assessment reports of the IPCC, the situation is even more complicated, because heterogeneous sources and multiple types of uncertainties need to be compiled together. Within this work, we systematically (1) analyzed the visual representation of uncertainties in the IPCC AR4 and AR5 reports, and (2) administered a questionnaire to evaluate how different user groups such as decision-makers and teachers understand these uncertainty visualizations. Within the first step, we classified visual uncertainty metaphors for spatial, temporal and abstract representations. As a result, we clearly identified a high complexity of the IPCC visualizations compared to standard presentation graphics, sometimes even integrating two or more uncertainty classes/measures together with the "certain" (mean) information. Further, we identified complex written uncertainty explanations within image captions, even within the "summary reports for policy makers". In the second step, based on these observations, we designed a questionnaire to investigate how non-climate experts understand these visual representations of uncertainties, how visual uncertainty coding might hinder the perception of the "non-uncertain" data, and whether alternatives for certain IPCC visualizations exist. Within the talk/poster, we will present first results from this questionnaire. Summarizing, we identified a clear trend towards complex images within the latest IPCC reports, with a tendency to incorporate as much information as possible into the visual representations, resulting in proprietary, non-standard graphic representations that are not necessarily easy to comprehend at a glance.
We conclude that further translation is required to (visually) present the IPCC results to non-experts, providing tailored static and interactive visualization solutions for different user groups.
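The pattern this record describes, a central ("certain") estimate presented together with one or more uncertainty measures derived from an ensemble of model runs, can be sketched numerically. This is a minimal illustration only: the ensemble values, the function name, and the choice of a 5-95% interval are assumptions for the example, not figures or conventions taken from the IPCC reports.

```python
import numpy as np

def summarize_ensemble(runs, low=5, high=95):
    """Summarize an ensemble of model projections as a central estimate
    (mean) plus a percentile-based uncertainty interval."""
    runs = np.asarray(runs, dtype=float)
    return runs.mean(), np.percentile(runs, low), np.percentile(runs, high)

# Hypothetical projections from seven model runs (e.g., warming in deg C).
runs = [2.1, 2.6, 3.0, 3.4, 2.8, 2.2, 3.1]
mean, lo, hi = summarize_ensemble(runs)
print(f"central estimate {mean:.2f}, 5-95% range [{lo:.2f}, {hi:.2f}]")
```

A visualization would then draw the mean as a line and the interval as a shaded band; the design question raised by the review is how many such measures can be layered before the "certain" signal becomes hard to read.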

  18. Distinct regions of the hippocampus are associated with memory for different spatial locations.

    PubMed

    Jeye, Brittany M; MacEvoy, Sean P; Karanian, Jessica M; Slotnick, Scott D

    2018-05-15

    In the present functional magnetic resonance imaging (fMRI) study, we aimed to evaluate whether distinct regions of the hippocampus were associated with spatial memory for items presented in different locations of the visual field. In Experiment 1, during the study phase, participants viewed abstract shapes in the left or right visual field while maintaining central fixation. At test, old shapes were presented at fixation and participants classified each shape as previously in the "left" or "right" visual field followed by an "unsure"-"sure"-"very sure" confidence rating. Accurate spatial memory for shapes in the left visual field was isolated by contrasting accurate versus inaccurate spatial location responses. This contrast produced one hippocampal activation in which the interaction between item type and accuracy was significant. The analogous contrast for right visual field shapes did not produce activity in the hippocampus; however, the contrast of high confidence versus low confidence right-hits produced one hippocampal activation in which the interaction between item type and confidence was significant. In Experiment 2, the same paradigm was used but shapes were presented in each quadrant of the visual field during the study phase. Accurate memory for shapes in each quadrant, exclusively masked by accurate memory for shapes in the other quadrants, produced a distinct activation in the hippocampus. A multi-voxel pattern analysis (MVPA) of hippocampal activity revealed a significant correlation between behavioral spatial location accuracy and hippocampal MVPA accuracy across participants. The findings of both experiments indicate that distinct hippocampal regions are associated with memory for different visual field locations. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Audio-Visual Communications, A Tool for the Professional

    ERIC Educational Resources Information Center

    Journal of Environmental Health, 1976

    1976-01-01

    The manner in which the Cuyahoga County, Ohio Department of Environmental Health utilizes audio-visual presentations for communication with business and industry, professional public health agencies and the general public is presented. Subjects including food sanitation, radiation protection and safety are described. (BT)

  20. Reading Time Allocation Strategies and Working Memory Using Rapid Serial Visual Presentation

    ERIC Educational Resources Information Center

    Busler, Jessica N.; Lazarte, Alejandro A.

    2017-01-01

    Rapid serial visual presentation (RSVP) is a useful method for controlling the timing of text presentations and studying how readers' characteristics, such as working memory (WM) and reading strategies for time allocation, influence text recall. In the current study, a modified version of RSVP (Moving Window RSVP [MW-RSVP]) was used to induce…
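The core mechanics of standard RSVP (each word shown alone at a fixed rate, so presentation timing is fully controlled) can be sketched as a timing schedule. The durations and the function name below are illustrative assumptions; they are not the MW-RSVP parameters used in the study:

```python
def rsvp_schedule(words, word_ms=250, isi_ms=50):
    """Return (onset_ms, word) pairs for a rapid serial visual presentation:
    each word is displayed for word_ms, followed by a blank inter-stimulus
    interval of isi_ms before the next word's onset."""
    schedule = []
    t = 0
    for w in words:
        schedule.append((t, w))
        t += word_ms + isi_ms
    return schedule

for onset, word in rsvp_schedule("reading one word at a time".split()):
    print(f"{onset:5d} ms  {word}")
```

Because onsets are computed rather than reader-paced, an experimenter can vary `word_ms` per word to study how readers allocate time, which is the kind of manipulation the moving-window variant enables.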

  1. Communicating forest management science and practices through visualized and animated media approaches to community presentations: An exploration and assessment

    Treesearch

    Donald E. Zimmerman; Carol Akerelrea; Jane Kapler Smith; Garrett J. O' Keefe

    2006-01-01

    Natural-resource managers have used a variety of computer-mediated presentation methods to communicate management practices to diverse publics. We explored the effects of visualizing and animating predictions from mathematical models in computerized presentations explaining forest succession (forest growth and change through time), fire behavior, and management options...

  2. Manipulations of attention dissociate fragile visual short-term memory from visual working memory.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Lamme, Victor A F

    2011-05-01

    People often rely on information that is no longer in view, but maintained in visual short-term memory (VSTM). Traditionally, VSTM is thought to operate on either a short time-scale with high capacity - iconic memory - or a long time scale with small capacity - visual working memory. Recent research suggests that in addition, an intermediate stage of memory in between iconic memory and visual working memory exists. This intermediate stage has a large capacity and a lifetime of several seconds, but is easily overwritten by new stimulation. We therefore termed it fragile VSTM. In previous studies, fragile VSTM has been dissociated from iconic memory by the characteristics of the memory trace. In the present study, we dissociated fragile VSTM from visual working memory by showing a differentiation in their dependency on attention. A decrease in attention during presentation of the stimulus array greatly reduced the capacity of visual working memory, while this had only a small effect on the capacity of fragile VSTM. We conclude that fragile VSTM is a separate memory store from visual working memory. Thus, a tripartite division of VSTM appears to be in place, comprising iconic memory, fragile VSTM and visual working memory. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. The effects of lesions of the superior colliculus on locomotor orientation and the orienting reflex in the rat.

    PubMed

    Goodale, M A; Murison, R C

    1975-05-02

    The effects of bilateral removal of the superior colliculus or visual cortex on visually guided locomotor movements in rats performing a brightness discrimination task were investigated directly with the use of cine film. Rats with collicular lesions showed patterns of locomotion comparable to or more efficient than those of normal animals when approaching one of 5 small doors located at one end of a large open area. In contrast, animals with large but incomplete lesions of visual cortex were distinctly impaired in their visual control of approach responses to the same stimuli. On the other hand, rats with collicular damage showed no orienting reflex or evidence of distraction in the same task when novel visual or auditory stimuli were presented. However, both normal and visual-decorticate rats showed various components of the orienting reflex and disturbance in task performance when the same novel stimuli were presented. These results suggest that although the superior colliculus does not appear to be essential to the visual control of locomotor orientation, this midbrain structure might participate in the mediation of shifts in visual fixation and attention. Visual cortex, while contributing to visuospatial guidance of locomotor movements, might not play a significant role in the control and integration of the orienting reflex.

  4. Does bimodal stimulus presentation increase ERP components usable in BCIs?

    NASA Astrophysics Data System (ADS)

    Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Blankertz, Benjamin; Werkhoven, Peter J.

    2012-08-01

    Event-related potential (ERP)-based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. Typically, visual stimuli are used. Tactile stimuli have recently been suggested as a gaze-independent alternative. Bimodal stimuli could evoke additional brain activity due to multisensory integration which may be of use in BCIs. We investigated the effect of visual-tactile stimulus presentation on the chain of ERP components, BCI performance (classification accuracies and bitrates) and participants’ task performance (counting of targets). Ten participants were instructed to navigate a visual display by attending (spatially) to targets in sequences of either visual, tactile or visual-tactile stimuli. We observe that attending to visual-tactile (compared to either visual or tactile) stimuli results in an enhanced early ERP component (N1). This bimodal N1 may enhance BCI performance, as suggested by a nonsignificant positive trend in offline classification accuracies. A late ERP component (P300) is reduced when attending to visual-tactile compared to visual stimuli, which is consistent with the nonsignificant negative trend of participants’ task performance. We discuss these findings in the light of affected spatial attention at high-level compared to low-level stimulus processing. Furthermore, we evaluate bimodal BCIs from a practical perspective and for future applications.

  5. Dyslexia and reasoning: the importance of visual processes.

    PubMed

    Bacon, Alison M; Handley, Simon J

    2010-08-01

    Recent research has suggested that individuals with dyslexia rely on explicit visuospatial representations for syllogistic reasoning while most non-dyslexics opt for an abstract verbal strategy. This paper investigates the role of visual processes in relational reasoning amongst dyslexic reasoners. Expt 1 presents written and verbal protocol evidence to suggest that reasoners with dyslexia generate detailed representations of relational properties and use these to make a visual comparison of objects. Non-dyslexics use a linear array of objects to make a simple transitive inference. Expt 2 examined evidence for the visual-impedance effect which suggests that visual information detracts from reasoning leading to longer latencies and reduced accuracy. While non-dyslexics showed the impedance effects predicted, dyslexics showed only reduced accuracy on problems designed specifically to elicit imagery. Expt 3 presented problems with less semantically and visually rich content. The non-dyslexic group again showed impedance effects, but dyslexics did not. Furthermore, in both studies, visual memory predicted reasoning accuracy for dyslexic participants, but not for non-dyslexics, particularly on problems with highly visual content. The findings are discussed in terms of the importance of visual and semantic processes in reasoning for individuals with dyslexia, and we argue that these processes play a compensatory role, offsetting phonological and verbal memory deficits.

  6. Integrating visualization and interaction research to improve scientific workflows.

    PubMed

    Keefe, Daniel F

    2010-01-01

    Scientific-visualization research is, nearly by necessity, interdisciplinary. In addition to their collaborators in application domains (for example, cell biology), researchers regularly build on close ties with disciplines related to visualization, such as graphics, human-computer interaction, and cognitive science. One of these ties is the connection between visualization and interaction research. This isn't a new direction for scientific visualization (see the "Early Connections" sidebar). However, momentum recently seems to be increasing toward integrating visualization research (for example, effective visual presentation of data) with interaction research (for example, innovative interactive techniques that facilitate manipulating and exploring data). We see evidence of this trend in several places, including the visualization literature and conferences.

  7. The Effect of the Visual Awareness Education Programme on the Visual Literacy of Children Aged 5-6

    ERIC Educational Resources Information Center

    Özkubat, S.; Ulutas, I.

    2018-01-01

    The aim of the present study was to investigate the effect of the "Visual Awareness Education Programme" developed to support the visual literacy skills of preschool children. The study group comprised 40 children (20 children in the experimental group and 20 children in the control group) attending preschool in the 2014-2015 school…

  8. Attentional Capture of Objects Referred to by Spoken Language

    ERIC Educational Resources Information Center

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  9. Design and Implementation of Cancellation Tasks for Visual Search Strategies and Visual Attention in School Children

    ERIC Educational Resources Information Center

    Wang, Tsui-Ying; Huang, Ho-Chuan; Huang, Hsiu-Shuang

    2006-01-01

    We propose a computer-assisted cancellation test system (CACTS) to understand the visual attention performance and visual search strategies in school children. The main aim of this paper is to present our design and development of the CACTS and demonstrate some ways in which computer techniques can allow the educator not only to obtain more…

  10. Visual Thinking Styles and Idea Generation Strategies Employed in Visual Brainstorming Sessions

    ERIC Educational Resources Information Center

    Börekçi, Naz A. G. Z.

    2017-01-01

    This paper presents the findings of visual analyses conducted on 369 sketch ideas generated in three 6-3-5 visual brainstorming sessions by a total of 25 participants, following the same design brief. The motivation for the study was an interest in the thematic content of the ideas generated as groups, and the individual representation styles used…

  11. Visual Imagery and False Memory for Pictures: A Functional Magnetic Resonance Imaging Study in Healthy Participants.

    PubMed

    Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas

    2017-01-01

    Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures.

  12. Visual cognition

    PubMed Central

    Cavanagh, Patrick

    2011-01-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated, part of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719

  13. Maintaining perceptual constancy while remaining vigilant: left hemisphere change blindness and right hemisphere vigilance.

    PubMed

    Vos, Leia; Whitman, Douglas

    2014-01-01

    A considerable literature suggests that the right hemisphere is dominant in vigilance for novel and survival-related stimuli, such as predators, across a wide range of species. In contrast to vigilance for change, change blindness is a failure to detect obvious changes in a visual scene when they are obscured by a disruption in scene presentation. We studied lateralised change detection using a series of scenes with salient changes in either the left or right visual fields. In Study 1 left visual field changes were detected more rapidly than right visual field changes, confirming a right hemisphere advantage for change detection. Increasing stimulus difficulty resulted in greater right visual field detections and left hemisphere detection was more likely when change occurred in the right visual field on a prior trial. In Study 2 an intervening distractor task disrupted the influence of prior trials. Again, faster detection speeds were observed for the left visual field changes with a shift to a right visual field advantage with increasing time-to-detection. This suggests that a right hemisphere role for vigilance, or catching attention, and a left hemisphere role for target evaluation, or maintaining attention, is present at the earliest stage of change detection.

  14. Why Color Matters: The Effect of Visual Cues on Learner's Interpretation of Dark Matter in a Cosmology Visualization

    NASA Astrophysics Data System (ADS)

    Buck, Z.

    2013-04-01

    As we turn more and more to high-end computing to understand the Universe at cosmological scales, visualizations of simulations will take on a vital role as perceptual and cognitive tools. In collaboration with the Adler Planetarium and University of California High-Performance AstroComputing Center (UC-HiPACC), I am interested in better understanding the use of visualizations to mediate astronomy learning across formal and informal settings. The aspect of my research that I present here uses quantitative methods to investigate how learners rely on color to interpret dark matter in a cosmology visualization. The concept of dark matter is vital to our current understanding of the Universe, and yet we do not know how to effectively present dark matter visually to support learning. I employ an alternative-treatment, post-test-only experimental design, in which members of an equivalent sample are randomly assigned to one of three treatment groups, followed by treatment and a post-test. Results indicate a significant correlation (p < .05) between the color of dark matter in the visualization and survey responses, implying that aesthetic variations like color can have a profound effect on audience interpretation of a cosmology visualization.

  15. Ageing vision and falls: a review.

    PubMed

    Saftari, Liana Nafisa; Kwon, Oh-Sang

    2018-04-23

    Falls are the leading cause of accidental injury and death among older adults. One in three adults over the age of 65 falls annually. As the size of the elderly population increases, falls become a major concern for public health, and there is a pressing need to understand the causes of falls thoroughly. While it is well documented that visual functions such as visual acuity, contrast sensitivity, and stereo acuity are correlated with fall risks, little attention has been paid to the relationship between falls and the ability of the visual system to perceive motion in the environment. The omission of visual motion perception in the literature is a critical gap because it is an essential function in maintaining balance. In the present article, we first review existing studies regarding visual risk factors for falls and the effect of ageing vision on falls. We then present a group of phenomena, such as vection and sensory reweighting, that provide information on how visual motion signals are used to maintain balance. We suggest that the current list of visual risk factors for falls should be elaborated by taking into account the relationship between visual motion perception and balance control.

  16. 7 Key Challenges for Visualization in Cyber Network Defense

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Best, Daniel M.; Endert, Alexander; Kidwell, Dan

    In this paper we present seven challenges, informed by two user studies, to be considered when developing a visualization for cyber security purposes. Cyber security visualizations must go beyond isolated solutions and “pretty picture” visualizations in order to have an impact on users. We provide an example prototype that addresses these challenges, with a description of how it meets them. Our aim is to assist in increasing utility and adoption rates for visualization capabilities in cyber security.

  17. Emotion recognition abilities across stimulus modalities in schizophrenia and the role of visual attention.

    PubMed

    Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J

    2013-12-01

    Emotion can be expressed by both the voice and face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, and combined audiovisual. In the visual only and combined conditions, time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio only and visual only conditions but did not differ from controls in the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less to the mouth than healthy individuals did in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification. © 2013.

  18. Crossmodal Statistical Binding of Temporal Information and Stimuli Properties Recalibrates Perception of Visual Apparent Motion

    PubMed Central

    Zhang, Yi; Chen, Lihan

    2016-01-01

    Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. We here examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, composed of three sessions each: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) flanked by one leading and one lagging visual Ternus frame (VAAV) or dominantly inserted between two Ternus visual frames (AVVA). Participants were required to respond which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair but with similar temporal configurations as in Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support the conclusion that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910

  19. The effects of bilateral presentations on lateralized lexical decision.

    PubMed

    Fernandino, Leonardo; Iacoboni, Marco; Zaidel, Eran

    2007-06-01

    We investigated how lateralized lexical decision is affected by the presence of distractors in the visual hemifield contralateral to the target. The study had three goals: first, to determine how the presence of a distractor (either a word or a pseudoword) affects visual field differences in the processing of the target; second, to identify the stage of the process in which the distractor is affecting the decision about the target; and third, to determine whether the interaction between the lexicality of the target and the lexicality of the distractor ("lexical redundancy effect") is due to facilitation or inhibition of lexical processing. Unilateral and bilateral trials were presented in separate blocks. Target stimuli were always underlined. Regarding our first goal, we found that bilateral presentations (a) increased the effect of visual hemifield of presentation (right visual field advantage) for words by slowing down the processing of word targets presented to the left visual field, and (b) produced an interaction between visual hemifield of presentation (VF) and target lexicality (TLex), which implies the use of different strategies by the two hemispheres in lexical processing. For our second goal of determining the processing stage that is affected by the distractor, we introduced a third condition in which targets were always accompanied by "perceptual" distractors consisting of sequences of the letter "x" (e.g., xxxx). Performance on these trials indicated that most of the interaction occurs during lexical access (after basic perceptual analysis but before response programming). Finally, a comparison between performance patterns on the trials containing perceptual and lexical distractors indicated that the lexical redundancy effect is mainly due to inhibition of word processing by pseudoword distractors.

  20. Disturbance of visual search by stimulating to posterior parietal cortex in the brain using transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

    In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspect of the functional processing of visual attention. Although it is known that the right posterior parietal cortex (PPC) plays a role in certain visual search tasks, little is known about the temporal aspect of this area. Three visual search tasks of differing difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal aspect of the PPC's involvement in visual search, we applied various stimulus onset asynchronies (SOAs) and measured the reaction time of the visual search. The magnetic stimulation was applied over the right or left PPC with a figure-eight coil. The results show that the reaction times of the hard feature task are longer than those of the easy feature task. At SOA=150 ms, there was a significant increase in target-present reaction time when TMS pulses were applied, compared with the no-TMS condition. We conclude that the right PPC is involved in the visual search at about 150 ms after visual stimulus presentation: magnetic stimulation of the right PPC disturbed the processing of the visual search, whereas stimulation of the left PPC had no effect.

  1. Explanatory and illustrative visualization of special and general relativity.

    PubMed

    Weiskopf, Daniel; Borchers, Marc; Ertl, Thomas; Falk, Martin; Fechtig, Oliver; Frank, Regine; Grave, Frank; King, Andreas; Kraus, Ute; Müller, Thomas; Nollert, Hans-Peter; Rica Mendez, Isabel; Ruder, Hanns; Schafhitzel, Tobias; Schär, Sonja; Zahn, Corvin; Zatloukal, Michael

    2006-01-01

    This paper describes methods for explanatory and illustrative visualizations used to communicate aspects of Einstein's theories of special and general relativity, their geometric structure, and of the related fields of cosmology and astrophysics. Our illustrations target a general audience of laypersons interested in relativity. We discuss visualization strategies, motivated by physics education and the didactics of mathematics, and describe what kind of visualization methods have proven to be useful for different types of media, such as still images in popular science magazines, film contributions to TV shows, oral presentations, or interactive museum installations. Our primary approach is to adopt an egocentric point of view: The recipients of a visualization participate in a visually enriched thought experiment that allows them to experience or explore a relativistic scenario. In addition, we often combine egocentric visualizations with more abstract illustrations based on an outside view in order to provide several presentations of the same phenomenon. Although our visualization tools often build upon existing methods and implementations, the underlying techniques have been improved by several novel technical contributions like image-based special relativistic rendering on GPUs, special relativistic 4D ray tracing for accelerating scene objects, an extension of general relativistic ray tracing to manifolds described by multiple charts, GPU-based interactive visualization of gravitational light deflection, as well as planetary terrain rendering. The usefulness and effectiveness of our visualizations are demonstrated by reporting on experiences with, and feedback from, recipients of visualizations and collaborators.

  2. Visual search among items of different salience: removal of visual attention mimics a lesion in extrastriate area V4.

    PubMed

    Braun, J

    1994-02-01

    In more than one respect, visual search for the most salient or the least salient item in a display are different kinds of visual tasks. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in search for the least salient item than in search for the most salient item. As a result, the two types of visual search presented comparable perceptual difficulty, as judged by psychophysical measures of performance, effective stimulus contrast, and stability of decision criterion. To investigate the role of attention in the two types of search, observers attempted to carry out a letter discrimination and a search task concurrently. To discriminate the letters, observers had to direct visual attention at the center of the display and, thus, leave unattended the periphery, which contained target and distractors of the search task. In this situation, visual search for the least salient item was severely impaired while visual search for the most salient item was only moderately affected, demonstrating a fundamental difference with respect to visual attention. A qualitatively identical pattern of results was encountered by Schiller and Lee (1991), who used similar visual search tasks to assess the effect of a lesion in extrastriate area V4 of the macaque.

  3. The impact of interference on short-term memory for visual orientation.

    PubMed

    Rademaker, Rosanne L; Bloem, Ilona M; De Weerd, Peter; Sack, Alexander T

    2015-12-01

    Visual short-term memory serves as an efficient buffer for maintaining no longer directly accessible information. How robust are visual memories against interference? Memory for simple visual features has proven vulnerable to distractors containing conflicting information along the relevant stimulus dimension, leading to the idea that interacting feature-specific channels at an early stage of visual processing support memory for simple visual features. Here we showed that memory for a single randomly orientated grating was susceptible to interference from a to-be-ignored distractor grating presented midway through a 3-s delay period. Memory for the initially presented orientation became noisier when it differed from the distractor orientation, and response distributions were shifted toward the distractor orientation (by ∼3°). Interestingly, when the distractor was rendered task-relevant by making it a second memory target, memory for both retained orientations showed reduced reliability as a function of increased orientation differences between them. However, the degree to which responses to the first grating shifted toward the orientation of the task-relevant second grating was much reduced. Finally, using a dichoptic display, we demonstrated that these systematic biases caused by a consciously perceived distractor disappeared once the distractor was presented outside of participants' awareness. Together, our results show that visual short-term memory for orientation can be systematically biased by interfering information that is consciously perceived.

  4. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    PubMed

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both, attending to the colour of a stimulus and its synchrony with the tone, enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
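
    The frequency-tagging analysis described above can be sketched numerically: stimuli pulsing at distinct rates (3.14 and 3.63 Hz) drive steady-state brain responses that appear as narrow peaks at those rates in the amplitude spectrum. The following Python sketch uses a synthetic signal, not the authors' EEG data or analysis pipeline; the sampling rate, amplitudes, and noise level are made-up illustration values.

```python
import numpy as np

fs = 500.0                      # sampling rate in Hz (assumed for illustration)
t = np.arange(0, 100, 1 / fs)   # 100 s of data gives 0.01 Hz frequency resolution

# Synthetic "EEG": two frequency-tagged responses buried in noise
rng = np.random.default_rng(0)
signal = (1.0 * np.sin(2 * np.pi * 3.14 * t)    # response tagged at 3.14 Hz
          + 0.5 * np.sin(2 * np.pi * 3.63 * t)  # response tagged at 3.63 Hz
          + rng.normal(0, 1, t.size))           # broadband noise

# Amplitude spectrum: a pure sinusoid of amplitude A at an exact FFT bin
# yields a peak of height A after the 2/N scaling
spectrum = np.abs(np.fft.rfft(signal)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f in (3.14, 3.63):
    idx = np.argmin(np.abs(freqs - f))
    print(f"amplitude at {f} Hz: {spectrum[idx]:.2f}")  # ≈ the tagged amplitudes
```

    Attention and audio-visual synchrony effects would then be read off as amplitude differences at each stimulus's tagged frequency across conditions.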

  5. Prevalence and Causes of Visual Impairment and Blindness among Cocoa Farmers in Ghana.

    PubMed

    Boadi-Kusi, Samuel Bert; Hansraj, Rekha; Mashige, Khathutshelo Percy; Osafo-Kwaako, Alfred; Ilechie, Alex Azuka; Abokyi, Samuel

    2017-02-01

    To determine the prevalence and causes of visual impairment and blindness among cocoa farmers in Ghana in order to formulate early intervention strategies. A cross-sectional study using multistage random sampling from four cocoa growing districts in Ghana was conducted from November 2013 to April 2014. A total of 512 cocoa farmers aged 40 years and older were interviewed and examined. The brief interview questionnaire was administered to elicit information on the demographics and socioeconomic details of participants. The examination included assessment of visual acuity (VA), retinoscopy, subjective refraction, direct ophthalmoscopy, slit-lamp biomicroscopy and intraocular pressure (IOP). For quality assurance, a random sample of cocoa farmers were selected and re-examined independently. Moderate to severe visual impairment (VA <6/18 to 3/60 in the better-seeing eye) was present in 89 participants (17.4%) and 27 (5.3%) were blind (presenting VA <3/60 in the better eye) defined using presenting VA. The main causes of visual impairment were cataract (45, 38.8%), uncorrected refractive error (42, 36.2%), posterior segment disorders (15, 12.9%), and corneal opacity (11, 9.5%). The prevalence of visual impairment and blindness among cocoa farmers in Ghana is relatively high. The major causes of visual impairment and blindness are largely preventable or treatable, indicating the need for early eye care service interventions.
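
    The impairment categories above follow explicit presenting-VA cutoffs (<6/18 to 3/60 for moderate-to-severe impairment, <3/60 in the better eye for blindness), which amount to a simple decision rule. This Python sketch is illustrative only: the function name is hypothetical, and acuities are represented as decimal Snellen fractions.

```python
def classify_presenting_va(va_decimal):
    """Classify presenting visual acuity in the better-seeing eye using the
    cutoffs described in the study. The name and the decimal-Snellen
    representation are illustrative choices, not the authors' code."""
    if va_decimal < 3 / 60:       # VA < 3/60: blindness
        return "blind"
    if va_decimal < 6 / 18:       # 3/60 <= VA < 6/18: moderate/severe impairment
        return "moderate/severe visual impairment"
    return "not visually impaired"

# Better-eye acuities for three hypothetical participants
for va in (6 / 6, 6 / 24, 1 / 60):
    print(round(va, 3), classify_presenting_va(va))
```

    Prevalence figures like the 17.4% and 5.3% reported above are then the fraction of participants falling into each category.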

  6. Fixating at far distance shortens reaction time to peripheral visual stimuli at specific locations.

    PubMed

    Kokubu, Masahiro; Ando, Soichi; Oda, Shingo

    2018-01-18

    The purpose of the present study was to examine whether the fixation distance in real three-dimensional space affects manual reaction time to peripheral visual stimuli. Light-emitting diodes were used for presenting a fixation point and four peripheral visual stimuli. The visual stimuli were located at a distance of 45 cm and at 25° in the left, right, upper, and lower directions from the sagittal axis including the fixation point. Near (30 cm), Middle (45 cm), Far (90 cm), and Very Far (300 cm) fixation distance conditions were used. When one of the four visual stimuli was randomly illuminated, the participants released a button as quickly as possible. Results showed that overall peripheral reaction time decreased as the fixation distance increased. The significant interaction between fixation distance and stimulus location indicated that the effect of fixation distance on reaction time was observed at the left, right, and upper locations but not at the lower location. These results suggest that fixating at far distance would contribute to faster reaction and that the effect is specific to locations in the peripheral visual field. The present findings are discussed in terms of viewer-centered representation, the focus of attention in depth, and visual field asymmetry related to neurological and psychological aspects. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. The relationship between level of autistic traits and local bias in the context of the McGurk effect

    PubMed Central

    Ujiie, Yuta; Asai, Tomohisa; Wakabayashi, Akio

    2015-01-01

    The McGurk effect is a well-known illustration that demonstrates the influence of visual information on hearing in the context of speech perception. Some studies have reported that individuals with autism spectrum disorder (ASD) display abnormal processing of audio-visual speech integration, while other studies showed contradictory results. Based on the dimensional model of ASD, we administered two analog studies to examine the link between level of autistic traits, as assessed by the Autism Spectrum Quotient (AQ), and the McGurk effect among a sample of university students. In the first experiment, we found that autistic traits correlated negatively with fused (McGurk) responses. Then, we manipulated presentation types of visual stimuli to examine whether the local bias toward visual speech cues modulated individual differences in the McGurk effect. The presentation included four types of visual images, comprising no image, mouth only, mouth and eyes, and full face. The results revealed that global facial information facilitates the influence of visual speech cues on McGurk stimuli. Moreover, individual differences between groups with low and high levels of autistic traits appeared when the full-face visual speech cue with an incongruent voice condition was presented. These results suggest that individual differences in the McGurk effect might be due to a weak ability to process global facial information in individuals with high levels of autistic traits. PMID:26175705

  8. Mainstreaming the Visually Impaired Child.

    ERIC Educational Resources Information Center

    Calovini, Gloria, Ed.

    Intended for school administrators and regular classroom teachers, the document presents guidelines for working with visually impaired students being integrated into regular classes. Included is a description of the special education program in Illinois. Sections cover the following topics: identification and referral of visually impaired…

  9. Visual Literacy and Message Design

    ERIC Educational Resources Information Center

    Pettersson, Rune

    2009-01-01

    Many researchers from different disciplines have explained their views and interpretations and written about visual literacy from their various perspectives. Visual literacy may be applied in almost all areas such as advertising, anatomy, art, biology, business presentations, communication, education, engineering, etc. (Pettersson, 2002a). Despite…

  10. Characterizing Interaction with Visual Mathematical Representations

    ERIC Educational Resources Information Center

    Sedig, Kamran; Sumner, Mark

    2006-01-01

    This paper presents a characterization of computer-based interactions by which learners can explore and investigate visual mathematical representations (VMRs). VMRs (e.g., geometric structures, graphs, and diagrams) refer to graphical representations that visually encode properties and relationships of mathematical structures and concepts.…

  11. CLFs-based optimization control for a class of constrained visual servoing systems.

    PubMed

    Song, Xiulan; Miaomiao, Fu

    2017-03-01

    In this paper, we use the control Lyapunov function (CLF) technique to present an optimized visual servo control method for constrained eye-in-hand robot visual servoing systems. With knowledge of the camera intrinsic parameters and of changes in target depth, visual servo control laws (i.e. translation speed) with adjustable parameters are derived from image point features and a known CLF of the visual servoing system. The Fibonacci method is employed to compute online the optimal value of those adjustable parameters, which yields an optimized control law satisfying the constraints of the visual servoing system. Lyapunov's theorem and the properties of the CLF are used to establish stability of the constrained visual servoing system in closed loop with the optimized control law. One merit of the presented method is that it does not require online calculation of the pseudo-inverse of the image Jacobian matrix or of the homography matrix. Simulation and experimental results illustrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
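
    The Fibonacci method invoked above is a classic derivative-free search that locates the minimum of a unimodal function on an interval using a fixed number of evaluations. The sketch below shows the generic method applied to a toy quadratic cost; the cost function, interval, and the "gain" interpretation are illustrative assumptions, not the paper's actual constrained servoing objective.

```python
def fibonacci_search(f, a, b, n=20):
    """Minimise a unimodal function f on [a, b] using n Fibonacci steps.
    Generic textbook sketch; final interval width is (b - a) * F(3)/F(n+2)."""
    fib = [1, 1]
    while len(fib) < n + 2:
        fib.append(fib[-1] + fib[-2])
    # Two interior probe points, symmetric in [a, b]
    x1 = a + fib[n - 1] / fib[n + 1] * (b - a)
    x2 = a + fib[n] / fib[n + 1] * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(n - 1, 0, -1):
        if f1 > f2:               # minimum lies in [x1, b]; reuse x2 as new x1
            a, x1, f1 = x1, x2, f2
            x2 = a + fib[k] / fib[k + 1] * (b - a)
            f2 = f(x2)
        else:                     # minimum lies in [a, x2]; reuse x1 as new x2
            b, x2, f2 = x2, x1, f1
            x1 = a + fib[k - 1] / fib[k + 1] * (b - a)
            f1 = f(x1)
    return (a + b) / 2

# Example: minimise a hypothetical quadratic cost over a parameter interval
gain = fibonacci_search(lambda x: (x - 0.7) ** 2, 0.0, 2.0)
print(round(gain, 2))  # ≈ 0.7
```

    In the servoing setting, f would be the constrained cost evaluated at a candidate value of the adjustable control parameter at each sampling instant.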

  12. BactoGeNIE: A large-scale comparative genome visualization for big displays

    DOE PAGES

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; ...

    2015-08-13

    The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  13. BactoGeNIE: a large-scale comparative genome visualization for big displays

    PubMed Central

    2015-01-01

    Background: The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. Results: In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. Conclusions: BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics. PMID:26329021

  15. Developmental dyslexia: exploring how much phonological and visual attention span disorders are linked to simultaneous auditory processing deficits.

    PubMed

    Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane

    2013-07-01

    The simultaneous auditory processing skills of 17 dyslexic children and 17 skilled readers were measured using a dichotic listening task. Results showed that the dyslexic children exhibited difficulties reporting syllabic material when presented simultaneously. As a measure of simultaneous visual processing, visual attention span skills were assessed in the dyslexic children. We presented the dyslexic children with a phonological short-term memory task and a phonemic awareness task to quantify their phonological skills. Visual attention spans correlated positively with individual scores obtained on the dichotic listening task while phonological skills did not correlate with either dichotic scores or visual attention span measures. Moreover, all the dyslexic children with a dichotic listening deficit showed a simultaneous visual processing deficit, and a substantial number of dyslexic children exhibited phonological processing deficits whether or not they exhibited low dichotic listening scores. These findings suggest that processing simultaneous auditory stimuli may be impaired in dyslexic children regardless of phonological processing difficulties and be linked to similar problems in the visual modality.

  16. Conscious visual memory with minimal attention.

    PubMed

    Pinto, Yair; Vandenbroucke, Annelinde R; Otten, Marte; Sligte, Ilja G; Seth, Anil K; Lamme, Victor A F

    2017-02-01

    Is conscious visual perception limited to the locations that a person attends? The remarkable phenomenon of change blindness, which shows that people miss nearly all unattended changes in a visual scene, suggests the answer is yes. However, change blindness is found after visual interference (a mask or a new scene), so that subjects have to rely on working memory (WM), which has limited capacity, to detect the change. Before such interference, however, a much larger capacity store, called fragile memory (FM), which is easily overwritten by newly presented visual information, is present. Whether these different stores depend equally on spatial attention is central to the debate on the role of attention in conscious vision. In 2 experiments, we found that minimizing spatial attention almost entirely erases visual WM, as expected. Critically, FM remains largely intact. Moreover, minimally attended FM responses yield accurate metacognition, suggesting that conscious memory persists with limited spatial attention. Together, our findings help resolve the fundamental issue of how attention affects perception: Both visual consciousness and memory can be supported by only minimal attention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Unilateral pigmentary retinopathy--a review of literature and case presentation.

    PubMed

    Stamate, Alina-Cristina; Burcea, Marian; Zemba, Mihail

    2016-01-01

    To report a rare case of unilateral pigmentary retinopathy and describe the clinical and visual field characteristics of this particular case. We present the case of a 30-year-old male patient with a gradual loss of the visual field in his left eye (LE) over the past 10 years, further gradual, painless loss of his central visual field in the last year, and no similar symptoms in his right eye (RE). His past medical and ocular history were unremarkable. No family history of acquired or inherited disease was identified. Based on the history, clinical findings, and visual field examination, the diagnosis of unilateral pigmentary retinopathy was established. Visual acuity and visual field in the LE were severely affected, while in the RE they were completely normal. In this case, distinct features of pigmentary retinopathy were observed in only one eye, with the fellow eye unaffected. The diagnosis requires a long follow-up period, with visual field and electrophysiological testing to rule out delayed onset of a bilateral form of pigmentary retinopathy.

  18. Visual Associative Learning in Restrained Honey Bees with Intact Antennae

    PubMed Central

    Dobrin, Scott E.; Fahrbach, Susan E.

    2012-01-01

    A restrained honey bee can be trained to extend its proboscis in response to the pairing of an odor with a sucrose reward, a form of olfactory associative learning referred to as the proboscis extension response (PER). Although the ability of flying honey bees to respond to visual cues is well-established, associative visual learning in restrained honey bees has been challenging to demonstrate. Those few groups that have documented vision-based PER have reported that removing the antennae prior to training is a prerequisite for learning. Here we report, for a simple visual learning task, the first successful performance by restrained honey bees with intact antennae. Honey bee foragers were trained on a differential visual association task by pairing the presentation of a blue light with a sucrose reward and leaving the presentation of a green light unrewarded. A negative correlation was found between age of foragers and their performance in the visual PER task. Using the adaptations to the traditional PER task outlined here, future studies can exploit pharmacological and physiological techniques to explore the neural circuit basis of visual learning in the honey bee. PMID:22701575

  19. Feast for the Eyes: An Introduction to Data Visualization.

    PubMed

    Brigham, Tara J

    2016-01-01

    Data visualization is defined as the presentation of data in a graphical or pictorial manner. While data visualization is not a new concept, the ease with which anyone can now create a data-driven chart, image, or visual has encouraged its growth. The increase in freely available data sources and the demand for user-created content on social media have also contributed to data visualization's rising popularity. This column explores what data visualization is and how it is currently being used, and discusses its benefits, potential problems, and uses in libraries. A brief list of visualization guides is included.

  20. Causes of severe visual impairment and blindness in children attending schools for the visually handicapped in the Czech Republic.

    PubMed

    Kocur, I; Kuchynka, P; Rodný, S; Baráková, D; Schwartz, E C

    2001-10-01

    To describe the causes of severe visual impairment and blindness in children in schools for the visually handicapped in the Czech Republic in 1998. Pupils attending all 10 primary schools for the visually handicapped were examined. A modified WHO/PBL eye examination record for children with blindness and low vision was used. 229 children (146 males and 83 females) aged 6-15 years were included in the study: 47 children had severe visual impairment (20.5%) (visual acuity in their better eye less than 6/60), and 159 were blind (69.5%) (visual acuity in their better eye less than 3/60). Anatomically, the most affected parts of the eye were the retina (124, 54.2%), optic nerve (35, 15.3%), whole globe (25, 10.9%), lens (20, 8.7%), and uvea (12, 5.2%). Aetiologically (timing of the insult leading to visual loss), the major cause of visual impairment was retinopathy of prematurity (ROP) (96, 41.9%), followed by abnormalities of unknown timing of insult (97, 42.4%) and hereditary disease (21, 9.2%). In 90 children (40%), additional disabilities were present: mental disability (36, 16%), physical handicap (16, 7%), and/or a combination of both (19, 8%). It was estimated that 127 children (56%) suffered from visual impairment caused by potentially preventable and/or treatable conditions (for example, ROP, cataract, glaucoma). The establishment of a study group for comprehensive evaluation of the causes of visual handicap in children in the Czech Republic, as well as a detailed analysis of current ROP screening practice, was recommended.

  1. Impact of cataract surgery in reducing visual impairment: a review.

    PubMed

    Khandekar, Rajiv; Sudhan, Anand; Jain, B K; Deshpande, Madan; Dole, Kuldeep; Shah, Mahul; Shah, Shreya

    2015-01-01

    The aim was to assess the impact of cataract surgery in reducing visual disability, and the factors influencing it, at three institutes in India. A retrospective chart review was performed in 2013. Four years of data were collected on gender, age, residence, presenting vision in each eye, the eye that underwent surgery, type of surgery, and the amount the patient paid out of pocket for surgery. Visual impairment was categorized as: absolute blindness (no perception of light); blind (<3/60); severe visual impairment (SVI) (<6/60-3/60); moderate visual impairment (6/18-6/60); and normal vision (≥6/12). Statistical analysis was performed to evaluate the association between visual disability and demographics or other possible barriers. The trend of visual impairment over time was also evaluated. We compared the 2011 data to available data on cataract cases from the institutions between 2002 and 2009. There were 108,238 cataract cases (50.6% female) that underwent cataract surgery at the three institutions. In 2011, 71,615 (66.2%) cases underwent surgery. There were 45,336 (41.9%) with presenting vision <3/60, and 75,393 (69.7%) had SVI in the fellow eye. Blindness at presentation for cataract surgery was associated with male patients, Institution 3 (Dristi Netralaya, Dahod), surgeries after 2009, cataract surgeries without intraocular lens implantation, and patients paying <25 US$ for surgery. Predictors of SVI at the time of cataract surgery were male gender, Institution 3 (OM), phaco surgeries, and those opting to pay 250 US$ for cataract surgery. Patients with cataract seek eye care at late stages of visual disability. The goal of improving vision-related quality of life for cataract patients during the early stages of visual impairment, as is common in industrialized countries, appears unattainable in rural India.

  2. Deployment of spatial attention to words in central and peripheral vision.

    PubMed

    Ducrot, Stéphanie; Grainger, Jonathan

    2007-05-01

    Four perceptual identification experiments examined the influence of spatial cues on the recognition of words presented in central vision (with fixation on either the first or last letter of the target word) and in peripheral vision (displaced left or right of a central fixation point). Stimulus location had a strong effect on word identification accuracy in both central and peripheral vision, showing a strong right visual field superiority that did not depend on eccentricity. Valid spatial cues improved word identification for peripherally presented targets but were largely ineffective for centrally presented targets. Effects of spatial cuing interacted with visual field effects in Experiment 1, with valid cues reducing the right visual field superiority for peripherally located targets, but this interaction was shown to depend on the type of neutral cue. These results provide further support for the role of attentional factors in visual field asymmetries obtained with targets in peripheral vision but not with centrally presented targets.

  3. The processing of auditory and visual recognition of self-stimuli.

    PubMed

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

    This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice is to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  4. Intuitive presentation of clinical forensic data using anonymous and person-specific 3D reference manikins.

    PubMed

    Urschler, Martin; Höller, Johannes; Bornik, Alexander; Paul, Tobias; Giretzlehner, Michael; Bischof, Horst; Yen, Kathrin; Scheurer, Eva

    2014-08-01

    The increasing use of CT/MR devices in forensic analysis motivates the need to present forensic findings from different sources in an intuitive reference visualization, with the aim of combining 3D volumetric images along with digital photographs of external findings into a 3D computer graphics model. This model allows a comprehensive presentation of forensic findings in court and enables comparative evaluation studies correlating data sources. The goal of this work was to investigate different methods of generating anonymous and patient-specific 3D models which may be used as reference visualizations. The issue of registering 3D volumetric as well as 2D photographic data to such 3D models is addressed to provide an intuitive context for injury documentation from arbitrary modalities. We present an image processing and visualization workflow, discuss its major parts, compare the different investigated reference models, and show a number of case studies that underline the suitability of the proposed workflow for presenting forensically relevant information in 3D visualizations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Light curve of the optical counterpart of 2A0311-227

    NASA Technical Reports Server (NTRS)

    Williams, G.; Hiltner, W. A.

    1980-01-01

    Visual and blue light curves are presented for the optical counterpart of the X-ray source 2A0311-227. This system, which is the newest member of the AM Herculis class of binaries, has an orbital period of 81 minutes which also modulates the visual light curve. A Fourier analysis of the data has revealed the presence of a 6-minute oscillation, at least in the visual light curve. Whether or not it is also present in the blue light curve is unclear.

  6. Adding a visualization feature to web search engines: it's time.

    PubMed

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  7. Software for visualization, analysis, and manipulation of laser scan images

    NASA Astrophysics Data System (ADS)

    Burnsides, Dennis B.

    1997-03-01

    The recent introduction of laser surface scanning to scientific applications presents a challenge to computer scientists and engineers. Full utilization of this two-dimensional (2-D) and three-dimensional (3-D) data requires advances in techniques and methods for data processing and visualization. This paper explores the development of software to support the visualization, analysis, and manipulation of laser scan images. Specific examples presented are from ongoing efforts at the Air Force Computerized Anthropometric Research and Design (CARD) Laboratory.

  8. Impact of Learning Styles on Air Force Technical Training: Multiple and Linear Imagery in the Presentation of a Comparative Visual Location Task to Visual and Haptic Subjects. Interim Report for Period January 1977-January 1978.

    ERIC Educational Resources Information Center

    Ausburn, Floyd B.

    A U.S. Air Force study was designed to develop instruction based on the supplantation theory, in which tasks are performed (supplanted) for individuals who are unable to perform them due to their cognitive style. The study examined the effects of linear and multiple imagery in presenting a task requiring visual comparison and location to…

  9. [Epidemiological survey of visual impairment in Funing County, Jiangsu].

    PubMed

    Yang, M; Zhang, J F; Zhu, R R; Kang, L H; Qin, B; Guan, H J

    2017-07-11

    Objective: To investigate the prevalence of visual impairment and associated factors among people aged 50 years and above in Funing County, Jiangsu Province. Methods: Cross-sectional study. Random cluster sampling was used to select individuals aged ≥50 years in 30 clusters; 5,947 individuals received visual acuity testing and eye examination. Stata 13.0 software was used to analyze the data. Multivariate logistic regression was used to detect possible factors associated with visual impairment, such as age, gender, and education. Statistical significance was defined as P<0.05. Results: A total of 6,145 persons aged 50 years and above were enumerated, and 5,947 (96.8%) were examined. Based on the World Health Organization (WHO) visual impairment classification and presenting visual acuity, 138 persons were diagnosed as blind and 1,405 as having low vision. The prevalence of blindness and low vision was 2.32% and 23.63%, respectively, and the prevalence of visual impairment was 25.95%. Based on the WHO classification and best-corrected visual acuity, 92 persons were diagnosed as blind and 383 as having low vision; the prevalence of blindness and low vision was 1.55% and 6.44%, respectively, and the prevalence of visual impairment was 7.99%. For both presenting and best-corrected visual acuity, the prevalence of blindness and low vision was higher in older people, females, and less educated persons. Cataract (46.63%) was the leading cause of blindness. Uncorrected refractive error (36.51%) was also a main cause of visual impairment. Conclusion: The prevalence of visual impairment is higher in older people, females, and less educated persons in Funing County, Jiangsu Province. Cataract is still the leading cause of visual impairment. (Chin J Ophthalmol, 2017, 53: 502-508).
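The prevalence figures above follow directly from the examination counts. As a quick arithmetic check (a minimal sketch using the counts reported in the abstract, not code from the study), the presenting-acuity rates can be recomputed:

```python
# Recompute the reported prevalence figures from the raw counts
enumerated = 6145
examined = 5947
blind, low_vision = 138, 1405            # presenting visual acuity

response_rate = 100 * examined / enumerated
prev_blind = 100 * blind / examined
prev_low = 100 * low_vision / examined
prev_vi = 100 * (blind + low_vision) / examined

print(round(response_rate, 1))  # 96.8
print(round(prev_blind, 2))     # 2.32
print(round(prev_low, 2))       # 23.63
print(round(prev_vi, 2))        # 25.95
```

The same computation with the best-corrected counts (92 and 383) reproduces the 1.55%, 6.44%, and 7.99% figures.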

  10. Interference with olfactory memory by visual and verbal tasks.

    PubMed

    Annett, J M; Cook, N M; Leslie, J C

    1995-06-01

    It has been claimed that olfactory memory is distinct from memory in other modalities. This study investigated the effectiveness of visual and verbal tasks in interfering with olfactory memory, and included methodological changes from other recent studies. Subjects were allocated to one of four interference conditions (no interference task; visual task; verbal task; visual-plus-verbal task) and presented with 15 target odours. Either recognition of the odours or free recall of the odour names was tested on one occasion, either within 15 minutes of presentation or one week later. Both recognition and recall performance showed interference effects from the visual and verbal tasks, but there was no effect of time of testing. While the results may be accommodated within a dual-coding framework, further work is indicated to resolve theoretical issues relating to task complexity.

  11. Perception, Cognition, and Effectiveness of Visualizations with Applications in Science and Engineering

    NASA Astrophysics Data System (ADS)

    Borkin, Michelle A.

    Visualization is a powerful tool for data exploration and analysis. With data ever-increasing in quantity and becoming integrated into our daily lives, having effective visualizations is necessary. But how does one design an effective visualization? To answer this question we need to understand how humans perceive, process, and understand visualizations. Through visualization evaluation studies we can gain deeper insight into the basic perception and cognition theory of visualizations, both through domain-specific case studies and through generalized laboratory experiments. This dissertation presents the results of four evaluation studies, each of which contributes new knowledge to the theory of perception and cognition of visualizations. The results of these studies include a deeper, clearer understanding of how color, data representation dimensionality, spatial layout, and visual complexity affect a visualization's effectiveness, as well as how visualization types and visual attributes affect the memorability of a visualization. We first present the results of two domain-specific case study evaluations. The first study is in the field of biomedicine, in which we developed a new heart disease diagnostic tool and conducted a study to evaluate the effectiveness of 2D versus 3D data representations as well as color maps. In the second study, we developed a new visualization tool for filesystem provenance data with applications in computer science and the sciences more broadly. We additionally developed a new time-based hierarchical node grouping method. We then conducted a study to evaluate the effectiveness of the new tool, with its radial layout, against the conventional node-link diagram, and of the new node grouping method. Finally, we discuss the results of two generalized studies designed to understand what makes a visualization memorable. In the first evaluation we focused on visualization memorability and conducted an online study using Amazon's Mechanical Turk with hundreds of users and thousands of visualizations. For the second evaluation we designed an eye-tracking laboratory study to gain insight into precisely which elements of a visualization contribute to memorability as well as visualization recognition and recall.

  12. Visual imagery without visual perception: lessons from blind subjects

    NASA Astrophysics Data System (ADS)

    Bértolo, Helder

    2014-08-01

    The question regarding the relation between visual imagery and visual perception remains open. Many studies have tried to determine whether the two processes share the same mechanisms or are independent, using different neural substrates. Most research has been directed towards the question of whether activation of primary visual areas is necessary during imagery. Here we review some of the work providing evidence for both claims. Studying visual imagery in blind subjects can be a way of answering some of those questions, namely whether it is possible to have visual imagery without visual perception. We present results from our group's work on visual activation in dreams and its relation to the EEG's spectral components, showing that congenitally blind subjects have visual content in their dreams and are able to draw it; furthermore, their Visual Activation Index is negatively correlated with EEG alpha power. This study supports the hypothesis that it is possible to have visual imagery without visual experience.

  13. Interactive Visualization of Dependencies

    ERIC Educational Resources Information Center

    Moreno, Camilo Arango; Bischof, Walter F.; Hoover, H. James

    2012-01-01

    We present an interactive tool for browsing course requisites as a case study of dependency visualization. This tool uses multiple interactive visualizations to allow the user to explore the dependencies between courses. A usability study revealed that the proposed browser provides significant advantages over traditional methods, in terms of…

  14. Visualizing Clonal Evolution in Cancer.

    PubMed

    Krzywinski, Martin

    2016-06-02

    Rapid and inexpensive single-cell sequencing is driving new visualizations of cancer instability and evolution. Krzywinski discusses how to present clone evolution plots in order to visualize temporal, phylogenetic, and spatial aspects of a tumor in a single static image. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Empirical Analysis of the Subjective Impressions and Objective Measures of Domain Scientists’ Analytical Judgment Using Visualizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasgupta, Aritra; Burrows, Susannah M.; Han, Kyungsik

    Scientists working in a particular domain often adhere to conventional data analysis and presentation methods, and this leads to familiarity with these methods over time. But does high familiarity always lead to better analytical judgment? This question is especially relevant when visualizations are used in scientific tasks, as there can be discrepancies between visualization best practices and domain conventions. However, there is little empirical evidence of the relationships between scientists' subjective impressions about familiar and unfamiliar visualizations and objective measures of their effect on scientific judgment. To address this gap and to study these factors, we focus on the climate science domain, specifically on visualizations used for comparison of model performance. We present a comprehensive user study with 47 climate scientists in which we explored the following factors: i) relationships between scientists' familiarity, their perceived levels of comfort, confidence, and accuracy, and objective measures of accuracy, and ii) relationships among domain experience, visualization familiarity, and post-study preference.

  16. Advanced pigment dispersion glaucoma secondary to phakic intraocular collamer lens implant.

    PubMed

    Ye, Clara; Patel, Cajal K; Momont, Anna C; Liu, Yao

    2018-06-01

    We report a case of pigment dispersion glaucoma secondary to an uncomplicated phakic intraocular collamer lens (ICL) (Visian ICL™, Staar Inc., Monrovia, CA) implant that resulted in advanced visual field loss. A 50-year-old man presented for routine follow-up 8 years after bilateral phakic ICL placement. He was incidentally found to have a decline in visual acuity from an anterior subcapsular cataract, and elevated intraocular pressure (IOP) in the left eye. There were signs of pigment dispersion and no evidence of angle closure. Diffuse optic nerve thinning was consistent with advanced glaucomatous visual field defects. Pigment dispersion was also present in the patient's right eye, but without elevated IOP or visual field defects. The patient was treated with topical glaucoma medications, and the phakic ICL in the left eye was removed concurrently with cataract surgery to prevent further visual field loss. Pigment dispersion glaucoma is a serious adverse outcome after phakic ICL implantation, and regular postoperative monitoring may prevent advanced visual field loss.

  17. Delayed visual attention caused by high myopic refractive error.

    PubMed

    Winges, Kimberly M; Zarpellon, Ursula; Hou, Chuan; Good, William V

    2005-06-01

    Delayed visual maturation (DVM) is usually a retrospective diagnosis given to infants who are born with no or poor visually-directed behavior, despite normal acuity on objective testing, but who recover months later. This condition can be organized into several types based on associated neurodevelopmental or ocular findings, but the etiology of DVM is probably complex and involves multiple possible origins. Here we report two infants who presented with delayed visual maturation (attention). They were visually unresponsive at birth but were later found to have high myopic errors. Patient 1 had -4 D right eye, -5 D left eye. Patient 2 had -9 D o.u. Upon spectacle correction at 5 and 4 months, respectively, both infants immediately displayed visually-directed behavior, suggesting that a high refractive error was the cause of inattention in these patients. These findings could add to knowledge surrounding DVM and the diagnosis of apparently blind infants. Findings presented here also indicate the importance of prompt refractive error measurement in such cases.

  18. Manual control of yaw motion with combined visual and vestibular cues

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.; Young, L. R.

    1977-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation was modelled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A correction to the frequency responses is provided by a separate measurement of manual control performance in an analogous visual pursuit nulling task. The resulting dual-input describing function for motion perception dependence on combined cue presentation supports the complementary model, in which vestibular cues dominate sensation at frequencies above 0.05 Hz. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
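The complementary parallel-channel model described above can be sketched numerically. In this sketch (our illustration, with a first-order time constant chosen as an assumption so that the crossover lands near the 0.05 Hz quoted in the abstract), a low-pass visual channel and a high-pass vestibular channel sum exactly to unity:

```python
import numpy as np

f = np.logspace(-3, 1, 400)              # frequency axis in Hz
s = 1j * 2 * np.pi * f                   # Laplace variable on the imaginary axis
tau = 1 / (2 * np.pi * 0.05)             # time constant giving a ~0.05 Hz crossover

H_visual = 1 / (tau * s + 1)             # low-pass: visual field motion dominates slow rotation
H_vestibular = tau * s / (tau * s + 1)   # high-pass: vestibular cues dominate fast rotation

# The channels are complementary by construction: their sum is identically 1
print(np.allclose(H_visual + H_vestibular, 1.0))  # True
```

Above the crossover, `|H_vestibular|` exceeds `|H_visual|`, matching the finding that vestibular cues dominate sensation at frequencies above 0.05 Hz.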

  19. Unconscious cues bias first saccades in a free-saccade task.

    PubMed

    Huang, Yu-Feng; Tan, Edlyn Gui Fang; Soon, Chun Siong; Hsieh, Po-Jang

    2014-10-01

    Visual-spatial attention can be biased towards salient visual information without visual awareness. It is unclear, however, whether such bias can further influence free choices, such as saccades in a free-viewing task. In our experiment, we presented visual cues below the awareness threshold immediately before people made free saccades. Our results showed that masked cues could influence the direction and latency of the first free saccade, suggesting that salient visual information can unconsciously influence free actions. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Visual-area coding technique (VACT): optical parallel implementation of fuzzy logic and its visualization with the digital-halftoning process

    NASA Astrophysics Data System (ADS)

    Konishi, Tsuyoshi; Tanida, Jun; Ichioka, Yoshiki

    1995-06-01

    A novel technique, the visual-area coding technique (VACT), for the optical implementation of fuzzy logic with the capability of visualizing the results is presented. The technique is based on the microfont method and can be considered an instance of digitized analog optical computing. Huge amounts of data can be processed with fuzzy logic using the VACT. In addition, real-time visualization of the processed result can be accomplished.
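The abstract does not detail the digital-halftoning process it uses. As a generic illustration of the underlying area-coding idea (our choice of method, not necessarily the one used in the VACT), ordered dithering with a Bayer matrix converts a continuous gray level into a binary pattern whose local density of "on" pixels encodes the analog value:

```python
# 4x4 Bayer threshold matrix for ordered dithering
BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def halftone(gray):
    """Map a grayscale image (values in [0, 1]) to a binary image whose
    local density of 'on' pixels approximates the input intensity."""
    h, w = len(gray), len(gray[0])
    return [[1 if gray[y][x] > (BAYER4[y % 4][x % 4] + 0.5) / 16 else 0
             for x in range(w)] for y in range(h)]

# A uniform 50% gray patch dithers to exactly half the pixels on
patch = halftone([[0.5] * 4 for _ in range(4)])
print(sum(map(sum, patch)))  # 8 of 16 pixels on
```

In this sense the halftoned area fraction plays the role of the visualized analog (fuzzy) value.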

  1. Visual loss in Takayasu Arteritis - Look Beyond the Eye.

    PubMed

    Peter, Jayanthi; Joseph, George; Mathew, Vivek; Peter, John Victor

    2014-08-01

    Patients with Takayasu arteritis often present with reduced vision, related either to the disease per se or to complications of therapy. We report a patient with Takayasu arteritis who developed acute-onset bilateral visual loss 6 weeks following percutaneous revascularization of occluded aortic arch branches. No ocular cause for the visual loss was evident; the cause of visual loss in this patient was extraocular. Ocular and extraocular causes of visual loss in Takayasu arteritis are discussed.

  2. Data Cube Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.; Gárate, Matías

    2017-06-01

    With increasing data acquisition rates in observational and computational astrophysics, new tools are needed to study and visualize data. We present a methodology for rendering 3D data cubes using the open-source 3D software Blender. By importing processed observations and numerical simulations through the Voxel Data format, we are able to use the Blender interface and Python API to create high-resolution animated visualizations. We review the methods for data import, animation, and camera movement, and present examples of this methodology. The 3D rendering of data cubes gives scientists the ability to create appealing displays that can be used both for scientific presentations and for public outreach.
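As a sketch of the data-import step, the voxel-data file Blender reads (commonly a `.bvox` file) is simple enough to write directly: a header of four little-endian 32-bit integers (x, y, z, frames) followed by float32 densities with x varying fastest. This layout is our assumption based on community documentation of the format, not a detail given in the paper:

```python
import struct

def write_bvox(path, grid, frames=1):
    """Write a [z][y][x] nested list of floats in [0, 1] as a .bvox voxel file."""
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    with open(path, "wb") as fh:
        fh.write(struct.pack("<4i", nx, ny, nz, frames))  # header: dimensions + frame count
        for plane in grid:
            for row in plane:
                fh.write(struct.pack(f"<{nx}f", *row))    # densities, x varying fastest

# Example: an 8x8x8 cube with a bright central blob
n = 8
cube = [[[max(0.0, 1 - ((x - 3.5) ** 2 + (y - 3.5) ** 2 + (z - 3.5) ** 2) / 16)
          for x in range(n)] for y in range(n)] for z in range(n)]
write_bvox("blob.bvox", cube)
```

The resulting file can then be loaded as a voxel-data texture and rendered or animated through the Blender interface.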

  3. Multidimensional scaling for evolutionary algorithms--visualization of the path through search space and solution space using Sammon mapping.

    PubMed

    Pohlheim, Hartmut

    2006-01-01

    Multidimensional scaling is presented as a technique for displaying high-dimensional data with standard visualization techniques. The technique used is often known as Sammon mapping. We explain the mathematical foundations of multidimensional scaling and its robust calculation, and demonstrate the use of this technique in the area of evolutionary algorithms. First, we present the visualization of the path through the search space of the best individuals during an optimization run. We then apply multidimensional scaling to the comparison of multiple runs with respect to the variables of individuals and multi-criteria objective values (the path through the solution space).
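The Sammon stress and its minimization can be sketched in a few lines. The version below is a generic gradient descent with backtracking, initialized from a PCA projection (our simplification; Sammon's original method uses a pseudo-Newton update with a "magic factor", and this is not the paper's implementation):

```python
import numpy as np

def pairwise(X):
    """All pairwise Euclidean distances, floored to avoid division by zero."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return np.maximum(D, 1e-12)

def sammon(X, dim=2, n_iter=100, lr=1.0):
    D = pairwise(X)                      # input-space distances d*_ij
    iu = np.triu_indices(len(X), 1)
    c = D[iu].sum()                      # normalizing constant in Sammon stress

    def stress(Y):
        d = pairwise(Y)
        return (((D - d) ** 2 / D)[iu]).sum() / c, d

    Xc = X - X.mean(0)                   # PCA initialization of the projection
    Y = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][:dim].T
    s, d = stress(Y)
    for _ in range(n_iter):
        W = (D - d) / (D * d)
        np.fill_diagonal(W, 0.0)
        # gradient of the Sammon stress with respect to the projected points
        grad = (-2.0 / c) * (W[:, :, None] * (Y[:, None, :] - Y[None, :, :])).sum(1)
        while lr > 1e-12:                # backtracking: only accept descent steps
            s_new, d_new = stress(Y - lr * grad)
            if s_new < s:
                Y, s, d, lr = Y - lr * grad, s_new, d_new, lr * 1.1
                break
            lr *= 0.5
    return Y, s
```

A usage example: projecting a 5-dimensional point cloud (e.g., a population of individuals from an optimization run) to 2D gives coordinates that can be plotted with any standard scatter-plot tool, with the returned stress quantifying the distortion.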

  4. Visual Analytics and Storytelling through Video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Pak C.; Perrine, Kenneth A.; Mackey, Patrick S.

    2005-10-31

    This paper supplements a video clip submitted to the Video Track of the IEEE Symposium on Information Visualization 2005. The original video submission applies a two-way storytelling approach to demonstrate the visual analytics capabilities of a new visualization technique. The paper presents our video production philosophy, describes the plot of the video, explains the rationale behind the plot, and finally shares our production experiences with our readers.

  5. Scientific Visualization: The Modern Oscilloscope for "Seeing the Unseeable" (LBNL Summer Lecture Series)

    ScienceCinema

    Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division and Scientific Visualization Group

    2018-05-07

    Summer Lecture Series 2008: Scientific visualization transforms abstract data into readily comprehensible images, provides a vehicle for "seeing the unseeable," and plays a central role in both the experimental and computational sciences. Wes Bethel, who heads the Scientific Visualization Group in the Computational Research Division, presents an overview of visualization and computer graphics, current research challenges, and future directions for the field.

  6. Visible and Viable: The Role of Images in Instruction and Communication. Readings from the Annual Conference of the International Visual Literacy Association (18th, Commerce, Texas, 1987).

    ERIC Educational Resources Information Center

    Braden, Roberts A., Ed.; And Others

    Presentations at the International Visual Literacy Association conference are grouped under five topics, a prologue, and an epilogue: (1) Prologue--"Writing About Visual Literacy" (Roberts A. Braden); (2) Visible Language--four papers concerning picture books, the Macintosh and Laserwriter, the design of library signs, and visual literacy and…

  7. An Issue of Learning: The Effect of Visual Split Attention in Classes for Deaf and Hard of Hearing Students

    ERIC Educational Resources Information Center

    Mather, Susan M.; Clark, M. Diane

    2012-01-01

    One of the ongoing challenges teachers of students who are deaf or hard of hearing face is managing the visual split attention implicit in multimedia learning. When a teacher presents various types of visual information at the same time, visual learners have no choice but to divide their attention among those materials and the teacher and…

  8. Visual Literacy in the Digital Age: Selected Readings from the Annual Conference of the International Visual Literacy Association (25th, Rochester, New York, October 13-17, 1993).

    ERIC Educational Resources Information Center

    Beauchamp, Darrel G.; And Others

    This document contains selected papers from the 25th annual conference of the International Visual Literacy Association (IVLA). Topics addressed in the papers include the following: visual literacy; graphic information in research and education; evaluation criteria for instructional media; understanding symbols in business presentations;…

  9. In the Mind's Eye: Visual Thinkers, Gifted People with Dyslexia and Other Learning Difficulties, Computer Images and the Ironies of Creativity. Updated Edition.

    ERIC Educational Resources Information Center

    West, Thomas G.

    This book presents research on how some innovations in computer visualization are making work and education more favorable to visual thinking. The book exposes many popular myths about conventional intelligence through an examination of the role of visual-spatial strengths and verbal weaknesses in the lives of 11 gifted individuals, including…

  10. Sex differences in verbal and visual-spatial tasks under different hemispheric visual-field presentation conditions.

    PubMed

    Boyle, Gregory J; Neumann, David L; Furedy, John J; Westbury, H Rae

    2010-04-01

    This paper reports sex differences in cognitive task performance that emerged when 39 Australian university undergraduates (19 men, 20 women) were asked to solve verbal (lexical) and visual-spatial cognitive matching tasks that varied in difficulty and visual field of presentation. Sex interacted significantly with task type, task difficulty, laterality, and changes in performance across trials. The results reveal that sex, a significant individual-differences variable, does not always emerge as a significant main effect but may instead appear through significant interactions with other experimentally manipulated variables. Our results show that sex differences must be taken into account when conducting experiments on human cognitive-task performance.

  11. Combined mirror visual and auditory feedback therapy for upper limb phantom pain: a case report

    PubMed Central

    2011-01-01

    Introduction: Phantom limb sensation and phantom limb pain are very common after amputation. In recent years, accumulating data have implicated 'mirror visual feedback' or 'mirror therapy' as helpful in the treatment of phantom limb sensation and phantom limb pain. Case presentation: We present the case of a 24-year-old Caucasian man, a left upper limb amputee, treated with mirror visual feedback combined with auditory feedback, with improved pain relief. Conclusion: This case suggests that auditory feedback might enhance the effectiveness of mirror visual feedback and serve as a valuable addition to the complex multi-sensory processing of body perception in patients who are amputees. PMID:21272334

  12. A real-time articulatory visual feedback approach with target presentation for second language pronunciation learning.

    PubMed

    Suemitsu, Atsuo; Dang, Jianwu; Ito, Takayuki; Tiede, Mark

    2015-10-01

    Articulatory information can support learning or remediating pronunciation of a second language (L2). This paper describes an electromagnetic articulometer-based visual-feedback approach in which an articulatory target is presented in real time to facilitate L2 pronunciation learning. The approach trains learners to adjust their articulatory positions to match targets for an L2 vowel, estimated from productions of vowels that overlap between L1 and L2. Training of Japanese learners on the American English vowel /æ/ that included visual training improved its pronunciation regardless of whether audio training was also included. Articulatory visual feedback is thus shown to be an effective method for facilitating L2 pronunciation learning.

  13. Using Visualization in Cockpit Decision Support Systems

    NASA Technical Reports Server (NTRS)

    Aragon, Cecilia R.

    2005-01-01

    In order to safely operate their aircraft, pilots must make rapid decisions based on integrating and processing large amounts of heterogeneous information. Visual displays are often the most efficient method of presenting safety-critical data to pilots in real time. However, care must be taken to ensure the pilot is provided with the appropriate amount of information to make effective decisions and not become cognitively overloaded. The results of two usability studies of a prototype airflow hazard visualization cockpit decision support system are summarized. The studies demonstrate that such a system significantly improves the performance of helicopter pilots landing under turbulent conditions. Based on these results, design principles and implications for cockpit decision support systems using visualization are presented.

  14. Changes in connectivity of the posterior default network node during visual processing in mild cognitive impairment: staged decline between normal aging and Alzheimer's disease.

    PubMed

    Krajcovicova, Lenka; Barton, Marek; Elfmarkova-Nemcova, Nela; Mikl, Michal; Marecek, Radek; Rektorova, Irena

    2017-12-01

    Visual processing difficulties are often present in Alzheimer's disease (AD), even in its pre-dementia phase (i.e. in mild cognitive impairment, MCI). The default mode network (DMN) modulates brain connectivity depending on the specific cognitive demand, including visual processes. The aim of the present study was to analyze specific changes in connectivity of the posterior DMN node (i.e. the posterior cingulate cortex and precuneus, PCC/P) associated with visual processing in 17 MCI patients and 15 AD patients as compared to 18 healthy controls (HC), using functional magnetic resonance imaging. We used psychophysiological interaction (PPI) analysis to detect specific alterations in PCC connectivity associated with visual processing while controlling for brain atrophy. In the HC group, we observed physiological changes in connectivity between the PCC/P and ventral visual stream areas during the visual task, reflecting the successful involvement of these regions in visual processing. In the MCI group, these connectivity changes were disturbed and remained significant only with the anterior precuneus. In the between-group comparison, we observed significant PPI effects in the right superior temporal gyrus in both MCI and AD as compared to HC. This change in connectivity may reflect an ineffective "compensatory" mechanism present in the early pre-dementia stages of AD, or abnormal modulation of brain connectivity due to the disease pathology. With disease progression, these changes become more evident but less efficient in terms of compensation. This approach separated MCI patients from HC with 77% sensitivity and 89% specificity.

  15. Bilingual Control: Sequential Memory in Language Switching

    ERIC Educational Resources Information Center

    Declerck, Mathieu; Philipp, Andrea M.; Koch, Iring

    2013-01-01

    To investigate bilingual language control, prior language switching studies presented visual objects, which had to be named in different languages, typically indicated by a visual cue. The present study examined language switching of predictable responses by introducing a novel sequence-based language switching paradigm. In 4 experiments,…

  16. Attention Gating in Short-Term Visual Memory.

    ERIC Educational Resources Information Center

    Reeves, Adam; Sperling, George

    1986-01-01

    An experiment is conducted showing that an attention shift to a stream of numerals presented in rapid serial visual presentation mode produces not a total loss, but a systematic distortion of order. An attention gating model (AGM) is developed from a more general attention model. (Author/LMO)

  17. Visual Pattern Analysis in Histopathology Images Using Bag of Features

    NASA Astrophysics Data System (ADS)

    Cruz-Roa, Angel; Caicedo, Juan C.; González, Fabio A.

    This paper presents a framework for analysing visual patterns in a collection of medical images in a two-stage procedure. First, a set of representative visual patterns from the image collection is obtained by constructing a visual-word dictionary under a bag-of-features approach. Second, the relationships between visual patterns and semantic concepts in the image collection are analysed: the most important visual patterns for each semantic concept are identified using correlation analysis, and a matrix visualization of the structure and organization of the image collection is generated using cluster analysis. The experimental evaluation was conducted on a histopathology image collection; the results showed clear relationships between visual patterns and semantic concepts that are, moreover, easy to interpret and understand.
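    The dictionary-construction stage of a bag-of-features pipeline can be sketched as follows. This is a generic illustration, assuming local descriptors have already been extracted from each image; the plain k-means routine and L1 histogram normalization are common choices, not the authors' exact pipeline.

    ```python
    import numpy as np

    def kmeans(desc, k, n_iter=20, seed=0):
        """Plain k-means over local descriptors; the learned centroids act
        as the 'visual words' of the dictionary."""
        rng = np.random.default_rng(seed)
        centers = desc[rng.choice(len(desc), k, replace=False)]
        for _ in range(n_iter):
            # assign each descriptor to its nearest visual word
            labels = np.argmin(((desc[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(labels == j):          # keep old center if cluster empties
                    centers[j] = desc[labels == j].mean(0)
        return centers

    def bof_histogram(desc, centers):
        """Represent one image as a normalised histogram of visual-word counts."""
        labels = np.argmin(((desc[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        hist = np.bincount(labels, minlength=len(centers)).astype(float)
        return hist / hist.sum()
    ```

    Each image thus becomes a fixed-length vector regardless of how many local patches it contains, which is what makes the subsequent correlation and cluster analyses possible.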

  18. Cognitive and psychological science insights to improve climate change data visualization

    NASA Astrophysics Data System (ADS)

    Harold, Jordan; Lorenzoni, Irene; Shipley, Thomas F.; Coventry, Kenny R.

    2016-12-01

    Visualization of climate data plays an integral role in the communication of climate change findings to both expert and non-expert audiences. The cognitive and psychological sciences can provide valuable insights into how to improve visualization of climate data based on knowledge of how the human brain processes visual and linguistic information. We review four key research areas to demonstrate their potential to make data more accessible to diverse audiences: directing visual attention, visual complexity, making inferences from visuals, and the mapping between visuals and language. We present evidence-informed guidelines to help climate scientists increase the accessibility of graphics to non-experts, and illustrate how the guidelines can work in practice in the context of Intergovernmental Panel on Climate Change graphics.

  19. Research on the framework and key technologies of panoramic visualization for smart distribution network

    NASA Astrophysics Data System (ADS)

    Du, Jian; Sheng, Wanxing; Lin, Tao; Lv, Guangxian

    2018-05-01

    The smart distribution network has made tremendous progress in recent years, making business visualization ever more significant and indispensable. Based on a summary of traditional visualization technologies and the demands of the smart distribution network, this paper proposes a panoramic visualization application. The overall, integrated, and service architectures of the application are first presented. The architecture design and main functions of the panoramic visualization system are then elaborated in depth, and the key technologies involved are discussed briefly. Finally, two typical visualization scenarios in the smart distribution network, risk warning and fault self-healing, demonstrate that the panoramic visualization application is valuable for the operation and maintenance of the distribution network.

  20. Rapid Presentation of Emotional Expressions Reveals New Emotional Impairments in Tourette’s Syndrome

    PubMed Central

    Mermillod, Martial; Devaux, Damien; Derost, Philippe; Rieu, Isabelle; Chambres, Patrick; Auxiette, Catherine; Legrand, Guillaume; Galland, Fabienne; Dalens, Hélène; Coulangeon, Louise Marie; Broussolle, Emmanuel; Durif, Franck; Jalenques, Isabelle

    2013-01-01

    Objective: Based on a variety of empirical evidence obtained within the theoretical framework of embodiment theory, we considered it likely that motor disorders in Tourette’s syndrome (TS) would have emotional consequences for TS patients. However, previous research using emotional facial categorization tasks suggests that these consequences are limited to TS patients with obsessive-compulsive behaviors (OCB). Method: These studies used long stimulus presentations, which allowed the participants to categorize the different emotional facial expressions (EFEs) on the basis of a perceptual analysis that might potentially hide a lack of emotional feeling for certain emotions. In order to reduce this perceptual bias, we used a rapid visual presentation procedure. Results: Using this new experimental method, we revealed different and surprising impairments on several EFEs in TS patients compared to matched healthy control participants. Moreover, a spatial frequency analysis of the visual signal processed by the patients suggests that these impairments may be located at a cortical level. Conclusion: The current study indicates that the rapid visual presentation paradigm makes it possible to identify various potential emotional disorders that were not revealed by the standard visual presentation procedures previously reported in the literature. Moreover, the spatial frequency analysis performed in our study suggests that the emotional deficit in TS might lie at the level of temporal cortical areas dedicated to the processing of high-spatial-frequency (HSF) visual information. PMID:23630481

  1. Visual context modulates potentiation of grasp types during semantic object categorization.

    PubMed

    Kalénine, Solène; Shapiro, Allison D; Flumini, Andrea; Borghi, Anna M; Buxbaum, Laurel J

    2014-06-01

    Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use-compatible, as compared with move-compatible, contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions.

  2. Visual context modulates potentiation of grasp types during semantic object categorization

    PubMed Central

    Kalénine, Solène; Shapiro, Allison D.; Flumini, Andrea; Borghi, Anna M.; Buxbaum, Laurel J.

    2013-01-01

    Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it, but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use- compared to move-compatible contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions. PMID:24186270

  3. Processing of threat-related information outside the focus of visual attention.

    PubMed

    Calvo, Manuel G; Castillo, M Dolores

    2005-05-01

    This study investigates whether threat-related words are especially likely to be perceived in unattended locations of the visual field. Threat-related, positive, and neutral words were presented at fixation as probes in a lexical decision task. Each probe word was preceded by two simultaneous prime words (one foveal, i.e., at fixation; one parafoveal, i.e., 2.2 deg. of visual angle from fixation), presented for 150 ms; one of the primes was either identical or unrelated to the probe. Results showed significant facilitation in lexical response times only for probe threat words primed parafoveally by an identical word presented in the right visual field. We conclude that threat-related words have privileged access to processing outside the focus of attention. This reveals a cognitive bias toward preferential, parallel processing of information that is important for adaptation.

  4. Action Video Games Improve Direction Discrimination of Parafoveal Translational Global Motion but Not Reaction Times.

    PubMed

    Pavan, Andrea; Boyce, Matthew; Ghin, Filippo

    2016-10-01

    Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in the distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field rather than response speed. © The Author(s) 2016.

  5. Object formation in visual working memory: Evidence from object-based attention.

    PubMed

    Zhou, Jifan; Zhang, Haihang; Ding, Xiaowei; Shui, Rende; Shen, Mowei

    2016-09-01

    We report on how visual working memory (VWM) forms intact perceptual representations of visual objects from sub-object elements. Specifically, when objects were divided into fragments and sequentially encoded into VWM, the fragments were involuntarily integrated into objects in VWM, as evidenced by the occurrence of both positive and negative object-based attention effects: In Experiment 1, when subjects' attention was cued to a location occupied by the VWM object, a target presented at the location of that object was perceived as occurring earlier than one presented at the location of a different object. In Experiment 2, responses to a target were significantly slower when a distractor was presented at the same location as the cued object. These results suggest that object fragments can be integrated into objects within VWM in a manner similar to that of visual perception. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Selective attention modulates visual and haptic repetition priming: effects in aging and Alzheimer's disease.

    PubMed

    Ballesteros, Soledad; Reales, José M; Mayas, Julia; Heller, Morton A

    2008-08-01

    In two experiments, we examined the effect of selective attention at encoding on repetition priming in normal aging and Alzheimer's disease (AD) patients for objects presented visually (experiment 1) or haptically (experiment 2). We used a repetition priming paradigm combined with a selective attention procedure at encoding. Reliable priming was found for both young adults and healthy older participants for visually presented pictures (experiment 1) as well as for haptically presented objects (experiment 2). However, this was only found for attended and not for unattended stimuli. The results suggest that independently of the perceptual modality, repetition priming requires attention at encoding and that perceptual facilitation is maintained in normal aging. However, AD patients did not show priming for attended stimuli, or for unattended visual or haptic objects. These findings suggest an early deficit of selective attention in AD. Results are discussed from a cognitive neuroscience approach.

  7. Student Visual Communication of Evolution

    ERIC Educational Resources Information Center

    Oliveira, Alandeom W.; Cook, Kristin

    2017-01-01

    Despite growing recognition of the importance of visual representations to science education, previous research has given attention mostly to verbal modalities of evolution instruction. Visual aspects of classroom learning of evolution are yet to be systematically examined by science educators. The present study attends to this issue by exploring…

  8. Complex Digital Visual Systems

    ERIC Educational Resources Information Center

    Sweeny, Robert W.

    2013-01-01

    This article identifies possibilities for data visualization as art educational research practice. The author presents an analysis of the relationship between works of art and digital visual culture, employing aspects of network analysis drawn from the work of Barabási, Newman, and Watts (2006) and Castells (1994). Describing complex network…

  9. Visual-Spatial Orienting in Autism.

    ERIC Educational Resources Information Center

    Wainwright, J. Ann; Bryson, Susan E.

    1996-01-01

    Visual-spatial orienting in 10 high-functioning adults with autism was examined. Compared to controls, subjects responded faster to central than to lateral stimuli, and showed a left visual field advantage for stimulus detection only when laterally presented. Abnormalities in attention shifting and coordination of attentional and motor systems are…

  10. Grammatical number agreement processing using the visual half-field paradigm: an event-related brain potential study.

    PubMed

    Kemmer, Laura; Coulson, Seana; Kutas, Marta

    2014-02-01

    Despite indications in the split-brain and lesion literatures that the right hemisphere is capable of some syntactic analysis, few studies have investigated right hemisphere contributions to syntactic processing in people with intact brains. Here we used the visual half-field paradigm in healthy adults to examine each hemisphere's processing of correct and incorrect grammatical number agreement marked either lexically, e.g., antecedent/reflexive pronoun ("The grateful niece asked herself/*themselves…") or morphologically, e.g., subject/verb ("Industrial scientists develop/*develops…"). For reflexives, response times and accuracy of grammaticality decisions suggested similar processing regardless of visual field of presentation. In the subject/verb condition, we observed similar response times and accuracies for central and right visual field (RVF) presentations. For left visual field (LVF) presentation, response times were longer and accuracy rates were reduced relative to RVF presentation. An event-related brain potential (ERP) study using the same materials revealed similar ERP responses to the reflexive pronouns in the two visual fields, but very different ERP effects to the subject/verb violations. For lexically marked violations on reflexives, P600 was elicited by stimuli in both the LVF and RVF; for morphologically marked violations on verbs, P600 was elicited only by RVF stimuli. These data suggest that both hemispheres can process lexically marked pronoun agreement violations, and do so in a similar fashion. Morphologically marked subject/verb agreement errors, however, showed a distinct LH advantage. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Grammatical number agreement processing using the visual half-field paradigm: An event-related brain potential study

    PubMed Central

    Kemmer, Laura; Coulson, Seana; Kutas, Marta

    2014-01-01

    Despite indications in the split-brain and lesion literatures that the right hemisphere is capable of some syntactic analysis, few studies have investigated right hemisphere contributions to syntactic processing in people with intact brains. Here we used the visual half-field paradigm in healthy adults to examine each hemisphere’s processing of correct and incorrect grammatical number agreement marked either lexically, e.g., antecedent/reflexive pronoun (“The grateful niece asked herself/*themselves…”) or morphologically, e.g., subject/verb (“Industrial scientists develop/*develops…”). For reflexives, response times and accuracy of grammaticality decisions suggested similar processing regardless of visual field of presentation. In the subject/verb condition, we observed similar response times and accuracies for central and right visual field (RVF) presentations. For left visual field (LVF) presentation, response times were longer and accuracy rates were reduced relative to RVF presentation. An event-related brain potential (ERP) study using the same materials revealed similar ERP responses to the reflexive pronouns in the two visual fields, but very different ERP effects to the subject/verb violations. For lexically marked violations on reflexives, P600 was elicited by stimuli in both the LVF and RVF; for morphologically marked violations on verbs, P600 was elicited only by RVF stimuli. These data suggest that both hemispheres can process lexically marked pronoun agreement violations, and do so in a similar fashion. Morphologically marked subject/verb agreement errors, however, showed a distinct LH advantage. PMID:24326084

  12. A Novel Locally Linear KNN Method With Applications to Visual Recognition.

    PubMed

    Liu, Qingfeng; Liu, Chengjun

    2017-09-01

    A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. An objective function based on criteria of sparsity, locality, and reconstruction is then optimized to derive a novel representation that approximates the ideal one. This representation is further processed by two classifiers, namely an LLK-based classifier and a locally linear nearest-mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. Additional theoretical analysis is presented, covering the nonnegative constraint, group regularization, and the computational efficiency of the proposed LLK method. New methods, such as a shifted power transformation for improving reliability, a coefficient-truncation method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction, are proposed to further improve visual recognition performance. Extensive experiments evaluate the proposed LLK method for robust visual recognition. In particular, eight representative data sets are used to assess its performance on various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.
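    The flavor of the locally linear KNN idea can be conveyed with a toy sketch: reconstruct a query from its k nearest neighbours by regularised least squares, then assign the class whose neighbours contribute the smallest reconstruction error. This is only a simplified stand-in for the paper's full objective, which additionally enforces sparsity and locality criteria; the ridge regularisation and class-wise error rule here are generic choices.

    ```python
    import numpy as np

    def llk_classify(X_train, y_train, x, k=5, reg=1e-3):
        """Toy locally linear KNN: fit coefficients w so that the rows of the
        neighbour matrix N combine to approximate the query x, then pick the
        class whose neighbours best reconstruct x."""
        d = np.linalg.norm(X_train - x, axis=1)
        idx = np.argsort(d)[:k]
        N = X_train[idx]                       # k x dim neighbour matrix
        # ridge-regularised least squares: (N N^T + reg I) w = N x
        G = N @ N.T + reg * np.eye(k)
        w = np.linalg.solve(G, N @ x)
        classes = np.unique(y_train[idx])
        errs = []
        for c in classes:
            mask = (y_train[idx] == c)
            recon = (w * mask) @ N             # contribution of class-c neighbours
            errs.append(np.linalg.norm(x - recon))
        return classes[int(np.argmin(errs))]
    ```

    Restricting the reconstruction to the k nearest neighbours is what makes the representation "local"; the class-wise residual comparison plays the role of the classifier stage.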

  13. Medically unexplained visual loss in a specialist clinic: a retrospective case-control comparison.

    PubMed

    O'Leary, Éanna D; McNeillis, Benjamin; Aybek, Selma; Riordan-Eva, Paul; David, Anthony S

    2016-02-15

    To compare the clinical and demographic characteristics of adult patients with nonorganic or medically unexplained visual loss (MUVL) with those of patients with other common conditions presenting to a neuro-ophthalmology clinic, we used a case-control design: a retrospective review of medical notes on a consecutive series of 49 patients assessed at the King's College Hospital neuro-ophthalmology clinic with unexplained visual loss, each matched with the next patient assessed, identified from clinic records. Patients presented post-symptom onset with a mean clinical course of 30 months (SD=67 months); standard clinical examination, with ancillary investigations where required, was used to confirm diagnoses. Seventy-two percent (n=36) of MUVL patients were female. Compared with patients with organic visual disorders, MUVL cases presented with significantly higher rates of bilateral (vs. unilateral) visual impairment (41%, n=20), premorbid psychiatric (27%, n=13) and functional (24%, n=12) diagnoses, and psychotropic medication use (22%, n=11). Medically unexplained cases were also significantly more likely to report preceding psychological stress (n=9; 18%). Medically unexplained visual impairment may be regarded as part of the spectrum of medically unexplained disorders seen in the general hospital setting. Research is needed to determine long-term outcomes and effective tailored interventions. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Perceptual uncertainty facilitates creative discovery

    NASA Astrophysics Data System (ADS)

    Tseng, Winger Sei-Wo

    2018-06-01

    In this study, unstructured and ambiguous figures used as visual stimuli were classified as having high, moderate, or low ambiguity and presented to participants. The experiment was designed to explore how the perceptual ambiguity inherent in the presented visual cues affects novice and expert designers' visual discovery during design development. A total of 42 participants took part: half were recruited from non-design departments as novices, and the remainder were professionals from design companies regarded as experts. The participants were tasked with discovering a sub-shape in the presented sketch and using this shape as a cue to design a concept. To this end, two types of sub-shapes were defined: known feature sub-shapes and innovative feature sub-shapes (IFSs). The experimental results strongly indicate that as the ambiguity of the visual stimuli increases, expert designers produce more ideas and IFSs, whereas novice designers produce fewer. The capability of expert designers to exploit visual ambiguity is interesting, and its absence in novice designers suggests that this capability is likely a unique skill gained, at least in part, through professional practice. Our results can be applied in design learning and education to generalize the principles and strategies of visual discovery used by expert designers during concept sketching, in order to train novice designers in addressing design problems.

  15. Audio-visual interactions uniquely contribute to resolution of visual conflict in people possessing absolute pitch.

    PubMed

    Kim, Sujin; Blake, Randolph; Lee, Minyoung; Kim, Chai-Youn

    2017-01-01

    Individuals possessing absolute pitch (AP) are able to identify a given musical tone or to reproduce it without reference to another tone. The present study sought to learn whether this exceptional auditory ability impacts visual perception under stimulus conditions that provoke visual competition in the form of binocular rivalry. Nineteen adult participants with 3-19 years of musical training were divided into two groups according to their performance on a task involving identification of the specific note associated with hearing a given musical pitch. During test trials lasting just over half a minute, participants dichoptically viewed a scrolling musical score presented to one eye and a drifting sinusoidal grating presented to the other eye; throughout the trial they pressed buttons to track the alternations in visual awareness produced by these dissimilar monocular stimuli. On "pitch-congruent" trials, participants heard an auditory melody that was congruent in pitch with the visual score, on "pitch-incongruent" trials they heard a transposed auditory melody that was congruent with the score in melody but not in pitch, and on "melody-incongruent" trials they heard an auditory melody completely different from the visual score. For both groups, the visual musical scores predominated over the gratings when the auditory melody was congruent compared to when it was incongruent. Moreover, the AP participants experienced greater predominance of the visual score when it was accompanied by the pitch-congruent melody compared to the same melody transposed in pitch; for non-AP musicians, pitch-congruent and pitch-incongruent trials yielded equivalent predominance. Analysis of individual durations of dominance revealed differential effects on dominance and suppression durations for AP and non-AP participants. 
These results reveal that AP is accompanied by a robust form of bisensory interaction between tonal frequencies and musical notation that boosts the salience of a visual score.

  16. Audio-visual interactions uniquely contribute to resolution of visual conflict in people possessing absolute pitch

    PubMed Central

    Kim, Sujin; Blake, Randolph; Lee, Minyoung; Kim, Chai-Youn

    2017-01-01

    Individuals possessing absolute pitch (AP) are able to identify a given musical tone or to reproduce it without reference to another tone. The present study sought to learn whether this exceptional auditory ability impacts visual perception under stimulus conditions that provoke visual competition in the form of binocular rivalry. Nineteen adult participants with 3–19 years of musical training were divided into two groups according to their performance on a task involving identification of the specific note associated with hearing a given musical pitch. During test trials lasting just over half a minute, participants dichoptically viewed a scrolling musical score presented to one eye and a drifting sinusoidal grating presented to the other eye; throughout the trial they pressed buttons to track the alternations in visual awareness produced by these dissimilar monocular stimuli. On “pitch-congruent” trials, participants heard an auditory melody that was congruent in pitch with the visual score, on “pitch-incongruent” trials they heard a transposed auditory melody that was congruent with the score in melody but not in pitch, and on “melody-incongruent” trials they heard an auditory melody completely different from the visual score. For both groups, the visual musical scores predominated over the gratings when the auditory melody was congruent compared to when it was incongruent. Moreover, the AP participants experienced greater predominance of the visual score when it was accompanied by the pitch-congruent melody compared to the same melody transposed in pitch; for non-AP musicians, pitch-congruent and pitch-incongruent trials yielded equivalent predominance. Analysis of individual durations of dominance revealed differential effects on dominance and suppression durations for AP and non-AP participants. 
These results reveal that AP is accompanied by a robust form of bisensory interaction between tonal frequencies and musical notation that boosts the salience of a visual score. PMID:28380058

  17. An invisible touch: Body-related multisensory conflicts modulate visual consciousness.

    PubMed

    Salomon, Roy; Galli, Giulia; Łukowska, Marta; Faivre, Nathan; Ruiz, Javier Bello; Blanke, Olaf

    2016-07-29

    The majority of scientific studies on consciousness have focused on vision, exploring the cognitive and neural mechanisms of conscious access to visual stimuli. In parallel, studies on bodily consciousness have revealed that bodily (i.e. tactile, proprioceptive, visceral, vestibular) signals are the basis for the sense of self. However, the role of bodily signals in the formation of visual consciousness is not well understood. Here we investigated how body-related visuo-tactile stimulation modulates conscious access to visual stimuli. We used a robotic platform to apply controlled tactile stimulation to the participants' back while they viewed a dot moving either in synchrony or asynchrony with the touch on their back. Critically, the dot was rendered invisible through continuous flash suppression. By manipulating the visual context, presenting the dot moving on either a body form or a non-bodily object, we show that: (i) conflict induced by synchronous visuo-tactile stimulation in a body context is associated with delayed conscious access compared to asynchronous visuo-tactile stimulation, (ii) this effect occurs only in the context of a visual body form, and (iii) it is not due to detection or response biases. The results indicate that body-related visuo-tactile conflicts impact visual consciousness by facilitating access of non-conflicting visual information to awareness, and that these effects are sensitive to the visual context in which they are presented, highlighting the interplay between bodily signals and visual experience. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Eye movement-invariant representations in the human visual system.

    PubMed

    Nishimoto, Shinji; Huth, Alexander G; Bilenko, Natalia Y; Gallant, Jack L

    2017-01-01

    During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
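The within- versus between-condition similarity ratio described in this abstract can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the function name, array shapes, and the use of plain voxelwise Pearson correlation are assumptions.

```python
import numpy as np

def invariance_ratio(fix_run1, fix_run2, free_run):
    """Per-voxel similarity ratio for flagging eye-movement-invariant areas.

    Each argument is an (n_timepoints, n_voxels) response matrix to the same
    movie: two repeats under fixation, one run under free viewing. Returns the
    ratio of between-condition to within-condition correlation per voxel;
    values near 1 suggest invariance to eye movements (illustrative sketch).
    """
    def voxelwise_corr(a, b):
        # Pearson correlation computed column-by-column (one value per voxel)
        a = a - a.mean(axis=0)
        b = b - b.mean(axis=0)
        return (a * b).sum(axis=0) / (
            np.sqrt((a ** 2).sum(axis=0)) * np.sqrt((b ** 2).sum(axis=0))
        )

    within = voxelwise_corr(fix_run1, fix_run2)   # fixation vs fixation
    between = voxelwise_corr(fix_run1, free_run)  # fixation vs free viewing
    return between / within
```

A voxel whose response is unchanged by free viewing yields a ratio near 1; a voxel driven by retinal position (as in early visual areas) yields a much smaller ratio.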

  19. Sociodemographic, lifestyle, and medical risk factors for visual impairment in an urban Asian population: the Singapore Malay Eye Study.

    PubMed

    Chong, Elaine W; Lamoureux, Ecosse L; Jenkins, Mark A; Aung, Tin; Saw, Seang-Mei; Wong, Tien Y

    2009-12-01

    To describe the associations between sociodemographic, lifestyle, and medical risk factors and visual impairment in a Southeast Asian population. Population-based cross-sectional study of 3280 (78.7% response rate) Malay Singaporeans aged 40 to 80 years. Participants underwent a standardized interview, in which detailed sociodemographic histories were obtained, and clinical assessments for presenting and best-corrected visual acuity. Visual impairment (logMAR > 0.30) was classified as unilateral (1 eye impaired) or bilateral (both eyes impaired). Analyses used multivariate-adjusted multinomial logistic regression. Older age and lack of formal education were associated with increased odds of both unilateral and bilateral visual impairment based on presenting and best-corrected visual acuity. The odds doubled for each decade older, and lower education increased the odds 1.59- to 2.83-fold. Bilateral visual impairment was associated with being unemployed (odds ratio [OR], 1.84; 95% confidence interval [CI], 1.30-2.60), widowed status (OR, 1.51; 95% CI, 1.13-2.01), and higher systolic blood pressure (OR, 1.96; 95% CI, 1.44-2.66). Diabetes was associated with unilateral (OR, 1.47; 95% CI, 1.10-1.95) and bilateral (OR, 1.69; 95% CI, 1.23-2.32) visual impairment using best-corrected visual acuity. Older age, lower education, unemployment, being widowed, diabetes, and hypertension were independently associated with bilateral visual impairment. Public health interventions should be targeted to these at-risk populations.
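The odds ratios and 95% confidence intervals reported above follow the standard transformation of logistic-regression coefficients. A minimal sketch, with made-up coefficient values rather than the study's data:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a log-odds coefficient and its standard error into an
    odds ratio with a 95% confidence interval (OR, lower, upper)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative: a coefficient of 0.61 with SE 0.18 gives
# roughly OR 1.84 (95% CI about 1.29-2.62)
or_, lo, hi = odds_ratio_ci(0.61, 0.18)
```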

  20. Visual awareness suppression by pre-stimulus brain stimulation; a neural effect.

    PubMed

    Jacobs, Christianne; Goebel, Rainer; Sack, Alexander T

    2012-01-02

    Transcranial magnetic stimulation (TMS) has established the functional relevance of early visual cortex (EVC) for visual awareness with great temporal specificity, non-invasively, in conscious human volunteers. Many studies have found a suppressive effect when TMS was applied over EVC 80-100 ms after the onset of the visual stimulus (post-stimulus TMS time window). Yet a few studies found task performance also to suffer when TMS was applied even before visual stimulus presentation (pre-stimulus TMS time window). This pre-stimulus TMS effect, however, remains controversial, and its origin has mainly been ascribed to TMS-induced eye-blinking artifacts. Here, we applied chronometric TMS over EVC during the execution of a visual discrimination task, covering an exhaustive range of visual stimulus-locked TMS time windows ranging from -80 ms pre-stimulus to 300 ms post-stimulus onset. Electrooculographic (EOG) recordings, sham TMS stimulation, and vertex TMS stimulation controlled for different types of non-neural TMS effects. Our findings clearly reveal TMS-induced masking effects for both pre- and post-stimulus time windows, for both objective visual discrimination performance and subjective visibility. Importantly, all effects remained after post hoc removal of eye-blink trials, suggesting a neural origin for the pre-stimulus TMS suppression effect on visual awareness. We speculate, based on our data, that TMS exerts its pre-stimulus effect by generating a neural state that interacts with subsequent visual input. Copyright © 2011 Elsevier Inc. All rights reserved.

  1. Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization

    PubMed Central

    Marai, G. Elisabeta

    2018-01-01

    Although visualization design models exist in the literature in the form of higher-level methodological frameworks, these models do not present a clear methodological prescription for the domain characterization step. This work presents a framework and end-to-end model for requirements engineering in problem-driven visualization application design. The framework and model are based on the activity-centered design paradigm, which is an enhancement of human-centered design. The proposed activity-centered approach focuses on user tasks and activities, and allows an explicit link between the requirements engineering process and the abstraction stage—and its evaluation—of existing, higher-level visualization design models. In a departure from existing visualization design models, the resulting model: assigns value to a visualization based on user activities; ranks user tasks before the user data; partitions requirements into activity-related capabilities and nonfunctional characteristics and constraints; and explicitly incorporates the user workflows into the requirements process. A further merit of this model is its explicit integration of functional specifications, a concept this work adapts from the software engineering literature, into the visualization design nested model. A quantitative evaluation using two sets of interdisciplinary projects supports the merits of the activity-centered model. The result is a practical roadmap to the domain characterization step of visualization design for problem-driven data visualization. Following this domain characterization model can help remove a number of pitfalls that have been identified multiple times in the visualization design literature. PMID:28866550

  2. A Web-Based Platform for Visualizing Spatiotemporal Dynamics of Big Taxi Data

    NASA Astrophysics Data System (ADS)

    Xiong, H.; Chen, L.; Gui, Z.

    2017-09-01

    With more and more vehicles equipped with Global Positioning System (GPS) receivers, access to large-scale taxi trajectory data has become increasingly easy. Taxis are valuable sensors, and the information associated with taxi trajectories can provide unprecedented insight into many aspects of city life. But analysing these data presents many challenges. Visualization of taxi data is an efficient way to represent its distributions and structures and to reveal hidden patterns in the data. However, most existing visualization systems have shortcomings: on the one hand, passenger loading status and speed information cannot be expressed; on the other, a single visualization form limits the information that can be presented. In view of these problems, this paper designs and implements a visualization system in which colour and shape indicate passenger loading status and speed information, and various forms of taxi visualization are integrated. The main work is as follows: 1. Pre-processing the taxi data and storing it in a MongoDB database. 2. Visualizing hotspots of taxi pickup points: using the DBSCAN clustering algorithm, we cluster the extracted passenger pickup locations to produce passenger hotspots. 3. Visualizing the dynamics of taxi trajectories using interactive animation: we use a thinning algorithm to reduce the amount of data and design a preloading strategy to load the data smoothly. Colour and shape are used to visualize the taxi trajectory data.
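Step 2 of the pipeline above (clustering pickup points into hotspots with DBSCAN) might be sketched as follows, assuming scikit-learn. The coordinate layout, the `eps` value in metres, and the rough degree conversion are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def pickup_hotspots(coords, eps_m=200.0, min_samples=10):
    """Cluster taxi pickup coordinates into hotspots.

    coords: (n_points, 2) array of (lat, lon) in degrees.
    eps_m:  neighbourhood radius in metres (converted to an approximate
            degree radius; a crude conversion adequate for a sketch).
    Returns (labels, centroids): DBSCAN labels per point (-1 = noise) and
    a dict mapping each cluster id to its mean (lat, lon) position.
    """
    eps_deg = eps_m / 111_000.0  # ~111 km per degree of latitude
    labels = DBSCAN(eps=eps_deg, min_samples=min_samples).fit_predict(coords)
    centroids = {
        k: coords[labels == k].mean(axis=0)
        for k in set(labels) if k != -1  # skip the noise label
    }
    return labels, centroids
```

Each centroid can then be rendered as a hotspot marker on the web map, sized by cluster membership count.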

  3. A Survey of Educational Uses of Molecular Visualization Freeware†

    PubMed Central

    Craig, Paul A.; Michel, Lea Vacca; Bateman, Robert C.

    2014-01-01

    As biochemists, one of our most captivating teaching tools is the use of molecular visualization. It is a compelling medium that can be used to communicate structural information much more effectively with interactive animations than with static figures. We have conducted a survey to begin a systematic evaluation of the current classroom usage of molecular visualization. Participants (n = 116) were asked to complete 11 multiple choice and 3 open ended questions. To provide more depth to these results, interviews were conducted with 12 of the participants. Many common themes arose in the survey and the interviews: a shared passion for the use of molecular visualization in teaching, broad diversity in software preference, the lack of uniform standards for assessment, a desire for more quality resources, and the challenge of enabling students to incorporate visualization in their learning. The majority of respondents had used molecular visualization for more than 5 years and mentioned 32 different visualization tools used, with Jmol and PyMOL clearly standing out as the most frequently used programs at the present time. The most common uses of molecular visualization in teaching were lecture and lab illustrations, followed by exam questions, in-class or in-laboratory exercises, and student projects, which frequently included presentations. While a minority of instructors used a grading rubric/scoring matrix for assessment of student learning with molecular visualization, many expressed a desire for common use assessment tools. PMID:23649886

  4. Learning STEM through Integrative Visual Representations

    ERIC Educational Resources Information Center

    Virk, Satyugjit Singh

    2013-01-01

    Previous cognitive models of memory have not comprehensively taken into account the internal cognitive load of chunking isolated information and have emphasized the external cognitive load of visual presentation only. Under the Virk Long Term Working Memory Multimedia Model of cognitive load, drawing from the Cowan model, students presented with…

  5. Visual and Auditory Memory: Relationships to Reading Achievement.

    ERIC Educational Resources Information Center

    Bruning, Roger H.; And Others

    1978-01-01

    Good and poor readers' visual and auditory memory were tested. No group differences existed for single mode presentation in recognition frequency or latency. With multimodal presentation, good readers had faster latencies. Dual coding and self-terminating memory search hypotheses were supported. Implications for the reading process and reading…

  6. Biology Modules for the Visually Handicapped.

    ERIC Educational Resources Information Center

    Allan, Douglas M.

    The instructional materials presented and described in this document were prepared as part of a project to develop enrichment materials for visually impaired biology students. A wide range of biology topics are presented, including most subjects covered in a one-semester course for nonmajors. Typewritten handouts, duplicating the content of…

  7. Algorithm Visualization in Teaching Practice

    ERIC Educational Resources Information Center

    Törley, Gábor

    2014-01-01

    This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment will be presented as well, which measured the efficiency and the impact on the abstract thinking of AV. According to the results, students, who learned with AV, performed better in the experiment.

  8. Creating "Visual Legacies": Infographics as a Means of Interpreting and Sharing Research

    ERIC Educational Resources Information Center

    Thompson, Charee M.

    2015-01-01

    Guided by the principle "good data presentation is timeless," (Cressey, 2014, p.305), this unit project challenges students to engage an alternative means of sharing communication research and to realize the potential for their presentations to become "visual legacies" through the creation of infographics. Students encounter…

  9. Facilitative Orthographic Neighborhood Effects: The SERIOL Model Account

    ERIC Educational Resources Information Center

    Whitney, Carol; Lavidor, Michal

    2005-01-01

    A large orthographic neighborhood (N) facilitates lexical decision for central and left visual field/right hemisphere (LVF/RH) presentation, but not for right visual field/left hemisphere (RVF/LH) presentation. Based on the SERIOL model of letter-position encoding, this asymmetric N effect is explained by differential activation patterns at the…

  10. 47 CFR 74.783 - Station identification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... originating local programming as defined by § 74.701(h) operating over 0.001 kw peak visual power (0.002 kw... visual presentation or a clearly understandable aural presentation of the translator station's call... identification procedures given in § 73.1201 when locally originating programming, as defined by § 74.701(h). The...

  11. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    PubMed

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. 
Visual working memory allows for maintaining such visual information in the mind's eye after termination of its retinal input. It is hypothesized that information maintained in visual working memory relies on the same neural populations that process visual input. Accordingly, the content of visual working memory is known to affect our conscious perception of concurrent visual input. Here, we demonstrate for the first time that visual input elicits an enhanced neural response when it matches the content of visual working memory, both in terms of signal strength and information content. Copyright © 2017 the authors 0270-6474/17/376638-10$15.00/0.

  12. From Quantification to Visualization: A Taxonomy of Uncertainty Visualization Approaches

    PubMed Central

    Potter, Kristin; Rosen, Paul; Johnson, Chris R.

    2014-01-01

    Quantifying uncertainty is an increasingly important topic across many domains. The uncertainties present in data come with many diverse representations having originated from a wide variety of disciplines. Communicating these uncertainties is a task often left to visualization without clear connection between the quantification and visualization. In this paper, we first identify frequently occurring types of uncertainty. Second, we connect those uncertainty representations to ones commonly used in visualization. We then look at various approaches to visualizing this uncertainty by partitioning the work based on the dimensionality of the data and the dimensionality of the uncertainty. We also discuss noteworthy exceptions to our taxonomy along with future research directions for the uncertainty visualization community. PMID:25663949

  13. Principle and engineering implementation of 3D visual representation and indexing of medical diagnostic records (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Shi, Liehang; Sun, Jianyong; Yang, Yuanyuan; Ling, Tonghui; Wang, Mingqing; Zhang, Jianguo

    2017-03-01

    Purpose: Because a digital hospital generates a large number of electronic imaging diagnostic records (IDR) year after year, the IDR has become the main component of medical big data, bringing huge value to healthcare services, professionals, and administration. But the large volume of IDR in a hospital also brings new challenges: there may be so many IDRs for each patient that a doctor cannot review them all in a limited appointment time slot. In this presentation, we present an innovative method, called Visual Patient (VP), that uses an anatomical 3D structure object to visually represent and index the historical medical status of each patient, based on the long-term archive of electronic IDR in a hospital, so that a doctor can quickly grasp the patient's medical history and quickly point to and retrieve the IDR of interest within a limited appointment time slot. Method: The engineering implementation of VP was to build a 3D visual representation and indexing system, the VP system (VPS), comprising components for natural language processing (NLP) of Chinese, a Visual Index Creator (VIC), and a 3D Visual Rendering Engine. There were three steps in this implementation: (1) an XML-based electronic anatomic structure of the human body was created for each patient and used to visually index all abstract information of each IDR for that patient; (2) a number of specifically designed IDR parsing processors were developed and used to extract various kinds of abstract information from IDRs retrieved from hospital information systems; (3) a 3D anatomic rendering object was introduced to visually represent and display the content of the visual index for each patient. Results: The VPS was implemented in a simulated clinical environment including PACS/RIS to show VP instances to doctors. 
We set up two evaluation scenarios in a hospital radiology department to assess whether radiologists accept the VPS and how the VP affects radiologists' efficiency and accuracy in reviewing patients' historic medical records. Statistical results showed that more than 70% of participating radiologists would like to use the VPS in their radiological imaging services, and a comparison test of reviewing historic medical records with VPS versus RIS/PACS showed VPS to be the more efficient. New Technologies and Results to be presented: This presentation describes the VP method, in which an anatomical 3D structure object visually represents and indexes the historical medical records (IDR) of each patient so that a doctor can quickly learn the patient's medical history through the VPS. Conclusions: We presented the VP method, outlined the engineering implementation of the VPS realizing its major features and functions, and evaluated the VPS in two scenarios in a hospital radiology department; the evaluation results showed that VPS outperforms RIS-integrated PACS in the efficiency of reviewing patients' historic medical records.

  14. Visualization of medical data based on EHR standards.

    PubMed

    Kopanitsa, G; Hildebrand, C; Stausberg, J; Englmeier, K H

    2013-01-01

    To support efficient interaction between a doctor and an EHR, the data has to be presented in the most convenient way. Medical data presentation methods and models must be flexible in order to cover the needs of users with different backgrounds and requirements. Most visualization methods are doctor-oriented; however, there are indications that involving patients can optimize healthcare. This research aims to specify the state of the art of medical data visualization. The paper analyzes a number of projects and defines requirements for a generic ISO 13606 based data visualization method. To do so, it starts with a systematic search for studies on EHR user interfaces. To identify best practices, visualization methods were evaluated and compared according to the following criteria: limits of application, customizability, and re-usability. The review showed that the analyzed projects can contribute knowledge to the development of a generic visualization method. However, none of them proposed a model that meets all the necessary criteria for a re-usable, standards-based visualization method. The shortcomings were mostly related to the structure of current medical concept specifications. The analysis showed that existing medical data visualization methods use hardcoded GUIs, which give little flexibility, so medical data visualization has to move from hardcoded user interfaces to generic methods. This requires great effort because current standards are not suitable for managing visualization data. The contradiction between a generic method and a flexible, user-friendly data layout has to be overcome.

  15. Measuring Visual Displays’ Effect on Novice Performance in Door Gunnery

    DTIC Science & Technology

    2014-12-01

    training in a mixed reality simulation. Specifically, we examined the effect that different visual displays had on novice soldier performance; qualified...there was a main effect of visual display on performance. However, both visual treatment groups experienced the same degree of presence and simulator... The purpose of this paper is to present the results of our recent experimentation involving a novice population performing aerial door gunnery

  16. The Effect of Orthographic Depth on Letter String Processing: The Case of Visual Attention Span and Rapid Automatized Naming

    ERIC Educational Resources Information Center

    Antzaka, Alexia; Martin, Clara; Caffarra, Sendy; Schlöffel, Sophie; Carreiras, Manuel; Lallier, Marie

    2018-01-01

    The present study investigated whether orthographic depth can increase the bias towards multi-letter processing in two reading-related skills: visual attention span (VAS) and rapid automatized naming (RAN). VAS (i.e., the number of visual elements that can be processed at once in a multi-element array) was tested with a visual 1-back task and RAN…

  17. Interactive visualization of numerical simulation results: A tool for mission planning and data analysis

    NASA Technical Reports Server (NTRS)

    Berchem, J.; Raeder, J.; Walker, R. J.; Ashour-Abdalla, M.

    1995-01-01

    We report on the development of an interactive system for visualizing and analyzing numerical simulation results. This system is based on visualization modules which use the Application Visualization System (AVS) and the NCAR graphics packages. Examples from recent simulations are presented to illustrate how these modules can be used for displaying and manipulating simulation results to facilitate their comparison with phenomenological model results and observations.

  18. 2006 Progress Report on Acoustic and Visual Monitoring for Cetaceans along the Outer Washington Coast

    DTIC Science & Technology

    2007-08-01

    Although Northern Resident killer whales have been extensively studied within Puget Sound and coastal British Columbia, they have been visually sighted... whales . Time series of vocalizations detected in acoustic recordings are presented for killer whales , white-sided dolphins, Risso’s dolphins...Pinniped sightings during visual surveys since August 2004. Seasonal occurrence of humpback and gray whales from visual surveys. Killer whale

  19. Visual Imagery and False Memory for Pictures: A Functional Magnetic Resonance Imaging Study in Healthy Participants

    PubMed Central

    Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas

    2017-01-01

    Background Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. Methods A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Results Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. Conclusions The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures. PMID:28046076

  20. Unimodal and crossmodal working memory representations of visual and kinesthetic movement trajectories.

    PubMed

    Seemüller, Anna; Fiehler, Katja; Rösler, Frank

    2011-01-01

    The present study investigated whether visual and kinesthetic stimuli are stored as multisensory or modality-specific representations in unimodal and crossmodal working memory tasks. To this end, angle-shaped movement trajectories were presented to 16 subjects in delayed matching-to-sample tasks either visually or kinesthetically during encoding and recognition. During the retention interval, a secondary visual or kinesthetic interference task was inserted either immediately or with a delay after encoding. The modality of the interference task interacted significantly with the encoding modality. After visual encoding, memory was more impaired by a visual than by a kinesthetic secondary task, while after kinesthetic encoding the pattern was reversed. The time when the secondary task had to be performed interacted with the encoding modality as well. For visual encoding, memory was more impaired, when the secondary task had to be performed at the beginning of the retention interval. In contrast, memory after kinesthetic encoding was more affected, when the secondary task was introduced later in the retention interval. The findings suggest that working memory traces are maintained in a modality-specific format characterized by distinct consolidation processes that take longer after kinesthetic than after visual encoding. Copyright © 2010 Elsevier B.V. All rights reserved.

  1. StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.

    PubMed

    Li, Chenhui; Baciu, George; Han, Yu

    2018-03-01

    Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
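    The density-aggregation and frame-morphing steps described in the StreamMap abstract can be illustrated with a deliberately simplified sketch. The function names (`kde_grid`, `morph`), the plain Gaussian kernel, and the fixed bandwidth are assumptions for illustration only; the paper's "super kernel density estimation" uses an adaptive kernel, and its density morphing algorithm is more robust than a linear cross-dissolve.

```python
import numpy as np

def kde_grid(points, grid_size=64, bandwidth=0.1):
    """Aggregate 2-D points (in [0, 1]^2) onto a density grid with a
    fixed-bandwidth Gaussian kernel. A generic KDE sketch, not the
    paper's adaptive 'super kernel density estimation'."""
    xs = np.linspace(0.0, 1.0, grid_size)
    gx, gy = np.meshgrid(xs, xs)
    density = np.zeros((grid_size, grid_size))
    for px, py in points:
        # Each point contributes a Gaussian bump; overlapping bumps merge
        # into the smooth density pattern rather than overplotted dots.
        density += np.exp(-((gx - px) ** 2 + (gy - py) ** 2)
                          / (2.0 * bandwidth ** 2))
    return density / (len(points) * 2.0 * np.pi * bandwidth ** 2)

def morph(frame_a, frame_b, t):
    """Linear cross-dissolve between two density frames (0 <= t <= 1):
    a stand-in for the paper's density morphing, which generates smooth
    intermediate frames between a given pair of frames."""
    return (1.0 - t) * frame_a + t * frame_b
```

    Rendering a few `morph` frames at intermediate `t` values between successive `kde_grid` outputs gives the smooth visual flow between density snapshots that the abstract describes.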

  2. Compensatory shifts in visual perception are associated with hallucinations in Lewy body disorders.

    PubMed

    Bowman, Alan Robert; Bruce, Vicki; Colbourn, Christopher J; Collerton, Daniel

    2017-01-01

    Visual hallucinations are a common, distressing, and disabling symptom of Lewy body and other diseases. Current models suggest that interactions in internal cognitive processes generate hallucinations. However, these neglect external factors. Pareidolic illusions are an experimental analogue of hallucinations. They are easily induced in Lewy body disease, have similar content to spontaneous hallucinations, and respond to cholinesterase inhibitors in the same way. We used a primed pareidolia task with hallucinating participants with Lewy body disorders (n = 16), non-hallucinating participants with Lewy body disorders (n = 19), and healthy controls (n = 20). Participants were presented with visual "noise" that sometimes contained degraded visual objects and were required to indicate what they saw. Some perceptions were cued in advance by a visual prime. Results showed that hallucinating participants were impaired in discerning visual signals from noise, with a relaxed criterion threshold for perception compared to both other groups. After the presentation of a visual prime, the criterion was comparable to the other groups. The results suggest that participants with hallucinations compensate for perceptual deficits by relaxing perceptual criteria, at a cost of seeing things that are not there, and that visual cues regularize perception. This latter finding may provide a mechanism for understanding the interaction between environments and hallucinations.

  3. Letters persistence after physical offset: visual word form area and left planum temporale. An fMRI study.

    PubMed

    Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A

    2013-06-01

    Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.

  4. Human lateral geniculate nucleus and visual cortex respond to screen flicker.

    PubMed

    Krolak-Salmon, Pierre; Hénaff, Marie-Anne; Tallon-Baudry, Catherine; Yvert, Blaise; Guénot, Marc; Vighetto, Alain; Mauguière, François; Bertrand, Olivier

    2003-01-01

    The first electrophysiological study of the human lateral geniculate nucleus (LGN), optic radiation, striate, and extrastriate visual areas is presented in the context of presurgical evaluation of three epileptic patients (Patients 1, 2, and 3). Visual-evoked potentials to pattern reversal and face presentation were recorded with depth intracranial electrodes implanted stereotactically. For Patient 1, electrode anatomical registration, structural magnetic resonance imaging, and electrophysiological responses confirmed the location of two contacts in the geniculate body and one in the optic radiation. The first responses peaked at approximately 40 milliseconds in the LGN in Patient 1 and at approximately 60 milliseconds in the V1/V2 complex in Patients 2 and 3. Moreover, steady state visual-evoked potentials evoked by the unperceived but commonly experienced video-screen flicker were recorded in the LGN, optic radiation, and V1/V2 visual areas. This study provides topographic and temporal propagation characteristics of steady state visual-evoked potentials along human visual pathways. We discuss the possible relationship between the oscillating signal recorded in subcortical and cortical areas and the electroencephalogram abnormalities observed in patients suffering from photosensitive epilepsy, particularly video-game epilepsy. The consequences of high temporal frequency visual stimuli delivered by ubiquitous video screens on epilepsy, headaches, and eyestrain must be considered.

  5. The roles of sensory function and cognitive load in age differences in inhibition: Evidence from the Stroop task.

    PubMed

    Peng, Huamao; Gao, Yue; Mao, Xiaofei

    2017-02-01

    To explore the roles of visual function and cognitive load in aging of inhibition, the present study adopted a 2 (visual perceptual stress: noise, nonnoise) × 2 (cognitive load: low, high) × 2 (age: young, old) mixed design. The Stroop task was adopted to measure inhibition. The task presentation was masked with Gaussian noise according to the visual function of each individual in order to match visual perceptual stress between age groups. The results indicated that age differences in the Stroop effect were influenced by visual function and cognitive load. When the cognitive load was low, older adults exhibited a larger Stroop effect than did younger adults in the nonnoise condition, and this age difference disappeared when the visual noise of the 2 age groups was matched. Conversely, in the high cognitive load condition, we observed significant age differences in the Stroop effect in both the nonnoise and noise conditions. The additional cognitive load made the age differences in the Stroop task reappear even when visual perceptual stress was equivalent. These results demonstrate that visual function plays an important role in the aging of inhibition and its role is moderated by cognitive load. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    PubMed

    Burnham, Denis; Dodd, Barbara

    2004-12-01

    The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [(delta)a] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [(delta)a], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [(delta)a] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.

  7. The influence of visual speech information on the intelligibility of English consonants produced by non-native speakers.

    PubMed

    Kawase, Saya; Hannah, Beverly; Wang, Yue

    2014-09-01

    This study examines how visual speech information affects native judgments of the intelligibility of speech sounds produced by non-native (L2) speakers. Native Canadian English perceivers as judges perceived three English phonemic contrasts (/b-v, θ-s, l-ɹ/) produced by native Japanese speakers as well as native Canadian English speakers as controls. These stimuli were presented under audio-visual (AV, with speaker voice and face), audio-only (AO), and visual-only (VO) conditions. The results showed that, across conditions, the overall intelligibility of Japanese productions of the native (Japanese)-like phonemes (/b, s, l/) was significantly higher than the non-Japanese phonemes (/v, θ, ɹ/). In terms of visual effects, the more visually salient non-Japanese phonemes /v, θ/ were perceived as significantly more intelligible when presented in the AV compared to the AO condition, indicating enhanced intelligibility when visual speech information is available. However, the non-Japanese phoneme /ɹ/ was perceived as less intelligible in the AV compared to the AO condition. Further analysis revealed that, unlike the native English productions, the Japanese speakers produced /ɹ/ without visible lip-rounding, indicating that non-native speakers' incorrect articulatory configurations may decrease the degree of intelligibility. These results suggest that visual speech information may either positively or negatively affect L2 speech intelligibility.

  8. Could visual neglect induce amblyopia?

    PubMed

    Bier, J C; Vokaer, M; Fery, P; Garbusinski, J; Van Campenhoudt, G; Blecic, S A; Bartholomé, E J

    2004-12-01

    Oculomotor nerve disease is a common cause of diplopia. When strabismus is present, the absence of diplopia should prompt a search for either uncovering of visual fields or monocular suppression, amblyopia, or blindness. We describe the case of a 41-year-old woman presenting with right oculomotor paresis and left object-centred visual neglect due to a right fronto-parietal haemorrhage expanding to the right peri-mesencephalic cisterna, caused by the rupture of a right middle cerebral artery aneurysm. She never complained of diplopia despite binocular vision and progressive recovery of strabismus, excluding uncovering of visual fields. Since all other causes were excluded in this case, we hypothesise that the absence of diplopia was due to the object-centred visual neglect. Partial internal right oculomotor paresis causes an ocular deviation in abduction, the image being perceived deviated contralaterally to the left. Thus, in our case, neglect of the left image is equivalent to a right monocular functional blindness. However, the bell cancellation test clearly worsened when assessed in left monocular vision, confirming that eye patching can worsen attentional visual neglect. In conclusion, our case argues for the possibility of a functional monocular blindness induced by visual neglect. We think that, in the presence of strabismus, the absence of diplopia should prompt a search for hemispatial visual neglect when supratentorial lesions are suspected.

  9. Neural representation of form-contingent color filling-in in the early visual cortex.

    PubMed

    Hong, Sang Wook; Tong, Frank

    2017-11-01

    Perceptual filling-in exemplifies the constructive nature of visual processing. Color, a prominent surface property of visual objects, can appear to spread to neighboring areas that lack any color. We investigated cortical responses to a color filling-in illusion that effectively dissociates perceived color from the retinal input (van Lier, Vergeer, & Anstis, 2009). Observers adapted to a star-shaped stimulus with alternating red- and cyan-colored points to elicit a complementary afterimage. By presenting an achromatic outline that enclosed one of the two afterimage colors, perceptual filling-in of that color was induced in the unadapted central region. Visual cortical activity was monitored with fMRI, and analyzed using multivariate pattern analysis. Activity patterns in early visual areas (V1-V4) reliably distinguished between the two color-induced filled-in conditions, but only higher extrastriate visual areas showed the predicted correspondence with color perception. Activity patterns allowed for reliable generalization between filled-in colors and physical presentations of perceptually matched colors in areas V3 and V4, but not in earlier visual areas. These findings suggest that the perception of filled-in surface color likely requires more extensive processing by extrastriate visual areas, in order for the neural representation of surface color to become aligned with perceptually matched real colors.

  10. SmartAdP: Visual Analytics of Large-scale Taxi Trajectories for Selecting Billboard Locations.

    PubMed

    Liu, Dongyu; Weng, Di; Li, Yuhong; Bao, Jie; Zheng, Yu; Qu, Huamin; Wu, Yingcai

    2017-01-01

    The problem of formulating solutions immediately and comparing them rapidly for billboard placements has plagued advertising planners for a long time, owing to the lack of efficient tools for in-depth analyses to make informed decisions. In this study, we attempt to employ visual analytics that combines state-of-the-art mining and visualization techniques to tackle this problem using large-scale GPS trajectory data. In particular, we present SmartAdP, an interactive visual analytics system that addresses two major challenges: finding good solutions in a huge solution space and comparing the solutions in a visual and intuitive manner. An interactive framework that integrates a novel visualization-driven data mining model enables advertising planners to effectively and efficiently formulate good candidate solutions. In addition, we propose a set of coupled visualizations: a solution view with metaphor-based glyphs to visualize the correlation between different solutions; a location view to display billboard locations in a compact manner; and a ranking view to present multi-typed rankings of the solutions. This system has been demonstrated using case studies with a real-world dataset and domain-expert interviews. Our approach can be adapted for other location selection problems, such as selecting locations of retail stores or restaurants using trajectory data.

  11. Causes of severe visual impairment and blindness in children attending schools for the visually handicapped in the Czech Republic

    PubMed Central

    Kocur, I; Kuchynka, P; Rodny, S; Barakova, D; Schwartz, E

    2001-01-01

    AIMS—To describe the causes of severe visual impairment and blindness in children in schools for the visually handicapped in the Czech Republic in 1998.
METHODS—Pupils attending all 10 primary schools for the visually handicapped were examined. A modified WHO/PBL eye examination record for children with blindness and low vision was used.
RESULTS—229 children (146 males and 83 females) aged 6-15 years were included in the study: 47 children had severe visual impairment (20.5%) (visual acuity in their better eye less than 6/60), and 159 were blind (69.5%) (visual acuity in their better eye less than 3/60). Anatomically, the most affected parts of the eye were the retina (124, 54.2%), optic nerve (35, 15.3%), whole globe (25, 10.9%), lens (20, 8.7%), and uvea (12, 5.2%). Aetiologically (timing of insult leading to visual loss), the major cause of visual impairment was retinopathy of prematurity (ROP) (96, 41.9%), followed by abnormalities of unknown timing of insult (97, 42.4%), and hereditary disease (21, 9.2%). In 90 children (40%), additional disabilities were present: mental disability (36, 16%), physical handicap (16, 7%), and/or a combination of both (19, 8%). It was estimated that 127 children (56%) suffer from visual impairment caused by potentially preventable and/or treatable conditions (for example, ROP, cataract, glaucoma).
CONCLUSIONS—Establishing a study group for comprehensive evaluation of causes of visual handicap in children in the Czech Republic, as well as for detailed analysis of present practice of screening for ROP was recommended.

 PMID:11567954

  12. The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure

    PubMed Central

    Stacey, Paula C.; Kitterick, Pádraig T.; Morris, Saffron D.; Sumner, Christian J.

    2017-01-01

    Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797

  13. Visualization and Enabling Science at PO.DAAC

    NASA Astrophysics Data System (ADS)

    Tauer, E.; To, C.

    2017-12-01

    Facilitating the identification of appropriate data for scientific inquiry is important for efficient progress, but mechanisms for that identification vary, as does the effectiveness of those mechanisms. Appropriately crafted visualizations provide the means to quickly assess science data and scientific features, but providing the right visualization to the right application can present challenges. Even greater is the challenge of generating and/or re-constituting visualizations on the fly, particularly for large datasets. One avenue to mitigate the challenge is to arrive at an optimized intermediate data format that is tuned for rapid processing without sacrificing the provenance trace back to the original source data. This presentation will discuss the results of a trade study of several current approaches to an intermediate data format and suggest a list of key attributes that facilitate rapid visualization and, in the process, the identification of the right data for a given application.

  14. Choice reaction time to visual motion during prolonged rotary motion in airline pilots

    NASA Technical Reports Server (NTRS)

    Stewart, J. D.; Clark, B.

    1975-01-01

    Thirteen airline pilots were studied to determine the effect of preceding rotary accelerations on the choice reaction time to the horizontal acceleration of a vertical line on a cathode-ray tube. On each trial, one of three levels of rotary and visual acceleration was presented with the rotary stimulus preceding the visual by one of seven periods. The two accelerations were always equal and were presented in the same or opposite directions. The reaction time was found to increase with increases in the time the rotary acceleration preceded the visual acceleration, and to decrease with increased levels of visual and rotary acceleration. The reaction time was found to be shorter when the accelerations were in the same direction than when they were in opposite directions. These results suggest that these findings are a special case of a general effect that the authors have termed 'gyrovisual modulation'.

  15. Prediction, events, and the advantage of Agents: The processing of semantic roles in visual narrative

    PubMed Central

    Cohn, Neil; Paczynski, Martin

    2013-01-01

    Agents consistently appear prior to Patients in sentences, manual signs, and drawings, and Agents are responded to faster when presented in visual depictions of events. We hypothesized that this “Agent advantage” reflects Agents’ role in event structure. We investigated this question by manipulating the depictions of Agents and Patients in preparatory actions in a wordless visual narrative. We found that Agents elicited a greater degree of predictions regarding upcoming events than Patients, that Agents are viewed longer than Patients, independent of serial order, and that visual depictions of actions are processed more quickly following the presentation of an Agent versus a Patient. Taken together these findings support the notion that Agents initiate the building of event representation. We suggest that Agent First orders facilitate the interpretation of events as they unfold and that the saliency of Agents within visual representations of events is driven by anticipation of upcoming events. PMID:23959023

  16. A component-based software environment for visualizing large macromolecular assemblies.

    PubMed

    Sanner, Michel F

    2005-03-01

    The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.

  17. Ergodic theory and visualization. II. Fourier mesochronic plots visualize (quasi)periodic sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levnajić, Zoran; Mezić, Igor (Department of Mechanical Engineering, University of California Santa Barbara, Santa Barbara, California 93106)

    We present an application and analysis of a visualization method for measure-preserving dynamical systems introduced by I. Mezić and A. Banaszuk [Physica D 197, 101 (2004)], based on frequency analysis and Koopman operator theory. This extends our earlier work on visualization of the ergodic partition [Z. Levnajić and I. Mezić, Chaos 20, 033114 (2010)]. Our method employs the concept of the Fourier time average [I. Mezić and A. Banaszuk, Physica D 197, 101 (2004)] and is realized as a computational algorithm for the visualization of periodic and quasi-periodic sets in the phase space. The complement of the periodic phase space partition contains the chaotic zone, and we show how to identify it. The range of the method's applicability is illustrated using the well-known Chirikov standard map, while its potential in illuminating higher-dimensional dynamics is presented by studying the Froeschlé map and the Extended Standard Map.

  18. Ergodic theory and visualization. II. Fourier mesochronic plots visualize (quasi)periodic sets.

    PubMed

    Levnajić, Zoran; Mezić, Igor

    2015-05-01

    We present an application and analysis of a visualization method for measure-preserving dynamical systems introduced by I. Mezić and A. Banaszuk [Physica D 197, 101 (2004)], based on frequency analysis and Koopman operator theory. This extends our earlier work on visualization of the ergodic partition [Z. Levnajić and I. Mezić, Chaos 20, 033114 (2010)]. Our method employs the concept of the Fourier time average [I. Mezić and A. Banaszuk, Physica D 197, 101 (2004)] and is realized as a computational algorithm for the visualization of periodic and quasi-periodic sets in the phase space. The complement of the periodic phase space partition contains the chaotic zone, and we show how to identify it. The range of the method's applicability is illustrated using the well-known Chirikov standard map, while its potential in illuminating higher-dimensional dynamics is presented by studying the Froeschlé map and the Extended Standard Map.
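    The Fourier time average at the heart of this method can be sketched for the Chirikov standard map. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names and parameter choices are hypothetical. For an observable f evaluated along an orbit of a map T, the finite-time Fourier average is (1/N) Σₙ f(Tⁿz) e^(−2πiωn); at ω = 0 it reduces to the ordinary ergodic time average.

```python
import numpy as np

def standard_map_orbit(x0, p0, K, n_steps):
    """Iterate the Chirikov standard map on the torus [0, 2*pi)^2 and
    return the x-coordinate along the orbit (used here as the observable)."""
    xs = np.empty(n_steps)
    x, p = x0, p0
    for n in range(n_steps):
        p = (p + K * np.sin(x)) % (2.0 * np.pi)
        x = (x + p) % (2.0 * np.pi)
        xs[n] = x
    return xs

def fourier_time_average(observable_values, omega):
    """Finite-time Fourier average (1/N) sum_n f(T^n z) exp(-2j*pi*omega*n).
    For omega = 0 this is the ordinary ergodic time average."""
    n = np.arange(len(observable_values))
    return np.mean(observable_values * np.exp(-2j * np.pi * omega * n))
```

    Evaluating these averages over a grid of initial conditions (one orbit per grid point) and color-coding the result produces plots in the spirit of the mesochronic plots the abstract refers to, with (quasi)periodic sets appearing as level sets of the averages.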

  19. Faceted Visualization of Three Dimensional Neuroanatomy By Combining Ontology with Faceted Search

    PubMed Central

    Veeraraghavan, Harini; Miller, James V.

    2013-01-01

    In this work, we present a faceted-search based approach for visualization of anatomy by combining a three dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the relevant pieces of information (obtained by searching the ontology) for a user query. Hence, the user can produce visualizations starting with minimally specified queries. Furthermore, by automatically translating the user queries into the controlled terminology, our approach eliminates the need for the user to use controlled terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the open-source 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, and geometric models specific to patients computed using the SPL brain tumor dataset. PMID:24006207

  20. Faceted visualization of three dimensional neuroanatomy by combining ontology with faceted search.

    PubMed

    Veeraraghavan, Harini; Miller, James V

    2014-04-01

    In this work, we present a faceted-search based approach for visualization of anatomy by combining a three dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the relevant pieces of information (obtained by searching the ontology) for a user query. Hence, the user can produce visualizations starting with minimally specified queries. Furthermore, by automatically translating the user queries into the controlled terminology, our approach eliminates the need for the user to use controlled terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the open-source 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, and geometric models specific to patients computed using the SPL brain tumor dataset.
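    The drill-down interaction this abstract describes can be sketched as facet filtering over tagged records. The miniature "ontology" below and the helper names `facet_values` and `drill_down` are illustrative assumptions, not the authors' FMA-based data model or the 3D Slicer integration.

```python
# Hypothetical miniature ontology: each anatomical structure is a record
# tagged with facet values (the real system derives these from the FMA).
ontology = [
    {"name": "caudate nucleus", "system": "basal ganglia", "side": "left"},
    {"name": "caudate nucleus", "system": "basal ganglia", "side": "right"},
    {"name": "hippocampus", "system": "limbic", "side": "left"},
]

def facet_values(records, facet):
    """Values still available for a facet, given the current result set.
    This is what a drill-down UI would offer as the next refinement."""
    return sorted({r[facet] for r in records if facet in r})

def drill_down(records, **selected):
    """Narrow the result set to records matching every selected facet value,
    so a minimally specified query converges on the structures to render."""
    return [r for r in records
            if all(r.get(f) == v for f, v in selected.items())]
```

    Starting from `drill_down(ontology, system="basal ganglia")`, the interface would then offer `facet_values(..., "side")` as the next refinement, mirroring how a vague user query is progressively translated into controlled terminology.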

Top