Sample records for rapid serial visual presentation (RSVP)

  1. Reading Time Allocation Strategies and Working Memory Using Rapid Serial Visual Presentation

    ERIC Educational Resources Information Center

    Busler, Jessica N.; Lazarte, Alejandro A.

    2017-01-01

    Rapid serial visual presentation (RSVP) is a useful method for controlling the timing of text presentations and studying how readers' characteristics, such as working memory (WM) and reading strategies for time allocation, influence text recall. In the current study, a modified version of RSVP (Moving Window RSVP [MW-RSVP]) was used to induce…

  2. Visual attention distracter insertion for improved EEG rapid serial visual presentation (RSVP) target stimuli detection

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Huber, David J.; Martin, Kevin

    2017-05-01

    This paper describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification through the insertion of visual attention distracters and overall sequence reordering based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing rare-event detection via the P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP sequence based on the desired sequence length and expected number of targets and insert the distracters into the RSVP sequence; then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (Az).
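
    The first step described above (computing how many distracters to insert) and a simple spacing-based reorder can be sketched as follows; the ideal target ratio, the function names, and the even-spacing heuristic are our illustrative assumptions, not the authors' exact method.

```python
import math

def distracters_needed(sequence_length, n_targets, ideal_target_ratio=0.1):
    """How many distracter images to insert so targets make up at most
    `ideal_target_ratio` of the final RSVP sequence.
    The 10% default is a hypothetical placeholder, not the paper's value."""
    # Final length L must satisfy n_targets / L <= ideal_target_ratio.
    min_length = math.ceil(n_targets / ideal_target_ratio)
    return max(0, min_length - sequence_length)

def spread_targets(targets, nontargets):
    """Reorder an RSVP sequence so rare targets are spaced roughly evenly
    among common items (a simple heuristic, not the paper's reordering rule)."""
    seq = list(nontargets)
    if not targets:
        return seq
    step = (len(seq) + len(targets)) // len(targets)
    for i, t in enumerate(targets):
        seq.insert(min(i * step + step // 2, len(seq)), t)
    return seq
```

For example, a 100-item sequence expected to contain 20 targets would need 100 extra distracters to bring the target proportion down to the assumed 10%.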

  3. Reading time allocation strategies and working memory using rapid serial visual presentation.

    PubMed

    Busler, Jessica N; Lazarte, Alejandro A

    2017-09-01

    Rapid serial visual presentation (RSVP) is a useful method for controlling the timing of text presentations and studying how readers' characteristics, such as working memory (WM) and reading strategies for time allocation, influence text recall. In the current study, a modified version of RSVP (Moving Window RSVP [MW-RSVP]) was used to induce longer pauses at the ends of clauses and ends of sentences when reading texts with multiple embedded clauses. We studied whether WM relates to the allocation of time at the ends of clauses or sentences in a self-paced reading task and in 2 MW-RSVP reading conditions (Constant MW-RSVP and Paused MW-RSVP) in which the reading rate was kept constant or pauses were induced. Higher WM span readers were more affected by the restriction of time allocation in the MW-RSVP conditions. In addition, the recall of both higher and lower WM-span readers benefited from the paused MW-RSVP presentation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Effects of Orthographic and Phonological Word Length on Memory for Lists Shown at RSVP and STM Rates

    ERIC Educational Resources Information Center

    Coltheart, Veronika; Mondy, Stephen; Dux, Paul E.; Stephenson, Lisa

    2004-01-01

    This article reports 3 experiments in which effects of orthographic and phonological word length on memory were examined for short lists shown at rapid serial visual presentation (RSVP) and short-term memory (STM) rates. Only visual-orthographic length reduced RSVP serial recall, whereas both orthographic and phonological length lowered recall for…

  5. The Attention Cascade Model and Attentional Blink

    ERIC Educational Resources Information Center

    Shih, Shui-I

    2008-01-01

    An attention cascade model is proposed to account for attentional blinks in rapid serial visual presentation (RSVP) of stimuli. Data were collected using single characters in a single RSVP stream at 10 Hz [Shih, S., & Reeves, A. (2007). "Attentional capture in rapid serial visual presentation." "Spatial Vision", 20(4), 301-315], and single words,…

  6. Gaze-independent BCI-spelling using rapid serial visual presentation (RSVP).

    PubMed

    Acqualagna, Laura; Blankertz, Benjamin

    2013-05-01

    A Brain Computer Interface (BCI) speller is a communication device which can be used by patients suffering from neurodegenerative diseases to select symbols in a computer application. For patients unable to overtly fixate the target symbol, it is crucial to develop a speller independent of gaze shifts. In the present online study, we investigated rapid serial visual presentation (RSVP) as a paradigm for mental typewriting. We investigated the RSVP speller in three conditions, varying the Stimulus Onset Asynchrony (SOA) and the use of color features. A vocabulary of 30 symbols was presented one by one in a pseudorandom sequence at the same location on the display. All twelve participants were able to successfully operate the RSVP speller. The results show a mean online spelling rate of 1.43 symbols/min and a mean symbol selection accuracy of 94.8% in the best condition. We conclude that RSVP is a promising paradigm for BCI spelling and that its performance is competitive with the fastest gaze-independent spellers in the literature. The RSVP speller does not require gaze shifts toward different target locations and can be operated by non-spatial visual attention; therefore it can be considered a valid paradigm for applications with patients who have impaired oculomotor control. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
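
    A standard way to summarize such speller results, though not reported in this abstract, is the Wolpaw information transfer rate. The sketch below applies it to the reported 30-symbol vocabulary, 94.8% accuracy, and 1.43 selections/min; the function name is ours and the resulting ITR is an illustrative calculation, not a figure from the paper.

```python
import math

def wolpaw_bits_per_selection(n_symbols, accuracy):
    """Wolpaw ITR per selection (bits):
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    n, p = n_symbols, accuracy
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# Numbers taken from the abstract: 30 symbols, 94.8% accuracy,
# 1.43 selections per minute.
bits = wolpaw_bits_per_selection(30, 0.948)   # ~4.36 bits/selection
itr_per_min = bits * 1.43                     # ~6.2 bits/min
```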

  7. Detecting and Remembering Simultaneous Pictures in a Rapid Serial Visual Presentation

    ERIC Educational Resources Information Center

    Potter, Mary C.; Fox, Laura F.

    2009-01-01

    Viewers can easily spot a target picture in a rapid serial visual presentation (RSVP), but can they do so if more than 1 picture is presented simultaneously? Up to 4 pictures were presented on each RSVP frame, for 240 to 720 ms/frame. In a detection task, the target was verbally specified before each trial (e.g., "man with violin"); in a…

  8. Using RSVP for analyzing state and previous activities for the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Cooper, Brian K.; Hartman, Frank; Maxwell, Scott; Wright, John; Yen, Jeng

    2004-01-01

    Current developments in immersive environments for mission planning include several tools which make up a system for performing and rehearsing missions. This system, known as the Rover Sequencing and Visualization Program (RSVP), includes tools for planning long range sorties for highly autonomous rovers, tools for planning operations with robotic arms, and advanced tools for visualizing telemetry from remote spacecraft and landers. One of the keys to successful planning of rover activities is knowing what the rover has accomplished to date and understanding the current rover state. RSVP builds on the lessons learned and the heritage of the Mars Pathfinder mission. This paper will discuss the tools and methodologies present in the RSVP suite for examining rover state, reviewing previous activities, visually comparing telemetered results to rehearsed results, and reviewing science and engineering imagery. In addition, we will present how this tool suite was used on the Mars Exploration Rovers (MER) project to explore the surface of Mars.

  9. The cost of space independence in P300-BCI spellers.

    PubMed

    Chennu, Srivas; Alsufyani, Abdulmajeed; Filetti, Marco; Owen, Adrian M; Bowman, Howard

    2013-07-29

    Though non-invasive EEG-based Brain Computer Interfaces (BCI) have been researched extensively over the last two decades, most designs require control of spatial attention and/or gaze on the part of the user. In healthy adults, we compared the offline performance of a space-independent P300-based BCI for spelling words using Rapid Serial Visual Presentation (RSVP) to the well-known space-dependent Matrix P300 speller. EEG classifiability with the RSVP speller was as good as with the Matrix speller. While the Matrix speller's performance relied significantly on early, gaze-dependent Visual Evoked Potentials (VEPs), the RSVP speller depended only on the space-independent P300b. However, true space independence came at a cost: the RSVP speller was markedly less efficient in terms of spelling speed. Nevertheless, with key improvements to the RSVP design, truly space-independent BCIs could approach efficiencies on par with the Matrix speller. With sufficiently high letter spelling rates fused with predictive language modelling, they would be viable for potential applications with patients unable to direct overt visual gaze or covert attentional focus.

  10. Sublexical Processing in Visual Recognition of Chinese Characters: Evidence from Repetition Blindness for Subcharacter Components

    ERIC Educational Resources Information Center

    Yeh, Su-Ling; Li, Jing-Ling

    2004-01-01

    Repetition blindness (RB) refers to the failure to detect the second occurrence of a repeated item in rapid serial visual presentation (RSVP). In two experiments using RSVP, the ability to report two critical characters was found to be impaired when these two characters were identical (Experiment 1) or similar by sharing one repeated component…

  11. Using RSVP for analyzing state and previous activities for the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Cooper, Brian K.; Wright, John; Hartman, Frank; Maxwell, Scott; Yen, Jeng

    2004-01-01

    This paper will discuss the tools and methodologies present in the RSVP suite for examining rover state, reviewing previous activities, visually comparing telemetered results to rehearsed results, and reviewing science and engineering imagery.

  12. Optimized static and video EEG rapid serial visual presentation (RSVP) paradigm based on motion surprise computation

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Huber, David J.; Bhattacharyya, Rajan

    2017-05-01

    In this paper, we describe an algorithm and system for optimizing search and detection performance for "items of interest" (IOI) in large images and videos. The system employs the Rapid Serial Visual Presentation (RSVP) EEG paradigm together with surprise algorithms that incorporate motion processing to determine whether static or video RSVP is used. The system works by first computing a motion surprise map on image sub-regions (chips) of incoming sensor video data and then using those surprise maps to label the chips as either "static" or "moving". This information tells the system whether to use a static or video RSVP presentation and decoding algorithm in order to optimize EEG-based detection of IOI in each chip. Using this method, we are able to demonstrate classification of a series of image regions from video with an area under the ROC curve (Az) of 1, indicating perfect classification, over a range of display frequencies and video speeds.
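
    The chip-labeling step can be sketched with plain frame differencing standing in for the paper's surprise computation; the threshold, function name, and data layout below are illustrative assumptions only.

```python
def label_chips(prev_frame, curr_frame, chip_size, threshold=10.0):
    """Label image sub-regions (chips) as 'static' or 'moving' using mean
    absolute frame difference as a stand-in 'surprise' score (the paper's
    actual motion-surprise computation is more elaborate).
    Frames are 2-D lists of grayscale values; returns {(row, col): label}."""
    h, w = len(curr_frame), len(curr_frame[0])
    labels = {}
    for y in range(0, h, chip_size):
        for x in range(0, w, chip_size):
            diffs = [abs(curr_frame[i][j] - prev_frame[i][j])
                     for i in range(y, min(y + chip_size, h))
                     for j in range(x, min(x + chip_size, w))]
            score = sum(diffs) / len(diffs)
            labels[(y, x)] = "moving" if score > threshold else "static"
    return labels
```

A chip labeled "moving" would then be routed to the video-RSVP presentation and decoder, and a "static" chip to the static-RSVP pipeline.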

  13. Learning to Read Vertical Text in Peripheral Vision

    PubMed Central

    Subramanian, Ahalya; Legge, Gordon E.; Wagoner, Gunther Harrison; Yu, Deyue

    2014-01-01

    Purpose: English-language text is almost always written horizontally. Text can be formatted to run vertically, but this is seldom done. Several studies have found that horizontal text can be read faster than vertical text in the central visual field; no studies have investigated the peripheral visual field. Studies have also concluded that training can improve reading speed in the peripheral visual field for horizontal text. We aimed to establish whether the horizontal-vertical differences are maintained and whether training can improve vertical reading in the peripheral visual field. Methods: Eight normally sighted young adults participated in the first study. Rapid Serial Visual Presentation (RSVP) reading speed was measured for horizontal and vertical text in the central visual field and at 10° eccentricity in the upper or lower (horizontal text) and right or left (vertical text) visual fields. Twenty-one normally sighted young adults, split equally between 2 training groups and 1 control group, participated in the second study. Training consisted of RSVP reading using either vertical text in the left visual field or horizontal text in the inferior visual field. Subjects trained daily over 4 days. Pre- and post-training horizontal and vertical RSVP reading speeds were measured for all groups; for the training groups, these measurements were repeated 1 week and 1 month post training. Results: Prior to training, RSVP reading speeds were faster for horizontal text than for vertical text in both the central and peripheral visual fields. Training vertical reading improved vertical reading speeds by an average factor of 2.8. There was partial transfer of training to the opposite (right) hemifield, and the training effects were retained for up to a month. Conclusions: RSVP training can improve vertical text reading in peripheral vision. These findings may have implications for patients with macular degeneration or hemianopic field loss. PMID:25062130

  14. Update on Rover Sequencing and Visualization Program

    NASA Technical Reports Server (NTRS)

    Cooper, Brian; Hartman, Frank; Maxwell, Scott; Yen, Jeng; Wright, John; Balacuit, Carlos

    2005-01-01

    The Rover Sequencing and Visualization Program (RSVP) has been updated. RSVP was reported in Rover Sequencing and Visualization Program (NPO-30845), NASA Tech Briefs, Vol. 29, No. 4 (April 2005), page 38. To recapitulate: The Rover Sequencing and Visualization Program (RSVP) is the software tool to be used in the Mars Exploration Rover (MER) mission for planning rover operations and generating command sequences for accomplishing those operations. RSVP combines three-dimensional (3D) visualization for immersive exploration of the operations area, stereoscopic image display for high-resolution examination of the downlinked imagery, and a sophisticated command-sequence editing tool for analysis and completion of the sequences. RSVP is linked with actual flight code modules for operations rehearsal to provide feedback on the expected behavior of the rover prior to committing to a particular sequence. Playback tools allow for review of both rehearsed rover behavior and downlinked results of actual rover operations. These can be displayed simultaneously for comparison of rehearsed and actual activities for verification. The primary inputs to RSVP are downlink data products from the Operations Storage Server (OSS) and activity plans generated by the science team. The activity plans are high-level goals for the next day's activities. The downlink data products include imagery, terrain models, and telemetered engineering data on rover activities and state. The Rover Sequence Editor (RoSE) component of RSVP performs activity expansion to command sequences, command creation and editing with setting of command parameters, and viewing and management of rover resources. The HyperDrive component of RSVP performs 2D and 3D visualization of the rover's environment, graphical and animated review of rover predicted and telemetered state, and creation and editing of command sequences related to mobility and Instrument Deployment Device (robotic arm) operations. Additionally, RoSE and HyperDrive together evaluate command sequences for potential violations of flight and safety rules. The products of RSVP include command sequences for uplink that are stored in the Distributed Object Manager (DOM) and predicted rover state histories stored in the OSS for comparison and validation of downlinked telemetry. The majority of components comprising RSVP utilize the MER command and activity dictionaries to automatically customize the system for MER activities.

  15. A novel brain-computer interface based on the rapid serial visual presentation paradigm.

    PubMed

    Acqualagna, Laura; Treder, Matthias Sebastian; Schreuder, Martijn; Blankertz, Benjamin

    2010-01-01

    Most present-day visual brain computer interfaces (BCIs) suffer from the fact that they rely on eye movements, are slow-paced, or feature a small vocabulary. As a potential remedy, we explored a novel BCI paradigm consisting of a central rapid serial visual presentation (RSVP) of the stimuli. It has a large vocabulary and realizes a BCI system based on covert non-spatial selective visual attention. In an offline study, eight participants were presented sequences of rapid bursts of symbols. Two different speeds and two different color conditions were investigated. Robust early visual and P300 components were elicited time-locked to the presentation of the target. Offline classification revealed a mean accuracy of up to 90% for selecting the correct symbol out of 30 possibilities. The results suggest that RSVP-BCI is a promising new paradigm, also for patients with oculomotor impairments.

  16. Robot Sequencing and Visualization Program (RSVP)

    NASA Technical Reports Server (NTRS)

    Cooper, Brian K.; Maxwell, Scott A.; Hartman, Frank R.; Wright, John R.; Yen, Jeng; Toole, Nicholas T.; Gorjian, Zareh; Morrison, Jack C.

    2013-01-01

    The Robot Sequencing and Visualization Program (RSVP) is being used in the Mars Science Laboratory (MSL) mission for downlink data visualization and command sequence generation. RSVP reads and writes downlink data products from the operations data server (ODS) and writes uplink data products to the ODS. The primary users of RSVP are members of the Rover Planner team (part of the Integrated Planning and Execution Team (IPE)), who use it to perform traversability/articulation analyses, take activity plan input from the Science and Mission Planning teams, and create a set of rover sequences to be sent to the rover every sol. The primary inputs to RSVP are downlink data products and activity plans in the ODS database. The primary outputs are command sequences to be placed in the ODS for further processing prior to uplink to each rover. RSVP is composed of two main subsystems. The first, called the Robot Sequence Editor (RoSE), understands the MSL activity and command dictionaries and takes care of converting incoming activity level inputs into command sequences. The Rover Planners use the RoSE component of RSVP to put together command sequences and to view and manage command level resources like time, power, temperature, etc. (via a transparent realtime connection to SEQGEN). The second component of RSVP is called HyperDrive, a set of high-fidelity computer graphics displays of the Martian surface in 3D and in stereo. The Rover Planners can explore the environment around the rover, create commands related to motion of all kinds, and see the simulated result of those commands via its underlying tight coupling with flight navigation, motor, and arm software. This software is the evolutionary replacement for the Rover Sequencing and Visualization software used to create command sequences (and visualize the Martian surface) for the Mars Exploration Rover mission.

  17. Rover Sequencing and Visualization Program

    NASA Technical Reports Server (NTRS)

    Cooper, Brian; Hartman, Frank; Maxwell, Scott; Yen, Jeng; Wright, John; Balacuit, Carlos

    2005-01-01

    The Rover Sequencing and Visualization Program (RSVP) is the software tool for use in the Mars Exploration Rover (MER) mission for planning rover operations and generating command sequences for accomplishing those operations. RSVP combines three-dimensional (3D) visualization for immersive exploration of the operations area, stereoscopic image display for high-resolution examination of the downlinked imagery, and a sophisticated command-sequence editing tool for analysis and completion of the sequences. RSVP is linked with actual flight-code modules for operations rehearsal to provide feedback on the expected behavior of the rover prior to committing to a particular sequence. Playback tools allow for review of both rehearsed rover behavior and downlinked results of actual rover operations. These can be displayed simultaneously for comparison of rehearsed and actual activities for verification. The primary inputs to RSVP are downlink data products from the Operations Storage Server (OSS) and activity plans generated by the science team. The activity plans are high-level goals for the next day's activities. The downlink data products include imagery, terrain models, and telemetered engineering data on rover activities and state. The Rover Sequence Editor (RoSE) component of RSVP performs activity expansion to command sequences, command creation and editing with setting of command parameters, and viewing and management of rover resources. The HyperDrive component of RSVP performs 2D and 3D visualization of the rover's environment, graphical and animated review of rover-predicted and telemetered state, and creation and editing of command sequences related to mobility and Instrument Deployment Device (IDD) operations. Additionally, RoSE and HyperDrive together evaluate command sequences for potential violations of flight and safety rules. The products of RSVP include command sequences for uplink that are stored in the Distributed Object Manager (DOM) and predicted rover state histories stored in the OSS for comparison and validation of downlinked telemetry. The majority of components comprising RSVP utilize the MER command and activity dictionaries to automatically customize the system for MER activities. Thus, RSVP, being highly data driven, may be tailored to other missions with minimal effort. In addition, RSVP uses a distributed, message-passing architecture to allow multitasking and collaborative visualization and sequence development by scattered team members.

  18. Modern Speed-Reading Apps Do Not Foster Reading Comprehension.

    PubMed

    Acklin, Dina; Papesh, Megan H

    2017-01-01

    New computer apps are gaining popularity by suggesting that reading speeds can be drastically increased when the eye movements that normally occur during reading are eliminated. This is done using rapid serial visual presentation (RSVP), where words are presented 1 at a time, thus preventing natural eye movements such as saccades, fixations, and regressions from occurring. Although the companies producing these apps suggest that RSVP reading does not yield comprehension deficits, research investigating the role of eye movements in reading documents the necessity of natural eye movements for accurate comprehension. The current study explored variables that may affect reading comprehension during RSVP reading, including text difficulty (6th grade and 12th grade), text presentation speed (static, 700 wpm, and 1,000 wpm), and working memory capacity (WMC). Consistent with recent work showing a tenuous relationship between comprehension and WMC, participants' WMC did not predict comprehension scores. Instead, comprehension was most affected by reading speed: static text was associated with superior performance relative to either RSVP reading condition. Furthermore, slower RSVP speeds yielded better verbatim comprehension, and faster speeds benefited inferential comprehension.
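
    The presentation speeds above translate directly into per-word display durations; a minimal sketch of that arithmetic (function name is ours):

```python
def ms_per_word(wpm):
    """Per-word display duration in milliseconds for an RSVP stream
    running at `wpm` words per minute."""
    return 60_000 / wpm

# The study's RSVP conditions:
fast = ms_per_word(700)      # ~85.7 ms per word at 700 wpm
faster = ms_per_word(1_000)  # 60.0 ms per word at 1,000 wpm
```

At 60-86 ms per word, each word is gone well before a typical fixation (roughly 200-250 ms in normal reading) would have ended, which is why regressions and re-reading are impossible in these conditions.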

  19. Briefly Cuing Memories Leads to Suppression of Their Neural Representations

    PubMed Central

    Norman, Kenneth A.

    2014-01-01

    Previous studies have linked partial memory activation with impaired subsequent memory retrieval (e.g., Detre et al., 2013) but have not provided an account of this phenomenon at the level of memory representations: How does partial activation change the neural pattern subsequently elicited when the memory is cued? To address this question, we conducted a functional magnetic resonance imaging (fMRI) experiment in which participants studied word-scene paired associates. Later, we weakly reactivated some memories by briefly presenting the cue word during a rapid serial visual presentation (RSVP) task; other memories were more strongly reactivated or not reactivated at all. We tested participants' memory for the paired associates before and after RSVP. Cues that were briefly presented during RSVP triggered reduced levels of scene activity on the post-RSVP memory test, relative to the other conditions. We used pattern similarity analysis to assess how representations changed as a function of the RSVP manipulation. For briefly cued pairs, we found that neural patterns elicited by the same cue on the pre- and post-RSVP tests (preA–postA; preB–postB) were less similar than neural patterns elicited by different cues (preA–postB; preB–postA). These similarity reductions were predicted by neural measures of memory activation during RSVP. Through simulation, we show that our pattern similarity results are consistent with a model in which partial memory activation triggers selective weakening of the strongest parts of the memory. PMID:24899722

  20. Repetition blindness and illusory conjunctions: errors in binding visual types with visual tokens.

    PubMed

    Kanwisher, N

    1991-05-01

    Repetition blindness (Kanwisher, 1986, 1987) has been defined as the failure to detect or recall repetitions of words presented in rapid serial visual presentation (RSVP). The experiments presented here suggest that repetition blindness (RB) is a more general visual phenomenon, and examine its relationship to feature integration theory (Treisman & Gelade, 1980). Experiment 1 shows RB for letters distributed through space, time, or both. Experiment 2 demonstrates RB for repeated colors in RSVP lists. In Experiments 3 and 4, RB was found for repeated letters and colors in spatial arrays. Experiment 5 provides evidence that the mental representations of discrete objects (called "visual tokens" here) that are necessary to detect visual repetitions (Kanwisher, 1987) are the same as the "object files" (Kahneman & Treisman, 1984) in which visual features are conjoined. In Experiment 6, repetition blindness for the second occurrence of a repeated letter resulted only when the first occurrence was attended to. The overall results suggest that a general dissociation between types and tokens in visual information processing can account for both repetition blindness and illusory conjunctions.

  21. Stimulus and Response-Locked P3 Activity in a Dynamic Rapid Serial Visual Presentation (RSVP) Task

    DTIC Science & Technology

    2013-01-01

    Subject terms: P300, RSVP, EEG, target recognition, reaction time, ERP. Recoverable abstract fragment: …applications and as an input signal in many brain computer interactive technologies (BCITs) for both patients and healthy individuals. ERPs are extracted…

  22. Age-Related Changes in Temporal Allocation of Visual Attention: Evidence from the Rapid Serial Visual Presentation (RSVP) Paradigm

    ERIC Educational Resources Information Center

    Berger, Carole; Valdois, Sylviane; Lallier, Marie; Donnadieu, Sophie

    2015-01-01

    The present study explored the temporal allocation of attention in groups of 8-year-old children, 10-year-old children, and adults performing a rapid serial visual presentation task. In a dual-condition task, participants had to detect a briefly presented target (T2) after identifying an initial target (T1) embedded in a random series of…

  23. The attentional blink in typically developing and reading-disabled children.

    PubMed

    de Groot, Barry J A; van den Bos, Kees P; van der Meulen, Bieuwe F; Minnaert, Alexander E M G

    2015-11-01

    This study's research question was whether selective visual attention, and specifically the attentional blink (AB) as operationalized by a dual target rapid serial visual presentation (RSVP) task, can explain individual differences in word reading (WR) and reading-related phonological performances in typically developing children and reading-disabled subgroups. A total of 407 Dutch school children (Grades 3-6) were classified either as typically developing (n = 302) or as belonging to one of three reading-disabled subgroups: reading disabilities only (RD-only, n = 69), both RD and attention problems (RD+ADHD, n = 16), or both RD and a specific language impairment (RD+SLI, n = 20). The RSVP task employed alphanumeric stimuli that were presented in two blocks. Standardized Dutch tests were used to measure WR, phonemic awareness (PA), and alphanumeric rapid naming (RAN). Results indicate that, controlling for PA and RAN performance, general RSVP task performance contributes significant unique variance to the prediction of WR. Specifically, consistent group main effects for the parameter of AB(minimum) were found, whereas there were no AB-specific effects (i.e., AB(width) and AB(amplitude)) except for the RD+SLI group. Finally, there was a group by measurement interaction, indicating that the RD-only and comorbid groups are differentially sensitive for prolonged testing sessions. These results suggest that more general factors involved in RSVP processing may explain the group differences found. Copyright © 2015 Elsevier Inc. All rights reserved.

  24. Attentional and Perceptual Factors Affecting the Attentional Blink for Faces and Objects

    ERIC Educational Resources Information Center

    Landau, Ayelet N.; Bentin, Shlomo

    2008-01-01

    When 2 different visual targets presented among different distracters in a rapid serial visual presentation (RSVP) are separated by 400 ms or less, detection and identification of the 2nd targets are reduced relative to longer time intervals. This phenomenon, termed the "attentional blink" (AB), is attributed to the temporary engagement…

  25. Improved Accuracy Using Recursive Bayesian Estimation Based Language Model Fusion in ERP-Based BCI Typing Systems

    PubMed Central

    Orhan, U.; Erdogmus, D.; Roark, B.; Oken, B.; Purwar, S.; Hild, K. E.; Fowler, A.; Fried-Oken, M.

    2013-01-01

    RSVP Keyboard™ is an electroencephalography (EEG) based brain computer interface (BCI) typing system, designed as an assistive technology for the communication needs of people with locked-in syndrome (LIS). It relies on rapid serial visual presentation (RSVP) and does not require precise eye gaze control. Existing BCI typing systems that use event-related potentials (ERPs) in EEG suffer from low accuracy due to low signal-to-noise ratio. Hence, RSVP Keyboard™ utilizes context-based decision making, incorporating a language model to improve the accuracy of letter decisions. To further improve the contribution of the language model, we propose recursive Bayesian estimation, which relies on non-committing string decisions, and conduct an offline analysis comparing it with the existing naïve Bayesian fusion approach. The results indicate the superiority of recursive Bayesian fusion, and in the next generation of RSVP Keyboard™ we plan to incorporate this new approach. PMID:23366432
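
    The recursive Bayesian fusion idea can be sketched as repeated multiply-and-renormalize updates of a language-model prior with EEG evidence, without committing to a letter between rounds. All probabilities and names below are hypothetical illustrations, not RSVP Keyboard™ internals.

```python
def bayes_update(posterior, eeg_likelihood):
    """One recursive Bayesian step: multiply the current posterior over
    candidate letters by the EEG evidence for each letter, then renormalize.
    Both arguments are dicts mapping letter -> probability/likelihood."""
    fused = {c: posterior[c] * eeg_likelihood.get(c, 1e-9) for c in posterior}
    total = sum(fused.values())
    return {c: v / total for c, v in fused.items()}

# Start from a (hypothetical) language-model prior given the typed context...
prior = {"a": 0.5, "b": 0.3, "c": 0.2}
# ...and fold in two rounds of (hypothetical) EEG evidence, deferring the
# letter decision rather than committing after the first round.
post = bayes_update(prior, {"a": 0.2, "b": 0.7, "c": 0.1})
post = bayes_update(post, {"a": 0.1, "b": 0.8, "c": 0.1})
best = max(post, key=post.get)  # "b"
```

Deferring the commitment is the point: weak early evidence for "b" is allowed to accumulate across rounds instead of being overruled once by the prior.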

  26. Repetition blindness has a perceptual locus: evidence from online processing of targets in RSVP streams

    NASA Technical Reports Server (NTRS)

    Johnston, James C.; Hochhaus, Larry; Ruthruff, Eric

    2002-01-01

    Four experiments tested whether repetition blindness (RB; reduced accuracy reporting repetitions of briefly displayed items) is a perceptual or a memory-recall phenomenon. RB was measured in rapid serial visual presentation (RSVP) streams, with the task altered to reduce memory demands. In Experiment 1 only the number of targets (1 vs. 2) was reported, eliminating the need to remember target identities. Experiment 2 segregated repeated and nonrepeated targets into separate blocks to reduce bias against repeated targets. Experiments 3 and 4 required immediate "online" buttonpress responses to targets as they occurred. All 4 experiments showed very strong RB. Furthermore, the online response data showed clearly that the 2nd of the repeated targets is the one missed. The present results show that in the RSVP paradigm, RB occurs online during initial stimulus encoding and decision making. The authors argue that RB is indeed a perceptual phenomenon.

  7. Temporal allocation of attention toward threat in individuals with posttraumatic stress symptoms.

    PubMed

    Amir, Nader; Taylor, Charles T; Bomyea, Jessica A; Badour, Christal L

    2009-12-01

    Research suggests that individuals with posttraumatic stress disorder (PTSD) selectively attend to threat-relevant information. However, little is known about how initial detection of threat influences the processing of subsequently encountered stimuli. To address this issue, we used a rapid serial visual presentation paradigm (RSVP; Raymond, J. E., Shapiro, K. L., & Arnell, K. M. (1992). Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance, 18, 849-860) to examine temporal allocation of attention to threat-related and neutral stimuli in individuals with PTSD symptoms (PTS), traumatized individuals without PTSD symptoms (TC), and non-anxious controls (NAC). Participants were asked to identify one or two targets in an RSVP stream. Typically, processing of the first target decreases accuracy in identifying the second target as a function of the temporal lag between targets. Results revealed that the PTS group was significantly more accurate in detecting a neutral target when it was presented 300 or 500 ms after threat-related stimuli than when the target followed neutral stimuli. These results suggest that individuals with PTSD may process trauma-relevant information more rapidly and efficiently than benign information.

  8. Steady-state signatures of visual perceptual load, multimodal distractor filtering, and neural competition.

    PubMed

    Parks, Nathan A; Hilimire, Matthew R; Corballis, Paul M

    2011-05-01

    The perceptual load theory of attention posits that attentional selection occurs early in processing when a task is perceptually demanding but occurs late in processing otherwise. We used a frequency-tagged steady-state evoked potential paradigm to investigate the modality specificity of perceptual load-induced distractor filtering and the nature of neural-competitive interactions between task and distractor stimuli. EEG data were recorded while participants monitored a stream of stimuli occurring in rapid serial visual presentation (RSVP) for the appearance of previously assigned targets. Perceptual load was manipulated by assigning targets that were identifiable by color alone (low load) or by the conjunction of color and orientation (high load). The RSVP task was performed alone and in the presence of task-irrelevant visual and auditory distractors. The RSVP stimuli, visual distractors, and auditory distractors were "tagged" by modulating each at a unique frequency (2.5, 8.5, and 40.0 Hz, respectively), which allowed each to be analyzed separately in the frequency domain. We report three important findings regarding the neural mechanisms of perceptual load. First, we replicated previous findings of within-modality distractor filtering and demonstrated a reduction in visual distractor signals with high perceptual load. Second, auditory steady-state distractor signals were unaffected by manipulations of visual perceptual load, consistent with the idea that perceptual load-induced distractor filtering is modality specific. Third, analysis of task-related signals revealed that visual distractors competed with task stimuli for representation and that increased perceptual load appeared to resolve this competition in favor of the task stimulus.

  9. Insights into the Control of Attentional Set in ADHD Using the Attentional Blink Paradigm

    ERIC Educational Resources Information Center

    Mason, Deanna J.; Humphreys, Glyn W.; Kent, Lindsey

    2005-01-01

    Background: Previous work on visual selective attention in Attention Deficit Hyperactivity Disorder (ADHD) has utilised spatial search paradigms. This study compared ADHD to control children on a temporal search task using Rapid Serial Visual Presentation (RSVP). In addition, the effects of irrelevant singleton distractors on search performance…

  10. Brief time course of trait anxiety-related attentional bias to fear-conditioned stimuli: Evidence from the dual-RSVP task.

    PubMed

    Booth, Robert W

    2017-03-01

    Attentional bias to threat is a much-studied feature of anxiety; it is typically assessed using response time (RT) tasks such as the dot probe. Findings regarding the time course of attentional bias have been inconsistent, possibly because RT tasks are sensitive to processes downstream of attention. Attentional bias was assessed using an accuracy-based task in which participants detected a single digit in two simultaneous rapid serial visual presentation (RSVP) streams of letters. Before the target, two coloured shapes were presented simultaneously, one in each RSVP stream; one shape had previously been associated with threat through Pavlovian fear conditioning. Attentional bias was indicated whenever participants identified targets in the threat's RSVP stream more accurately than targets in the other RSVP stream. In 87 unselected undergraduates, trait anxiety only predicted attentional bias when the target was presented immediately following the shapes, i.e. 160 ms later; by 320 ms the bias had disappeared. This suggests attentional bias in anxiety can be extremely brief and transitory. This initial study utilised an analogue sample, and was unable to physiologically verify the efficacy of the conditioning. The next steps will be to verify these results in a sample of diagnosed anxious patients, and to use alternative threat stimuli. The results of studies using response time to assess the time course of attentional bias may partially reflect later processes such as decision making and response preparation. This may limit the efficacy of therapies aiming to retrain attentional biases using response time tasks. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Attentional awakening: gradual modulation of temporal attention in rapid serial visual presentation.

    PubMed

    Ariga, Atsunori; Yokosawa, Kazuhiko

    2008-03-01

    Orienting attention to a point in time facilitates processing of an item within rapidly changing surroundings. We used a one-target RSVP task to examine whether accuracy in reporting a target depended on when the target appeared in the sequence. The results show that observers correctly report a target appearing early in the sequence less frequently than one appearing later. Previous RSVP studies had predicted equally accurate performance for a single target wherever it appeared in the sequence. We named this new phenomenon attentional awakening; it reflects a gradual modulation of temporal attention in a rapid sequence.

  12. Attentional load inhibits vection.

    PubMed

    Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji

    2011-07-01

    In this study, we examined the effects of cognitive task performance on the induction of vection. We hypothesized that, if vection requires attentional resources, performing cognitive tasks requiring attention should inhibit or weaken it. Experiment 1 tested the effects on vection of simultaneously performing a rapid serial visual presentation (RSVP) task. The results revealed that the RSVP task affected the subjective strength of vection. Experiment 2 tested the effects of a multiple-object-tracking (MOT) task on vection. Simultaneous performance of the MOT task decreased the duration and subjective strength of vection. Taken together, these findings suggest that vection induction requires attentional resources.

  13. Distractor-Induced Blindness: A Special Case of Contingent Attentional Capture?

    PubMed Central

    Winther, Gesche N.; Niedeggen, Michael

    2017-01-01

    The detection of a salient visual target embedded in a rapid serial visual presentation (RSVP) can be severely affected if target-like distractors are presented previously. This phenomenon, known as distractor-induced blindness (DIB), shares the prerequisites of contingent attentional capture (Folk, Remington, & Johnston, 1992). In both, target processing is transiently impaired by the presentation of distractors defined by similar features. In the present study, we investigated whether the speeded response to a target in the DIB paradigm can be described in terms of a contingent attentional capture process. In the first experiments, multiple distractors were embedded in the RSVP stream. Distractors either shared the target’s visual features (Experiment 1A) or differed from them (Experiment 1B). Congruent with hypotheses drawn from contingent attentional capture theory, response times (RTs) were exclusively impaired in conditions with target-like distractors. However, RTs were not impaired if only one single target-like distractor was presented (Experiment 2). If attentional capture directly contributed to DIB, the single distractor should be sufficient to impair target processing. In conclusion, DIB is not due to contingent attentional capture, but may rely on a central suppression process triggered by multiple distractors. PMID:28439320

  14. Repetition Blindness for Natural Images of Objects with Viewpoint Changes

    PubMed Central

    Buffat, Stéphane; Plantier, Justin; Roumes, Corinne; Lorenceau, Jean

    2013-01-01

    When stimuli are repeated in a rapid serial visual presentation (RSVP), observers sometimes fail to report the second occurrence of a target. This phenomenon is referred to as “repetition blindness” (RB). We report an RSVP experiment with photographs in which we manipulated object viewpoints between the first and second occurrences of a target (0°, 45°, or 90° changes), and spatial frequency (SF) content. Natural images were spatially filtered to produce low, medium, or high SF stimuli. RB was observed for all filtering conditions. Surprisingly, for full-spectrum (FS) images, RB increased significantly as the viewpoint change reached 90°. For filtered images, a similar pattern of results was found for all conditions except for medium SF stimuli. These findings suggest that object recognition in RSVP is subtended by viewpoint-specific representations for all spatial frequencies except medium ones. PMID:23346069

  15. Character Decomposition and Transposition Processes of Chinese Compound Words in Rapid Serial Visual Presentation.

    PubMed

    Cao, Hong-Wen; Yang, Ke-Yu; Yan, Hong-Mei

    2017-01-01

    Character order information is encoded at the initial stage of Chinese word processing; however, its time course remains underspecified. In this study, we assess the exact time course of the character decomposition and transposition processes of two-character Chinese compound words (canonical, transposed, or reversible words) compared with pseudowords, using dual-target rapid serial visual presentation (RSVP) of stimuli appearing at 30 ms per character with no inter-stimulus interval. The results indicate that Chinese readers can identify words with character transpositions in rapid succession; however, a transposition cost is incurred in identifying transposed words compared to canonical words. In RSVP reading, the character order of words is more likely to be reversed during the period from 30 to 180 ms for canonical and reversible words, but during the period from 30 to 240 ms for transposed words. Taken together, the findings demonstrate that the holistic representation of the base word is activated; however, the order of the two constituent characters is not strictly processed during the very early stage of visual word processing.

  16. Does Emotion Help or Hinder Immediate Memory?: Arousal Versus Priority-Binding Mechanisms

    ERIC Educational Resources Information Center

    Hadley, Christopher B.; MacKay, Donald G.

    2006-01-01

    People recall taboo words better than neutral words in many experimental contexts. The present rapid serial visual presentation (RSVP) experiments demonstrated this taboo-superiority effect for immediate recall of mixed lists containing taboo and neutral words matched for familiarity, length, and category coherence. Under binding theory (MacKay et…

  17. New learning following reactivation in the human brain: targeting emotional memories through rapid serial visual presentation.

    PubMed

    Wirkner, Janine; Löw, Andreas; Hamm, Alfons O; Weymar, Mathias

    2015-03-01

    Once reactivated, previously consolidated memories destabilize and have to be reconsolidated to persist, a process that might be altered non-invasively by interfering learning immediately after reactivation. Here, we investigated the influence of interference on brain correlates of reactivated episodic memories for emotional and neutral scenes using event-related potentials (ERPs). To selectively target emotional memories we applied a new reactivation method: rapid serial visual presentation (RSVP). RSVP leads to enhanced implicit processing (pop-out) of the most salient memories, making them vulnerable to disruption. In line with this, interference after reactivation of previously encoded pictures disrupted recollection, particularly for emotional events. Furthermore, memory impairments were reflected in a reduced centro-parietal ERP old/new difference during retrieval of emotional pictures. These results provide neural evidence that emotional episodic memories in humans can be selectively altered through behavioral interference after reactivation, a finding with further clinical implications for the treatment of anxiety disorders. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Short-term memory for pictures seen once or twice.

    PubMed

    Martini, Paolo; Maljkovic, Vera

    2009-06-01

    The present study is concerned with the effects of exposure time, repetition, spacing, and lag on old/new recognition memory for generic visual scenes presented in an RSVP paradigm. Early memory studies with verbal material found that knowledge of total exposure time at study is sufficient to accurately predict memory performance at test (the Total Time Hypothesis), irrespective of the number of repetitions, spacing, or lag. However, other studies have disputed such simple dependence of memory strength on total study time, demonstrating super-additive facilitatory effects of spacing and lag, as well as inhibitory effects, such as the Ranschburg effect, Repetition Blindness, and the Attentional Blink. In the experimental conditions of the present study we find no evidence of either facilitatory or inhibitory effects: recognition memory for pictures in RSVP supports the Total Time Hypothesis. The data are consistent with an Unequal-Variance Signal Detection Theory model of memory that assumes the average strength and the variance of the familiarity of pictures both increase with total study time. The main conclusion is that the growth of visual scene familiarity with temporal exposure and repetition is a stochastically independent process.
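    The Unequal-Variance Signal Detection model invoked in this abstract can be sketched as follows, under the conventional assumption that new-item familiarity is standard normal while old-item familiarity has a mean and standard deviation that grow with total study time. The specific parameter values below are illustrative, not the authors' fitted estimates:

```python
from math import erf, sqrt

def norm_sf(x, mu, sigma):
    """Survival function (P[X > x]) of a normal distribution."""
    return 0.5 * (1.0 - erf((x - mu) / (sigma * sqrt(2.0))))

def uvsd_rates(criterion, mu_old, sigma_old):
    """Hit and false-alarm rates under unequal-variance SDT:
    new items ~ N(0, 1); old items ~ N(mu_old, sigma_old), where both
    mu_old and sigma_old are assumed to increase with total study time.
    An item is called 'old' when its familiarity exceeds the criterion."""
    hit = norm_sf(criterion, mu_old, sigma_old)
    false_alarm = norm_sf(criterion, 0.0, 1.0)
    return hit, false_alarm
```

    For example, with a criterion of 0.5 and old items distributed as N(1.0, 1.25), the model yields a hit rate of about 0.66 against a false-alarm rate of about 0.31; raising mu_old and sigma_old together (as longer total study time would) traces out the asymmetric ROC curves characteristic of this model.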

  19. Transient Distraction and Attentional Control during a Sustained Selective Attention Task.

    PubMed

    Demeter, Elise; Woldorff, Marty G

    2016-07-01

    Distracting stimuli in the environment can pull our attention away from our goal-directed tasks. fMRI studies have implicated regions in right frontal cortex as being particularly important for processing distractors [e.g., de Fockert, J. W., & Theeuwes, J. Role of frontal cortex in attentional capture by singleton distractors. Brain and Cognition, 80, 367-373, 2012; Demeter, E., Hernandez-Garcia, L., Sarter, M., & Lustig, C. Challenges to attention: A continuous arterial spin labeling (ASL) study of the effects of distraction on sustained attention. Neuroimage, 54, 1518-1529, 2011]. Less is known, however, about the timing and sequence of how right frontal or other brain regions respond selectively to distractors and how distractors impinge upon the cascade of processes related to detecting and processing behaviorally relevant target stimuli. Here we used EEG and ERPs to investigate the neural consequences of a perceptually salient but task-irrelevant distractor on the detection of rare target stimuli embedded in a rapid serial visual presentation (RSVP) stream. We found that distractors that occur during the presentation of a target interfere behaviorally with detection of those targets, reflected by reduced detection rates, and that these missed targets show a reduced amplitude of the long-latency, detection-related P3 component. We also found that distractors elicited a right-lateralized frontal negativity beginning at 100 msec, whose amplitude negatively correlated across participants with their distraction-related behavioral impairment. Finally, we also quantified the instantaneous amplitude of the steady-state visual evoked potentials elicited by the RSVP stream and found that the occurrence of a distractor resulted in a transient amplitude decrement of the steady-state visual evoked potential, presumably reflecting the pull of attention away from the RSVP stream when distracting stimuli occur in the environment.

  20. Lateralization of spatial rather than temporal attention underlies the left hemifield advantage in rapid serial visual presentation.

    PubMed

    Asanowicz, Dariusz; Kruse, Lena; Śmigasiewicz, Kamila; Verleger, Rolf

    2017-11-01

    In bilateral rapid serial visual presentation (RSVP), the second of two targets, T1 and T2, is better identified in the left visual field (LVF) than in the right visual field (RVF). This LVF advantage may reflect hemispheric asymmetry in temporal attention or/and in spatial orienting of attention. Participants performed two tasks: the "standard" bilateral RSVP task (Exp.1) and its unilateral variant (Exp.1 & 2). In the bilateral task, spatial location was uncertain, thus target identification involved stimulus-driven spatial orienting. In the unilateral task, the targets were presented block-wise in the LVF or RVF only, such that no spatial orienting was needed for target identification. Temporal attention was manipulated in both tasks by varying the T1-T2 lag. The results showed that the LVF advantage disappeared when involvement of stimulus-driven spatial orienting was eliminated, whereas the manipulation of temporal attention had no effect on the asymmetry. In conclusion, the results do not support the hypothesis of hemispheric asymmetry in temporal attention, and provide further evidence that the LVF advantage reflects right hemisphere predominance in stimulus-driven orienting of spatial attention. These conclusions fit evidence that temporal attention is implemented by bilateral parietal areas and spatial attention by the right-lateralized ventral frontoparietal network. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Repetition Blindness: Out of Sight or Out of Mind?

    ERIC Educational Resources Information Center

    Morris, Alison L.; Harris, Catherine L.

    2004-01-01

    Does repetition blindness represent a failure of perception or of memory? In Experiment 1, participants viewed rapid serial visual presentation (RSVP) sentences. When critical words (C1 and C2) were orthographically similar, C2 was frequently omitted from serial report; however, repetition priming for C2 on a postsentence lexical decision task was…

  2. Resting EEG in Alpha and Beta Bands Predicts Individual Differences in Attentional Blink Magnitude

    ERIC Educational Resources Information Center

    MacLean, Mary H.; Arnell, Karen M.; Cote, Kimberly A.

    2012-01-01

    Accuracy for a second target (T2) is reduced when it is presented within 500 ms of a first target (T1) in a rapid serial visual presentation (RSVP)--an attentional blink (AB). There are reliable individual differences in the magnitude of the AB. Recent evidence has shown that the attentional approach that an individual typically adopts during a…

  3. Immersive visualization for navigation and control of the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Hartman, Frank R.; Cooper, Brian; Maxwell, Scott; Wright, John; Yen, Jeng

    2004-01-01

    The Rover Sequencing and Visualization Program (RSVP) is a suite of tools for sequencing planetary rovers, which are subject to significant light-time delay and thus are unsuitable for teleoperation.

  4. Temporal Target Integration Underlies Performance at Lag 1 in the Attentional Blink

    ERIC Educational Resources Information Center

    Akyurek, Elkan G.; Eshuis, Sander A. H.; Nieuwenstein, Mark R.; Saija, Jefta D.; Baskent, Deniz; Hommel, Bernhard

    2012-01-01

    When two targets follow each other directly in rapid serial visual presentation (RSVP), they are often identified correctly but reported in the wrong order. These order reversals are commonly explained in terms of the rate at which the two targets are processed, the idea being that the second target can sometimes overtake the first in the race…

  5. Different Attentional Blink Tasks Reflect Distinct Information Processing Limitations: An Individual Differences Approach

    ERIC Educational Resources Information Center

    Kelly, Ashleigh J.; Dux, Paul E.

    2011-01-01

    To study the temporal dynamics and capacity-limits of attentional selection and encoding, researchers often employ the attentional blink (AB) phenomenon: subjects' impaired ability to report the second of two targets in a rapid serial visual presentation (RSVP) stream that appear within 200-500 ms of one another. The AB has now been the subject of…

  6. Failures of Perception in the Low-Prevalence Effect: Evidence From Active and Passive Visual Search

    PubMed Central

    Hout, Michael C.; Walenchok, Stephen C.; Goldinger, Stephen D.; Wolfe, Jeremy M.

    2017-01-01

    In visual search, rare targets are missed disproportionately often. This low-prevalence effect (LPE) is a robust problem with demonstrable societal consequences. What is the source of the LPE? Is it a perceptual bias against rare targets or a later process, such as premature search termination or motor response errors? In 4 experiments, we examined the LPE using standard visual search (with eye tracking) and 2 variants of rapid serial visual presentation (RSVP) in which observers made present/absent decisions after sequences ended. In all experiments, observers looked for 2 target categories (teddy bear and butterfly) simultaneously. To minimize simple motor errors, caused by repetitive absent responses, we held overall target prevalence at 50%, with 1 low-prevalence and 1 high-prevalence target type. Across conditions, observers either searched for targets among other real-world objects or searched for specific bears or butterflies among within-category distractors. We report 4 main results: (a) In standard search, high-prevalence targets were found more quickly and accurately than low-prevalence targets. (b) The LPE persisted in RSVP search, even though observers never terminated search on their own. (c) Eye-tracking analyses showed that high-prevalence targets elicited better attentional guidance and faster perceptual decisions. And (d) even when observers looked directly at low-prevalence targets, they often (12%–34% of trials) failed to detect them. These results strongly argue that low-prevalence misses represent failures of perception when early search termination or motor errors are controlled. PMID:25915073

  7. Out of sight but not out of mind: the neurophysiology of iconic memory in the superior temporal sulcus.

    PubMed

    Keysers, C; Xiao, D-K; Foldiak, P; Perrett, D I

    2005-05-01

    Iconic memory, the short-lasting visual memory of a briefly flashed stimulus, is an important component of most models of visual perception. Here we investigate what physiological mechanisms underlie this capacity by showing rapid serial visual presentation (RSVP) sequences with and without interstimulus gaps to human observers and macaque monkeys. For gaps of up to 93 ms between consecutive images, human observers and neurones in the temporal cortex of macaque monkeys were found to continue processing a stimulus as if it was still present on the screen. The continued firing of neurones in temporal cortex may therefore underlie iconic memory. Based on these findings, a neurophysiological vision of iconic memory is presented.

  8. Distractor Devaluation Effect in the Attentional Blink: Direct Evidence for Distractor Inhibition

    ERIC Educational Resources Information Center

    Kihara, Ken; Yagi, Yoshihiko; Takeda, Yuji; Kawahara, Jun I.

    2011-01-01

    When two targets (T1 and T2) are embedded in rapid serial visual presentation (RSVP), T2 is often missed (attentional blink, AB) if T2 follows T1 by less than 500 ms. Some have proposed that inhibition of a distractor following T1 contributes to the AB, but no direct evidence supports this proposal. This study examined distractor inhibition by…

  9. Conceptual short-term memory (CSTM) supports core claims of Christiansen and Chater.

    PubMed

    Potter, Mary C

    2016-01-01

    Rapid serial visual presentation (RSVP) of words or pictured scenes provides evidence for a large-capacity conceptual short-term memory (CSTM) that momentarily provides rich associated material from long-term memory, permitting rapid chunking (Potter 1993; 2009; 2012). In perception of scenes as well as language comprehension, we make use of knowledge that briefly exceeds the supposed limits of working memory.

  10. Attentional blink in children with attention deficit hyperactivity disorder.

    PubMed

    Amador-Campos, Juan A; Aznar-Casanova, J Antonio; Bezerra, Izabela; Torro-Alves, Nelson; Sánchez, Manuel M

    2015-01-01

    To explore the temporal mechanism of attention in children with attention deficit hyperactivity disorder (ADHD) and controls using a rapid serial visual presentation (RSVP) task in which two letters (T1 and T2) were presented in close temporal proximity among distractors (attentional blink [AB]). Thirty children aged between 9 and 13 years (12 with ADHD combined type and 18 controls) took part in the study. Both groups performed two kinds of RSVP task. In the single task, participants simply had to identify a target letter (T1), whereas in the dual task, they had to identify a target letter (T1) and a probe letter (T2). The ADHD and control groups were equivalent in their single-task performance. However, in the dual-task condition, there were significant between-group differences in the rate of detection of the probe letter (T2) at lag + 1 and lag + 4. The ADHD group exhibited a larger overall AB compared with controls. Our findings provide support for a link between ADHD and attentional blink.

  11. Psychophysics of reading. XVII. Low-vision performance with four types of electronically magnified text.

    PubMed

    Harland, S; Legge, G E; Luebker, A

    1998-03-01

    Most people with low vision need magnification to read. Page navigation is the process of moving a magnifier during reading. Modern electronic technology can provide many alternatives for navigating through text. This study compared reading speeds for four methods of displaying text. The four methods varied in their page-navigation demands. The closed-circuit television (CCTV) and MOUSE methods involved manual navigation. The DRIFT method (horizontally drifting text) involved no manual navigation, but did involve both smooth-pursuit and saccadic eye movements. The rapid serial visual presentation (RSVP) method involved no manual navigation, and relatively few eye movements. There were 7 normal subjects and 12 low-vision subjects (7 with central-field loss, CFL group, and 5 with central fields intact, CFI group). The subjects read 70-word passages at speeds that yielded good comprehension. Taking the CCTV reading speed as a benchmark, neither the normal nor low-vision subjects had significantly different speeds with the MOUSE method. As expected from the reduced navigational demands, normal subjects read faster with the DRIFT method (85% faster) and the RSVP method (169%). The CFI group read significantly faster with DRIFT (43%) and RSVP (38%). The CFL group showed no significant differences in reading speed for the four methods.

  12. Prioritized Identification of Attractive and Romantic Partner Faces in Rapid Serial Visual Presentation.

    PubMed

    Nakamura, Koyo; Arai, Shihoko; Kawabata, Hideaki

    2017-11-01

    People are sensitive to facial attractiveness because it is an important biological and social signal. As such, our perceptual and attentional system seems biased toward attractive faces. We tested whether attractive faces capture attention and enhance memory access in an involuntary manner using a dual-task rapid serial visual presentation (dtRSVP) paradigm, wherein multiple faces were successively presented for 120 ms. In Experiment 1, participants (N = 26) were required to identify two female faces embedded in a stream of animal faces as distractors. The results revealed that identification of the second female target (T2) was better when it was attractive compared to neutral or unattractive. In Experiment 2, we investigated whether perceived attractiveness affects T2 identification (N = 27). To this end, we performed another dtRSVP task involving participants in a romantic partnership with the opposite sex, wherein T2 was their romantic partner's face. The results demonstrated that a romantic partner's face was correctly identified more often than was the face of a friend or unknown person. Furthermore, the greater the intensity of passionate love participants felt for their partner (as measured by the Passionate Love Scale), the more often they correctly identified their partner's face. Our experiments indicate that attractive and romantic partners' faces facilitate the identification of the faces in an involuntary manner.

  13. Interactions between space-based and feature-based attention.

    PubMed

    Leonard, Carly J; Balestreri, Angela; Luck, Steven J

    2015-02-01

    Although early research suggested that attention to nonspatial features (i.e., red) was confined to stimuli appearing at an attended spatial location, more recent research has emphasized the global nature of feature-based attention. For example, a distractor sharing a target feature may capture attention even if it occurs at a task-irrelevant location. Such findings have been used to argue that feature-based attention operates independently of spatial attention. However, feature-based attention may nonetheless interact with spatial attention, yielding larger feature-based effects at attended locations than at unattended locations. The present study tested this possibility. In 2 experiments, participants viewed a rapid serial visual presentation (RSVP) stream and identified a target letter defined by its color. Target-colored distractors were presented at various task-irrelevant locations during the RSVP stream. We found that feature-driven attentional capture effects were largest when the target-colored distractor was closer to the attended location. These results demonstrate that spatial attention modulates the strength of feature-based attention capture, calling into question the prior evidence that feature-based attention operates in a global manner that is independent of spatial attention.

  14. Memory and event-related potentials for rapidly presented emotional pictures.

    PubMed

    Versace, Francesco; Bradley, Margaret M; Lang, Peter J

    2010-08-01

    Dense array event-related potentials (ERPs) and memory performance were assessed following rapid serial visual presentation (RSVP) of emotional and neutral pictures. Despite the extremely brief presentation, emotionally arousing pictures prompted an enhanced negative voltage over occipital sensors, compared to neutral pictures, replicating previous encoding effects. Emotionally arousing pictures were also remembered better in a subsequent recognition test, with higher hit rates and better discrimination performance. ERPs measured during the recognition test showed both an early (250-350 ms) frontally distributed difference between hits and correct rejections, and a later (400-500 ms), more centrally distributed difference, consistent with effects of recognition on ERPs typically found using slower presentation rates. The data are consistent with the hypothesis that features of affective pictures pop out during rapid serial visual presentation, prompting better memory performance.

  15. Method for enhancing single-trial P300 detection by introducing the complexity degree of image information in rapid serial visual presentation tasks

    PubMed Central

    Lin, Zhimin; Zeng, Ying; Tong, Li; Zhang, Hangming; Zhang, Chi

    2017-01-01

    The application of electroencephalogram (EEG) signals generated while humans view images is a new thrust in image retrieval technology. A P300 component is induced in the EEG when subjects see an image of interest under the rapid serial visual presentation (RSVP) experimental paradigm. We detected the single-trial P300 component to determine whether a subject was interested in an image. In practice, the latency and amplitude of the P300 component may vary with experimental parameters such as target probability and stimulus semantics. We therefore proposed a novel method, the Target Recognition using Image Complexity Priori (TRICP) algorithm, in which image information is introduced into the calculation of the interest score in the RSVP paradigm. The method combines information from the image and the EEG to enhance the accuracy of single-trial P300 detection beyond traditional single-trial detection algorithms. We defined an image complexity parameter based on features from different layers of a convolutional neural network (CNN), used TRICP to compute the complexity of each image, quantified the effect of image complexity on the P300 component, and trained specialized classifiers according to image complexity. We compared TRICP with the HDCA algorithm: TRICP achieved significantly higher accuracy (Wilcoxon signed-rank test, p < 0.05). The proposed method can thus be applied to other visual-task-related single-trial event-related potential detection problems. PMID:29283998
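
    The fusion idea behind TRICP can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the TRICP algorithm itself: `eeg_score` stands in for the single-trial P300 classifier output, `image_complexity` for the CNN-derived complexity parameter (both assumed normalized to [0, 1]), and the linear weighting rule is hypothetical.

```python
def fused_interest_score(eeg_score, image_complexity, weight=0.3):
    """Combine a single-trial EEG interest score with an image-complexity
    prior.  Both inputs are assumed to lie in [0, 1]; the weighting is
    illustrative, not the scheme used by TRICP."""
    # Assumption: more complex images yield weaker/later P300s, so a low
    # complexity value lends extra support to the EEG evidence.
    return (1 - weight) * eeg_score + weight * (1 - image_complexity)
```

    With `weight=0.3`, a strong EEG response to a simple image scores higher than the same response to a highly complex image, which is the kind of complexity-dependent adjustment the abstract describes.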

  16. Unique sudden onsets capture attention even when observers are in feature-search mode.

    PubMed

    Spalek, Thomas M; Yanko, Matthew R; Poiese, Paola; Lagroix, Hayley E P

    2012-01-01

    Two sources of attentional capture have been proposed: stimulus-driven (exogenous) and goal-oriented (endogenous). A resolution between these modes of capture has not been straightforward. Even such a clearly exogenous event as the sudden onset of a stimulus can be said to capture attention endogenously if observers operate in singleton-detection mode rather than feature-search mode. In four experiments we show that a unique sudden onset captures attention even when observers are in feature-search mode. The displays were rapid serial visual presentation (RSVP) streams of differently coloured letters with the target letter defined by a specific colour. Distractors were four #s, one in the target colour, surrounding one of the non-target letters. Capture was substantially reduced when the onset of the distractor array was not unique because it was preceded by other sets of four grey # arrays in the RSVP stream. This provides unambiguous evidence that attention can be captured both exogenously and endogenously within a single task.

  17. Mood-specific effects in the allocation of attention across time.

    PubMed

    Rokke, Paul D; Lystad, Chad M

    2015-01-01

    Participants completed single and dual rapid serial visual presentation (RSVP) tasks. Across five experiments, either the mood of the participant or valence of the target was manipulated to create pairings in which the critical target was either mood congruent or mood noncongruent. When the second target (T2) in an RSVP stream was congruent with the participant's mood, performance was enhanced. This was true for happy and sad moods and in single- and dual-task conditions. In contrast, the effects of congruence varied when the focus was on the first target (T1). When in a sad mood and having attended to a sad T1, detection of a neutral T2 was impaired, resulting in a stronger attentional blink (AB). There was no effect of stimulus-mood congruence for T1 when in a happy mood. It was concluded that mood-congruence is important for stimulus detection, but that sadness uniquely influences post-identification processing when attention is first focused on mood-congruent information.

  18. Attention and P300-based BCI performance in people with amyotrophic lateral sclerosis

    PubMed Central

    Riccio, Angela; Simione, Luca; Schettini, Francesca; Pizzimenti, Alessia; Inghilleri, Maurizio; Belardinelli, Marta Olivetti; Mattia, Donatella; Cincotti, Febo

    2013-01-01

    The purpose of this study was to investigate how attentional and memory processes support control of a P300-based brain-computer interface (BCI) in people with amyotrophic lateral sclerosis (ALS). Eight people with ALS performed two behavioral tasks: (i) a rapid serial visual presentation (RSVP) task, screening the temporal filtering capacity and the speed of the update of the attentive filter, and (ii) a change detection task, screening the memory capacity and the spatial filtering capacity. The participants were also asked to perform a P300-based BCI spelling task. By using correlation and regression analyses, we found that only the temporal filtering capacity in the RSVP task was a predictor of both P300-based BCI accuracy and the amplitude of the P300 elicited while performing the BCI task. We concluded that the ability to keep the attentional filter active during the selection of a target influences performance in BCI control. PMID:24282396

  19. The influence of attention on value integration.

    PubMed

    Kunar, Melina A; Watson, Derrick G; Tsetsos, Konstantinos; Chater, Nick

    2017-08-01

    People often have to make decisions based on many pieces of information. Previous work has found that people are able to integrate values presented in a rapid serial visual presentation (RSVP) stream to make informed judgements on the overall stream value (Tsetsos et al. Proceedings of the National Academy of Sciences of the United States of America, 109(24), 9659-9664, 2012). It is also well known that attentional mechanisms influence how people process information. However, it is unknown how attentional factors impact value judgements of integrated material. The current study is the first of its kind to investigate whether value judgements are influenced by attentional processes when assimilating information. Experiments 1-3 examined whether the attentional salience of an item within an RSVP stream affected judgements of overall stream value. The results showed that the presence of an irrelevant high or low value salient item biased people to judge the stream as having a higher or lower overall mean value, respectively. Experiments 4-7 directly tested Tsetsos et al.'s (Proceedings of the National Academy of Sciences of the United States of America, 109(24), 9659-9664, 2012) theory examining whether extreme values in an RSVP stream become over-weighted, thereby capturing attention more than other values in the stream. The results showed that the presence of both a high (Experiments 4, 6 and 7) and a low (Experiment 5) value outlier captures attention leading to less accurate report of subsequent items in the stream. Taken together, the results showed that valuations can be influenced by attentional processes, and can lead to less accurate subjective judgements.
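
    The over-weighting account tested in Experiments 4-7 can be illustrated with a toy computation. The extra weight given to the most extreme item is an arbitrary illustrative parameter, not a value fitted in the paper.

```python
def perceived_stream_value(values, outlier_weight=2.0):
    """Subjective mean of an RSVP value stream under the over-weighting
    account: the item farthest from the true mean receives extra weight,
    biasing the judged average toward it."""
    true_mean = sum(values) / len(values)
    extreme = max(values, key=lambda v: abs(v - true_mean))
    weights = [outlier_weight if v == extreme else 1.0 for v in values]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

    Under this sketch a high-value outlier pulls the judged mean upward and a low-value outlier pulls it downward, consistent with the direction of the reported biases.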

  20. RSVP: An experimental organization.

    PubMed

    Oram, P G

    1967-09-01

    RSVP is a volunteer organization of psychologists formed to facilitate participation in community activities. Its first venture was in working with 10 tutoring programs in the Boston area. Emphasis in the first year was on discovering areas in which psychologists could be helpful. Projects included group discussion leadership, workshops, and recruitment of tutors and professionals. At present the organization is attempting to broaden the number and kind of activities in which it is engaged and is facing a number of questions relative to future programs. The members consider that RSVP has been a successful experiment.

  1. Perceptual processing of natural scenes at rapid rates: Effects of complexity, content, and emotional arousal

    PubMed Central

    Bradley, Margaret M.; Lang, Peter J.

    2013-01-01

    During rapid serial visual presentation (RSVP), the perceptual system is confronted with a rapidly changing array of sensory information demanding resolution. At rapid rates of presentation, previous studies have found an early (e.g., 150–280 ms) negativity over occipital sensors that is enhanced when emotional, as compared with neutral, pictures are viewed, suggesting facilitated perception. In the present study, we explored how picture composition and the presence of people in the image affect perceptual processing of pictures of natural scenes. Using RSVP, pictures that differed in perceptual composition (figure–ground or scenes), content (presence of people or not), and emotional content (emotionally arousing or neutral) were presented in a continuous stream for 330 ms each with no intertrial interval. In both subject and picture analyses, all three variables affected the amplitude of occipital negativity, with the greatest enhancement for figure–ground compositions (as compared with scenes), irrespective of content and emotional arousal, supporting an interpretation that ease of perceptual processing is associated with enhanced occipital negativity. Viewing emotional pictures prompted enhanced negativity only for pictures that depicted people, suggesting that specific features of emotionally arousing images are associated with facilitated perceptual processing, rather than all emotional content. PMID:23780520

  2. Filtering versus parallel processing in RSVP tasks.

    PubMed

    Botella, J; Eriksen, C W

    1992-04-01

    An experiment of McLean, D. E. Broadbent, and M. H. P. Broadbent (1983) using rapid serial visual presentation (RSVP) was replicated. A series of letters in one of 5 colors was presented, and the subject was asked to identify the letter that appeared in a designated color. There were several innovations in our procedure, the most important of which was the use of a response menu. After each trial, the subject was presented with 7 candidate letters from which to choose his/her response. In three experimental conditions, the target, the letter following the target, and all letters other than the target were, respectively, eliminated from the menu. In other conditions, the stimulus list was manipulated by repeating items in the series, repeating the color of successive items, or even eliminating the target color. By means of these manipulations, we were able to determine more precisely the information that subjects had obtained from the presentation of the stimulus series. Although we replicated the results of McLean et al. (1983), the more extensive information that our procedure produced was incompatible with the serial filter model that McLean et al. had used to describe their data. Overall, our results were more compatible with a parallel-processing account. Furthermore, intrusion errors are apparently not only a perceptual phenomenon but a memory problem as well.

  3. Character Decomposition and Transposition Processes in Chinese Compound Words Modulates Attentional Blink.

    PubMed

    Cao, Hongwen; Gao, Min; Yan, Hongmei

    2016-01-01

    The attentional blink (AB) is the phenomenon in which the identification of the second of two targets (T2) is attenuated if it is presented less than 500 ms after the first target (T1). Although the AB is eliminated in canonical word conditions, it remains unclear whether the character order in compound words affects the magnitude of the AB. Morpheme decomposition and transposition of Chinese two-character compound words can provide an effective means to examine AB priming and to assess combinations of the component representations inherent to visual word identification. In the present study, we examined the processing of consecutive targets in a rapid serial visual presentation (RSVP) paradigm using Chinese two-character compound words in which the two characters were transposed to form meaningful words or meaningless combinations (reversible, transposed, or canonical words). We found that when two Chinese characters that form a compound word, regardless of their order, are presented in an RSVP sequence, the likelihood of an AB for the second character is greatly reduced or eliminated compared to when the two characters constitute separate words rather than a compound word. Moreover, the order of the report for the two characters is more likely to be reversed when the normal order of the two characters in a compound word is reversed, especially when the interval between the presentation of the two characters is extremely short. These findings are more consistent with the cognitive strategy hypothesis than the resource-limited hypothesis during character decomposition and transposition of Chinese two-character compound words. These results suggest that compound characters are perceived as a unit, rather than two separate words. The data further suggest that readers could easily understand the text with character transpositions in compound words during Chinese reading.

  4. Development, implementation, and evaluation of a community- and hospital-based respiratory syncytial virus prophylaxis program.

    PubMed

    Bracht, Marianne; Heffer, Michael; O'Brien, Karel

    2005-02-01

    The objective was to implement and deliver a respiratory syncytial virus prophylaxis (RSVP) program in response to the Canadian Pediatric Society recommendations. A novel program was designed to provide inpatient RSVP for at-risk infants cared for in 1 tertiary care newborn intensive care unit (NICU). This inpatient program was part of a coordinated approach to RSVP, designed and implemented by 3 hospitals. An RSVP program logic model was created and used by a multidisciplinary team to evaluate the in-house program and identify areas of program activity requiring improvement. Following the 2000 to 2001 RSV season, a compliance and outcomes audit was performed in the tertiary center: 193 infants were enrolled in the RSVP program and 162 infants had received RSVP in the NICU (mean = 1.64 doses). Telephone follow-up with the parents of discharged infants identified that 159 infants (98%) had successfully completed their full course of RSVP. Using the RSVP program logic model, 5 areas for program improvement were identified, including infant recruitment, patient transfer/discharge processes, product procurement, preparation/distribution/administration of doses, and healthcare team communication. Interdisciplinary collaboration is an important factor in the success of the RSVP program and has supported a consistent model of care for the delivery of RSVP. The program logic model provided a useful structure to systematically review the RSVP program in this organization.

  5. The pieces fit: Constituent structure and global coherence of visual narrative in RSVP.

    PubMed

    Hagmann, Carl Erick; Cohn, Neil

    2016-02-01

    Recent research has shown that comprehension of visual narrative relies on the ordering and timing of sequential images. Here we tested if rapidly presented 6-image long visual sequences could be understood as coherent narratives. Half of the sequences were correctly ordered and half had two of the four internal panels switched. Participants reported whether the sequence was correctly ordered and rated its coherence. Accuracy in detecting a switch increased when panels were presented for 1 s rather than 0.5 s. Doubling the duration of the first panel did not affect results. When two switched panels were further apart, order was discriminated more accurately and coherence ratings were low, revealing that a strong local adjacency effect influenced order and coherence judgments. Switched panels at constituent boundaries or within constituents were most disruptive to order discrimination, indicating that the preservation of constituent structure is critical to visual narrative grammar. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Multi-brain fusion and applications to intelligence analysis

    NASA Astrophysics Data System (ADS)

    Stoica, A.; Matran-Fernandez, A.; Andreou, D.; Poli, R.; Cinel, C.; Iwashita, Y.; Padgett, C.

    2013-05-01

    In a rapid serial visual presentation (RSVP) images are shown at an extremely rapid pace. Yet, the images can still be parsed by the visual system to some extent. In fact, the detection of specific targets in a stream of pictures triggers a characteristic electroencephalography (EEG) response that can be recognized by a brain-computer interface (BCI) and exploited for automatic target detection. Research funded by DARPA's Neurotechnology for Intelligence Analysts program has achieved speed-ups in sifting through satellite images when adopting this approach. This paper extends the use of BCI technology from individual analysts to collaborative BCIs. We show that the integration of information in EEGs collected from multiple operators results in performance improvements compared to the single-operator case.
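
    A simple way to realize such multi-brain integration is to fuse the per-image scores of the individual operators' classifiers. The averaging rule below is one common collaborative-BCI choice and a hedged sketch only; the paper's actual fusion method may differ.

```python
def fuse_operator_scores(scores_per_operator, threshold=0.5):
    """Label each image as target/non-target by averaging single-trial
    classifier scores (assumed in [0, 1]) across operators.

    scores_per_operator: dict mapping operator id -> list of scores,
    one score per image, all lists the same length."""
    n_images = len(next(iter(scores_per_operator.values())))
    n_ops = len(scores_per_operator)
    fused = []
    for i in range(n_images):
        mean_score = sum(s[i] for s in scores_per_operator.values()) / n_ops
        fused.append(mean_score >= threshold)
    return fused
```

    Averaging suppresses uncorrelated single-operator EEG noise, which is the intuition behind the reported improvement over the single-operator case.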

  7. Open and closed cortico-subcortical loops: A neuro-computational account of access to consciousness in the distractor-induced blindness paradigm.

    PubMed

    Ebner, Christian; Schroll, Henning; Winther, Gesche; Niedeggen, Michael; Hamker, Fred H

    2015-09-01

    How the brain decides which information to process 'consciously' has been debated for decades without a simple explanation at hand. While most experiments manipulate the perceptual energy of presented stimuli, the distractor-induced blindness task is a prototypical paradigm to investigate the gating of information into consciousness with no or only minor visual manipulation. In this paradigm, subjects are asked to report intervals of coherent dot motion in a rapid serial visual presentation (RSVP) stream, whenever these are preceded by a particular color stimulus in a different RSVP stream. If distractors (i.e., intervals of coherent dot motion prior to the color stimulus) are shown, subjects' abilities to perceive and report intervals of target dot motion decrease, particularly with short delays between intervals of target color and target motion. We propose a biologically plausible neuro-computational model of how the brain controls access to consciousness to explain how distractor-induced blindness originates from information processing in the cortex and basal ganglia. The model suggests that conscious perception requires reverberation of activity in cortico-subcortical loops and that basal-ganglia pathways can either allow or inhibit this reverberation. In the distractor-induced blindness paradigm, inadequate distractor-induced response tendencies are suppressed by the inhibitory 'hyperdirect' pathway of the basal ganglia. If a target follows such a distractor closely, temporal aftereffects of distractor suppression prevent target identification. The model reproduces experimental data on how delays between target color and target motion affect the probability of target detection. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Single-trial EEG RSVP classification using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Shamwell, Jared; Lee, Hyungtae; Kwon, Heesung; Marathe, Amar R.; Lawhern, Vernon; Nothwang, William

    2016-05-01

    Traditionally, Brain-Computer Interfaces (BCI) have been explored as a means to return function to paralyzed or otherwise debilitated individuals. An emerging use for BCIs is in human-autonomy sensor fusion, where physiological data from healthy subjects are combined with machine-generated information to enhance the capabilities of artificial systems. While human-autonomy fusion of physiological data and computer vision has been shown to improve classification during visual search tasks, to date these approaches have relied on separately trained classification models for each modality. We aim to improve human-autonomy classification performance by developing a single framework that builds codependent models of human electroencephalography (EEG) and image data to generate fused target estimates. As a first step, we developed a novel convolutional neural network (CNN) architecture and applied it to EEG recordings of subjects classifying target and non-target image presentations during a rapid serial visual presentation (RSVP) image triage task. The low signal-to-noise ratio (SNR) of EEG inherently limits the accuracy of single-trial classification, and when combined with the high dimensionality of EEG recordings, extremely large training sets are needed to prevent overfitting and achieve accurate classification from raw EEG data. This paper explores a new deep CNN architecture for generalized multi-class, single-trial EEG classification across subjects. We compare classification performance from the generalized CNN architecture trained across all subjects to the individualized XDAWN, HDCA, and CSP neural classifiers, which are trained and tested on single subjects. Preliminary results show that our CNN meets and slightly exceeds the performance of the other classifiers despite being trained across subjects.
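
    The convolutional pipeline such a network applies to an EEG epoch can be shown in miniature. The toy below is a single-channel, single-filter sketch with hand-picked weights, purely to illustrate the convolve, rectify, pool, and logistic-readout sequence; the paper's architecture (multi-channel, multi-class, learned weights) is far larger.

```python
import math

def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation, the core operation a CNN
    slides along the EEG time axis."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Rectified linear activation, applied elementwise."""
    return [max(0.0, x) for x in xs]

def classify_epoch(epoch, kernel, bias=0.0):
    """Toy 'CNN' forward pass: convolve, rectify, global-average-pool,
    then a logistic output unit.  Weights are illustrative, not learned."""
    feature_map = relu(conv1d(epoch, kernel))
    pooled = sum(feature_map) / max(1, len(feature_map))
    return 1.0 / (1.0 + math.exp(-(pooled + bias)))
```

    An epoch containing a P300-like positive deflection that matches the filter yields a higher score than a flat epoch, which is the discrimination the learned filters perform at scale.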

  9. Illusory conjunctions in the time domain and the resulting time-course of the attentional blink.

    PubMed

    Botella, Juan; Arend, Isabel; Suero, Manuel

    2004-05-01

    Illusory conjunctions in the time domain are errors made in binding stimulus features presented in the same spatial position in Rapid Serial Visual Presentation (RSVP) conditions. Botella, Barriopedro, and Suero (2001) devised a model to explain how the distribution of responses originating from stimuli around the target in the series is generated. They proposed two routes consisting of two sequential attempts to make a response. The second attempt (sophisticated guessing) is only employed if the first one (focal attention) fails to produce an integrated perception. This general outline enables specific predictions to be made and tested concerning the efficiency of focal attention in generating responses in the first attempt. Participants had to report the single letter in an RSVP stream of letters that was presented in a previously specified color (first target, T1) and then report whether an X (second target, T2) was or was not presented. Performance on T2 showed the typical U-shaped function across the T1-T2 lag that reflects the attentional blink phenomenon. However, as was predicted by Botella, Barriopedro, and Suero's model, the time-course of the interference was shorter for trials with a correct response to T1 than for trials with a T1 error. Furthermore, longer time-courses of interference associated with pre-target and post-target errors to the first target were indistinguishable.

  10. Reading Speed Does Not Benefit from Increased Line Spacing in AMD Patients

    PubMed Central

    CHUNG, SUSANA T. L.; JARVIS, SAMUEL H.; WOO, STANLEY Y.; HANSON, KARA; JOSE, RANDALL T.

    2009-01-01

    Purpose: Crowding, the adverse spatial interaction due to the proximity of adjacent targets, has been suggested as an explanation for slow reading in peripheral vision. Previously, we showed that increased line spacing, which presumably reduces crowding between adjacent lines of text, improved reading speed in the normal periphery (Chung, Optom Vis Sci 2004;81:525–35). The purpose of this study was to examine whether or not individuals with age-related macular degeneration (AMD) would benefit from increased line spacing for reading. Methods: Experiment 1: Eight subjects with AMD read aloud 100-word passages rendered at five line spacings: the standard single spacing, 1.5×, 2×, 3×, and 4× the standard spacing. Print sizes were 1× and 2× of the critical print size. Reading time and number of reading errors for each passage were measured to compute the reading speed. Experiment 2: Four subjects with AMD read aloud sequences of six 4-letter words, presented on a computer monitor using the rapid serial visual presentation (RSVP) paradigm. Target words were presented singly, or flanked above and below by two other words that changed in synchrony with the target word, at various vertical word separations. Print size was 2× the critical print size. Reading speed was calculated based on the RSVP exposure duration that yielded 80% of the words read correctly. Results: Averaged across subjects, reading speeds for passages were virtually constant for the range of line spacings tested. For sequences of unrelated words, reading speeds were also virtually constant for the range of vertical word separations tested, except at the smallest (standard) separation at which reading speed was lower. Conclusions: Contrary to the previous finding that reading speed improved in normal peripheral vision, increased line spacing in passages, or increased vertical separation between words in RSVP, did not lead to improved reading speed in people with AMD. PMID:18772718
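
    The Experiment 2 speed measure — 60 divided by the exposure duration (in seconds) at which 80% of words are read correctly — can be sketched as follows. Real studies typically fit a psychometric function to find the threshold duration; the linear interpolation here is a simplifying assumption.

```python
def rsvp_reading_speed_wpm(durations_s, prop_correct, criterion=0.8):
    """Reading speed in words per minute from RSVP accuracy data:
    interpolate the exposure duration at which accuracy reaches
    `criterion`, then convert one word per d seconds into 60 / d wpm."""
    pairs = sorted(zip(durations_s, prop_correct))
    for (d0, p0), (d1, p1) in zip(pairs, pairs[1:]):
        if p0 <= criterion <= p1 and p1 > p0:
            d = d0 + (criterion - p0) * (d1 - d0) / (p1 - p0)
            return 60.0 / d
    raise ValueError("criterion accuracy not bracketed by the data")
```

    For example, 60% correct at 0.1 s and 100% correct at 0.2 s interpolates to an 80% threshold of 0.15 s, i.e. 400 words per minute.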

  11. Non-singleton colors are not attended faster than categories, but they are encoded faster: A combined approach of behavior, modeling and ERPs.

    PubMed

    Callahan-Flintoft, Chloe; Wyble, Brad

    2017-11-01

    The visual system is able to detect targets according to a variety of criteria, such as by categorical (letter vs digit) or featural attributes (color). These criteria are often used interchangeably in rapid serial visual presentation (RSVP) studies but little is known about how rapidly they are processed. The aim of this work was to compare the time course of attentional selection and memory encoding for different types of target criteria. We conducted two experiments where participants reported one or two targets (T1, T2) presented in lateral RSVP streams. Targets were marked either by being a singleton color (a red letter among black letters), by being categorically distinct (digits among letters), or by being a non-singleton color (a target-colored letter among heterogeneously colored letters). Using event-related potentials (ERPs) associated with attention and memory encoding (the N2pc and the P3, respectively), we compared the relative latency of these two processing stages for these three kinds of targets. In addition to these ERP measures, we obtained convergent behavioral measures for attention and memory encoding by presenting two targets in immediate sequence and comparing their relative accuracy and proportion of temporal order errors. Both behavioral and EEG measures revealed that singleton color targets were attended much more quickly than either non-singleton color or categorical targets, and there was very little difference between attention latencies to non-singleton color and categorical targets. There was, however, a difference in the speed of memory encoding for non-singleton color and categorical targets in both behavioral and EEG measures, which shows that encoding latency differences do not always mirror attention latency differences. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. 45 CFR 2553.41 - Who is eligible to be a RSVP volunteer?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 4 2012-10-01 2012-10-01 false Who is eligible to be a RSVP volunteer? 2553.41... AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM Eligibility, Cost Reimbursements and Volunteer Assignments § 2553.41 Who is eligible to be a RSVP volunteer? (a) To be an RSVP volunteer, an...

  13. 45 CFR 2553.41 - Who is eligible to be a RSVP volunteer?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 4 2014-10-01 2014-10-01 false Who is eligible to be a RSVP volunteer? 2553.41... AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM Eligibility, Cost Reimbursements and Volunteer Assignments § 2553.41 Who is eligible to be a RSVP volunteer? (a) To be an RSVP volunteer, an...

  14. 45 CFR 2553.41 - Who is eligible to be a RSVP volunteer?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 4 2011-10-01 2011-10-01 false Who is eligible to be a RSVP volunteer? 2553.41... AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM Eligibility, Cost Reimbursements and Volunteer Assignments § 2553.41 Who is eligible to be a RSVP volunteer? (a) To be an RSVP volunteer, an...

  15. 45 CFR 2553.41 - Who is eligible to be a RSVP volunteer?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 4 2013-10-01 2013-10-01 false Who is eligible to be a RSVP volunteer? 2553.41... AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM Eligibility, Cost Reimbursements and Volunteer Assignments § 2553.41 Who is eligible to be a RSVP volunteer? (a) To be an RSVP volunteer, an...

  16. The processing of images of biological threats in visual short-term memory.

    PubMed

    Quinlan, Philip T; Yue, Yue; Cohen, Dale J

    2017-08-30

    The idea that there is enhanced memory for negative, emotionally charged pictures was examined. Performance was measured under rapid serial visual presentation (RSVP) conditions in which, on every trial, a sequence of six photo-images was presented. Shortly after the offset of the sequence, two alternative images (a target and a foil) were presented and participants attempted to choose which image had occurred in the sequence. Images were of threatening and non-threatening cats and dogs. Either the target depicted an animal expressing an emotion distinct from the other images, or the sequence contained only images of the same emotional valence. Enhanced memory was found for targets that differed in emotional valence from the other sequence images, compared to targets that expressed the same emotional valence. Further controls in stimulus selection were then introduced and the same emotional distinctiveness effect was obtained. In ruling out possible visual and attentional accounts of the data, an informal dual-route topic model is discussed. This places emphasis on how visual short-term memory reveals a sensitivity to the emotional content of the input as it unfolds over time. Items that present with a distinctive emotional content stand out in memory. © 2017 The Author(s).

  17. 45 CFR 2553.51 - What are the terms of service of a RSVP volunteer?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 4 2011-10-01 2011-10-01 false What are the terms of service of a RSVP volunteer... FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM Volunteer Terms of Service § 2553.51 What are the terms of service of a RSVP volunteer? A RSVP volunteer shall serve weekly on a...

  18. 45 CFR 2553.51 - What are the terms of service of a RSVP volunteer?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false What are the terms of service of a RSVP volunteer... FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM Volunteer Terms of Service § 2553.51 What are the terms of service of a RSVP volunteer? A RSVP volunteer shall serve weekly on a...

  19. 45 CFR 2553.51 - What are the terms of service of a RSVP volunteer?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 4 2013-10-01 2013-10-01 false What are the terms of service of a RSVP volunteer... FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM Volunteer Terms of Service § 2553.51 What are the terms of service of a RSVP volunteer? A RSVP volunteer shall serve weekly on a...

  20. 45 CFR 2553.51 - What are the terms of service of a RSVP volunteer?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 4 2012-10-01 2012-10-01 false What are the terms of service of a RSVP volunteer... FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM Volunteer Terms of Service § 2553.51 What are the terms of service of a RSVP volunteer? A RSVP volunteer shall serve weekly on a...

  1. 45 CFR 2553.51 - What are the terms of service of a RSVP volunteer?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 4 2014-10-01 2014-10-01 false What are the terms of service of a RSVP volunteer... FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM Volunteer Terms of Service § 2553.51 What are the terms of service of a RSVP volunteer? A RSVP volunteer shall serve weekly on a...

  2. Brain-computer interface with language model-electroencephalography fusion for locked-in syndrome.

    PubMed

    Oken, Barry S; Orhan, Umut; Roark, Brian; Erdogmus, Deniz; Fowler, Andrew; Mooney, Aimee; Peters, Betts; Miller, Meghan; Fried-Oken, Melanie B

    2014-05-01

    Some noninvasive brain-computer interface (BCI) systems are currently available for locked-in syndrome (LIS), but none have incorporated a statistical language model during text generation. The aim was to begin to address the communication needs of individuals with LIS using a noninvasive BCI that involves rapid serial visual presentation (RSVP) of symbols and a unique classifier with electroencephalography (EEG) and language model fusion. The RSVP Keyboard was developed with several unique features. Individual letters are presented at 2.5 per second. Computer classification of letters as targets or nontargets based on EEG is performed using machine learning that incorporates a language model for letter prediction via Bayesian fusion, enabling targets to be presented only 1 to 4 times. Nine participants with LIS and 9 healthy controls were enrolled. After screening, subjects first calibrated the system and then completed a series of balanced word generation mastery tasks designed with 5 incremental levels of difficulty; difficulty increased by selecting phrases for which the utility of the language model naturally decreased. Six participants with LIS and 9 controls completed the experiment. All LIS participants successfully mastered spelling at level 1, and one subject achieved level 5. Six of 9 control participants achieved level 5. Individuals who have incomplete LIS may benefit from an EEG-based BCI system that relies on EEG classification and a statistical language model. Steps to further improve the system are discussed.
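
    The Bayesian fusion step described in this record (a language-model prior over candidate letters combined with EEG-based evidence, with targets re-presented until confidence is high) can be sketched roughly as follows. This is a minimal illustration, not the RSVP Keyboard's actual implementation; the function names, probabilities, and the 0.9 threshold are assumptions.

```python
def fuse_posteriors(prior, likelihood):
    """Bayes' rule: combine a letter prior with EEG evidence, renormalize."""
    unnormalized = {c: prior[c] * likelihood[c] for c in prior}
    total = sum(unnormalized.values())
    return {c: p / total for c, p in unnormalized.items()}

def select_letter(lm_prior, eeg_rounds, threshold=0.9):
    """Accumulate evidence over repeated presentations of the candidate
    letters until one letter's posterior crosses the threshold."""
    posterior = dict(lm_prior)
    for likelihood in eeg_rounds:
        posterior = fuse_posteriors(posterior, likelihood)
        if max(posterior.values()) >= threshold:
            break
    best = max(posterior, key=posterior.get)
    return best, posterior[best]

# Toy example: the language model favors 'e', and two rounds of simulated
# EEG evidence agree, so 'e' crosses the 0.9 threshold after round two.
prior = {"e": 0.6, "a": 0.3, "o": 0.1}
rounds = [{"e": 0.7, "a": 0.2, "o": 0.1},
          {"e": 0.8, "a": 0.1, "o": 0.1}]
letter, conf = select_letter(prior, rounds)
```

    Because the prior concentrates probability on likely letters, fewer EEG presentations are needed before a confident selection, which is what allows targets to be shown only 1 to 4 times.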

  3. The attentional blink reveals serial working memory encoding: evidence from virtual and human event-related potentials.

    PubMed

    Craston, Patrick; Wyble, Brad; Chennu, Srivas; Bowman, Howard

    2009-03-01

    Observers often miss a second target (T2) if it follows an identified first target item (T1) within half a second in rapid serial visual presentation (RSVP), a finding termed the attentional blink. If two targets are presented in immediate succession, however, accuracy is excellent (Lag 1 sparing). The resource sharing hypothesis proposes a dynamic distribution of resources over a time span of up to 600 msec during the attentional blink. In contrast, the ST(2) model argues that working memory encoding is serial during the attentional blink and that, due to joint consolidation, Lag 1 is the only case where resources are shared. Experiment 1 investigates the P3 ERP component evoked by targets in RSVP. The results suggest that, in this context, P3 amplitude is an indication of bottom-up strength rather than a measure of cognitive resource allocation. Experiment 2, employing a two-target paradigm, suggests that T1 consolidation is not affected by the presentation of T2 during the attentional blink. However, if targets are presented in immediate succession (Lag 1 sparing), they are jointly encoded into working memory. We use the ST(2) model's neural network implementation, which replicates a range of behavioral results related to the attentional blink, to generate "virtual ERPs" by summing across activation traces. We compare virtual to human ERPs and show how the results suggest a serial nature of working memory encoding as implied by the ST(2) model.

  4. Age-Related Changes in the Ability to Switch between Temporal and Spatial Attention.

    PubMed

    Callaghan, Eleanor; Holland, Carol; Kessler, Klaus

    2017-01-01

    Background: Identifying age-related changes in cognition that contribute towards reduced driving performance is important for the development of interventions to improve older adults' driving and prolong the time that they can continue to drive. While driving, one is often required to switch from attending to events changing in time to distributing attention spatially. Although there is extensive research into both spatial attention and temporal attention and how these change with age, the literature on switching between these modalities of attention is limited within any age group. Methods: Age groups (21-30, 40-49, 50-59, 60-69 and 70+ years) were compared on their ability to switch between detecting a target in a rapid serial visual presentation (RSVP) stream and detecting a target in a visual search display. To manipulate the cost of switching, the target in the RSVP stream was either the first item in the stream (Target 1st), towards the end of the stream (Target Mid), or absent from the stream (Distractor Only). Visual search response times and accuracy were recorded. Target 1st trials behaved as no-switch trials, as attending to the remaining stream was not necessary. Target Mid and Distractor Only trials behaved as switch trials, as attending to the stream to the end was required. Results: Visual search response times (RTs) were longer on "Target Mid" and "Distractor Only" trials in comparison to "Target 1st" trials, reflecting switch-costs. Larger switch-costs were found in both the 40-49 and 60-69 years groups in comparison to the 21-30 years group when switching from the Target Mid condition. Discussion: Findings warrant further exploration as to whether there are age-related changes in the ability to switch between these modalities of attention while driving. If older adults display poor performance when switching between temporal and spatial attention while driving, then the development of an intervention to preserve and improve this ability would be beneficial.

  5. [Allocation of attentional resource and monitoring processes under rapid serial visual presentation].

    PubMed

    Nishiura, K

    1998-08-01

    With the use of rapid serial visual presentation (RSVP), the present study investigated the cause of target intrusion errors and the functioning of monitoring processes. Eighteen students participated in Experiment 1, and 24 in Experiment 2. In Experiment 1, different target intrusion errors were found depending on the kind of letters: romaji, hiragana, and kanji. In Experiment 2, stimulus set size and context information were manipulated in an attempt to explore the cause of post-target intrusion errors. Results showed that as stimulus set size increased, the post-target intrusion errors also increased, but contextual information did not affect the errors. Results concerning mean report probability indicated that increased allocation of attentional resource to the response-defining dimension was the cause of the errors. In addition, results concerning confidence ratings showed that monitoring of temporal and contextual information was extremely accurate, but this was not so for stimulus information. These results suggest that attentional resource is different from monitoring resource.

  6. Reading speed benefits from increased vertical word spacing in normal peripheral vision.

    PubMed

    Chung, Susana T L

    2004-07-01

    Crowding, the adverse spatial interaction due to proximity of adjacent targets, has been suggested as an explanation for slow reading in peripheral vision. The purposes of this study were to (1) demonstrate that crowding exists at the word level and (2) examine whether or not reading speed in central and peripheral vision can be enhanced with increased vertical word spacing. Five normal observers read aloud sequences of six unrelated four-letter words presented on a computer monitor, one word at a time, using rapid serial visual presentation (RSVP). Reading speeds were calculated based on the RSVP exposure durations yielding 80% correct. Testing was conducted at the fovea and at 5 degrees and 10 degrees in the inferior visual field. Critical print size (CPS) for each observer and at each eccentricity was first determined by measuring reading speeds for four print sizes using unflanked words. We then presented words at 0.8x or 1.4x CPS, with each target word flanked by two other words, one above and one below the target word. Reading speeds were determined for vertical word spacings (baseline-to-baseline separation between two vertically separated words) ranging from 0.8x to 2x the standard single-spacing, as well as the unflanked condition. At the fovea, reading speed increased with vertical word spacing up to about 1.2x to 1.5x the standard spacing and remained constant and similar to the unflanked reading speed at larger vertical word spacings. In the periphery, reading speed also increased with vertical word spacing, but it remained below the unflanked reading speed for all spacings tested. At 2x the standard spacing, peripheral reading speed was still about 25% lower than the unflanked reading speed for both eccentricities and print sizes. Results from a control experiment showed that the greater reliance of peripheral reading speed on vertical word spacing was also found in the right visual field. Increased vertical word spacing, which presumably decreases the adverse effect of crowding between adjacent lines of text, benefits reading speed. This benefit is greater in peripheral than central vision.
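
    The speed measure described in this record (reading speed derived from the RSVP exposure duration yielding 80% correct) can be sketched as follows. This is a hypothetical illustration: the linear-interpolation scheme and all data values are assumptions, not the study's actual psychometric analysis.

```python
def duration_at_criterion(durations_s, prop_correct, criterion=0.8):
    """Interpolate the per-word exposure duration reaching the criterion.
    `durations_s` must be sorted ascending with accuracy non-decreasing."""
    pts = list(zip(durations_s, prop_correct))
    for (d0, p0), (d1, p1) in zip(pts, pts[1:]):
        if p0 <= criterion <= p1:
            frac = (criterion - p0) / (p1 - p0)
            return d0 + frac * (d1 - d0)
    raise ValueError("criterion not bracketed by the data")

def reading_speed_wpm(duration_s):
    """One word per `duration_s` seconds -> words per minute."""
    return 60.0 / duration_s

# Illustrative accuracy vs. exposure duration (seconds per word).
durations = [0.05, 0.10, 0.20, 0.40]
accuracy  = [0.40, 0.60, 0.90, 0.98]
d80 = duration_at_criterion(durations, accuracy)
speed = reading_speed_wpm(d80)
```

    With these made-up numbers, the 80%-correct point falls between the 0.10 s and 0.20 s conditions, and the shorter that interpolated duration, the higher the reading speed in words per minute.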

  7. Processing new and repeated names: Effects of coreference on repetition priming with speech and fast RSVP

    PubMed Central

    Camblin, C. Christine; Ledoux, Kerry; Boudewyn, Megan; Gordon, Peter C.; Swaab, Tamara Y.

    2006-01-01

    Previous research has shown that the process of establishing coreference with a repeated name can affect basic repetition priming. Specifically, repetition priming on some measures can be eliminated for repeated names that corefer with an entity that is prominent in the discourse model. However, the exact nature and timing of this modulating effect of discourse are not yet understood. Here, we present two ERP studies that further probe the nature of repeated name coreference by using naturally produced connected speech and fast-rate RSVP methods of presentation. With speech we found that repetition priming was eliminated for repeated names that coreferred with a prominent antecedent. In contrast, with fast-rate RSVP, we found a main effect of repetition that did not interact with sentence context. This indicates that the creation of a discourse model during comprehension can affect repetition priming, but the nature of this effect may depend on input speed. PMID:16904078

  8. Blinded by taboo words in L1 but not L2.

    PubMed

    Colbeck, Katie L; Bowers, Jeffrey S

    2012-04-01

    The present study compares the emotionality of English taboo words in native English speakers and native Chinese speakers who learned English as a second language. Neutral and taboo/sexual words were included in a rapid serial visual presentation (RSVP) task as to-be-ignored distracters in short- and long-lag conditions. Compared with neutral distracters, taboo/sexual distracters impaired performance in the short-lag condition only. Of critical note, however, is that the performance of Chinese speakers was less impaired by taboo/sexual distracters. This supports the view that a first language is more emotional than a second language, even when words are processed quickly and automatically. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  9. 45 CFR 2553.43 - What cost reimbursements are provided to RSVP volunteers?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: (1) Accident insurance. Accident insurance covers RSVP volunteers for personal injury during travel...) Excess automobile liability insurance. (i) For RSVP volunteers who drive in connection with their service... volunteers carry on their own automobiles; or (B) The limits of applicable state financial responsibility law...

  10. 45 CFR 2553.43 - What cost reimbursements are provided to RSVP volunteers?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...: (1) Accident insurance. Accident insurance covers RSVP volunteers for personal injury during travel...) Excess automobile liability insurance. (i) For RSVP volunteers who drive in connection with their service... volunteers carry on their own automobiles; or (B) The limits of applicable state financial responsibility law...

  11. 45 CFR 2553.43 - What cost reimbursements are provided to RSVP volunteers?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: (1) Accident insurance. Accident insurance covers RSVP volunteers for personal injury during travel...) Excess automobile liability insurance. (i) For RSVP volunteers who drive in connection with their service... volunteers carry on their own automobiles; or (B) The limits of applicable state financial responsibility law...

  12. 45 CFR 2553.43 - What cost reimbursements are provided to RSVP volunteers?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: (1) Accident insurance. Accident insurance covers RSVP volunteers for personal injury during travel...) Excess automobile liability insurance. (i) For RSVP volunteers who drive in connection with their service... volunteers carry on their own automobiles; or (B) The limits of applicable state financial responsibility law...

  13. 45 CFR 2553.43 - What cost reimbursements are provided to RSVP volunteers?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: (1) Accident insurance. Accident insurance covers RSVP volunteers for personal injury during travel...) Excess automobile liability insurance. (i) For RSVP volunteers who drive in connection with their service... volunteers carry on their own automobiles; or (B) The limits of applicable state financial responsibility law...

  14. R.S.V.P. Teacher-Coordinator Handbook.

    ERIC Educational Resources Information Center

    1983

    Alaska Teacher-Coordinators in the Rural Student Vocational Program (RSVP) should be familiar with activities necessary to ensure proper selection of students and proper and meaningful work placement. In RSVP, students from outlying high schools travel to Anchorage, Fairbanks, or Juneau to participate in 2 weeks of work experience with cooperating…

  15. Developmental Changes in the Visual Span for Reading

    PubMed Central

    Kwon, MiYoung; Legge, Gordon E.; Dubbels, Brock R.

    2007-01-01

    The visual span for reading refers to the range of letters, formatted as in text, that can be recognized reliably without moving the eyes. It is likely that the size of the visual span is determined primarily by characteristics of early visual processing. It has been hypothesized that the size of the visual span imposes a fundamental limit on reading speed (Legge, Mansfield, & Chung, 2001). The goal of the present study was to investigate developmental changes in the size of the visual span in school-age children, and the potential impact of these changes on children’s reading speed. The study design included groups of 10 children in 3rd, 5th, and 7th grade, and 10 adults. Visual span profiles were measured by asking participants to recognize letters in trigrams (random strings of three letters) flashed for 100 ms at varying letter positions left and right of the fixation point. Two print sizes (0.25° and 1.0°) were used. Over a block of trials, a profile was built up showing letter recognition accuracy (% correct) versus letter position. The area under this profile was defined to be the size of the visual span. Reading speed was measured in two ways: with Rapid Serial Visual Presentation (RSVP) and with short blocks of text (termed Flashcard presentation). Consistent with our prediction, we found that the size of the visual span increased linearly with grade level and it was significantly correlated with reading speed for both presentation methods. Regression analysis using the size of the visual span as a predictor indicated that 34% to 52% of variability in reading speeds can be accounted for by the size of the visual span. These findings are consistent with a significant role of early visual processing in the development of reading skills. PMID:17845810
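
    The visual-span measure described in this record (the area under the profile of letter-recognition accuracy versus letter position) can be sketched as below. This is a minimal sketch, not the authors' code: the trapezoidal integration and the profile values are illustrative assumptions.

```python
def visual_span_size(positions, accuracy):
    """Trapezoidal area under the accuracy-vs-letter-position profile."""
    pts = list(zip(positions, accuracy))
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += 0.5 * (y0 + y1) * (x1 - x0)
    return area

# Proportion-correct letter recognition at positions left (-) and
# right (+) of fixation (position 0); the profile values are invented.
positions = [-4, -3, -2, -1, 0, 1, 2, 3, 4]
accuracy  = [0.55, 0.70, 0.85, 0.95, 1.00, 0.95, 0.85, 0.70, 0.55]
size = visual_span_size(positions, accuracy)
```

    A broader or flatter profile yields a larger area, i.e. a larger visual span, which on the study's account should support faster reading.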

  16. Remember to blink: Reduced attentional blink following instructions to forget.

    PubMed

    Taylor, Tracy L

    2018-04-24

    This study used rapid serial visual presentation (RSVP) to determine whether, in an item-method directed forgetting task, study word processing ends earlier for forget words than for remember words. The critical manipulation required participants to monitor an RSVP stream of black nonsense strings in which a single blue word was embedded. The next item to follow the word was a string of red fs that instructed the participant to forget the word or green rs that instructed the participant to remember the word. After the memory instruction, a probe string of black xs or os appeared at postinstruction positions 1-8. Accuracy in reporting the identity of the probe string revealed an attenuated attentional blink following instructions to forget. A yes-no recognition task that followed the study trials confirmed a directed forgetting effect, with better recognition of remember words than forget words. Considered in the context of control conditions that required participants to commit either all or none of the study words to memory, the pattern of probe identification accuracy following the directed forgetting task argues that an intention to forget releases limited-capacity attentional resources sooner than an instruction to remember, despite participants needing to maintain an ongoing rehearsal set in both cases.

  17. Do emotion-induced blindness and the attentional blink share underlying mechanisms? An event-related potential study of emotionally-arousing words.

    PubMed

    MacLeod, Jeffrey; Stewart, Brandie M; Newman, Aaron J; Arnell, Karen M

    2017-06-01

    When two targets are presented within approximately 500 ms of each other in the context of rapid serial visual presentation (RSVP), participants' ability to report the second target is reduced compared to when the targets are presented further apart in time. This phenomenon is known as the attentional blink (AB). The AB is increased in magnitude when the first target is emotionally arousing. Emotionally arousing stimuli can also capture attention and create an AB-like effect even when these stimuli are presented as to-be-ignored distractor items in a single-target RSVP task. This phenomenon is known as emotion-induced blindness (EIB). The phenomenological similarity in the behavioral results associated with the AB with an emotional T1 and EIB suggests that these effects may result from similar underlying mechanisms, a hypothesis that we tested using event-related electrical brain potentials (ERPs). Behavioral results replicated those reported previously, demonstrating an enhanced AB following an emotionally arousing target and a clear EIB effect. In both paradigms, highly arousing taboo/sexual words resulted in an increased early posterior negativity (EPN) component that has been suggested to represent early semantic activation and selection for further processing in working memory. In both paradigms, taboo/sexual words also produced an increased late positive potential (LPP) component that has been suggested to represent consolidation of a stimulus in working memory. Therefore, the ERP results provide evidence that the EIB and emotion-enhanced AB effects share a common underlying mechanism.

  18. 75 FR 20570 - Information Collection; Submission for OMB Review, Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-20

    ... instrument is to measure community impact of RSVP grantees. Comment 9. Three commenters suggested that there... of the instrument is to measure community impact of RSVP grantees and to clarify that the benefit of... purpose of the instrument is to measure community impact of RSVP grantees. Comment 18. One way to minimize...

  19. Response System with Variable Prescriptions (RSVP); A Faculty-Computer Partnership for Enhancement of Individualized Instruction.

    ERIC Educational Resources Information Center

    Kelly, J. Terence; Anandam, Kamala

    Miami-Dade Community College's Response System with Variable Prescriptions (RSVP) is an example of faculty-computer partnership directed toward individualizing instruction while managing up to 5,000 students in a single course, regardless of class format. Individualization of instruction is accomplished by RSVP by virtue of its potential for three…

  20. 45 CFR 2553.52 - Under what circumstances may a RSVP volunteer's service be terminated?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 4 2011-10-01 2011-10-01 false Under what circumstances may a RSVP volunteer's... (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM Volunteer Terms of Service § 2553.52 Under what circumstances may a RSVP volunteer's service be terminated? (a) A...

  1. 45 CFR 2553.52 - Under what circumstances may a RSVP volunteer's service be terminated?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 4 2013-10-01 2013-10-01 false Under what circumstances may a RSVP volunteer's... (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM Volunteer Terms of Service § 2553.52 Under what circumstances may a RSVP volunteer's service be terminated? (a) A...

  2. 45 CFR 2553.52 - Under what circumstances may a RSVP volunteer's service be terminated?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 4 2014-10-01 2014-10-01 false Under what circumstances may a RSVP volunteer's... (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM Volunteer Terms of Service § 2553.52 Under what circumstances may a RSVP volunteer's service be terminated? (a) A...

  3. 45 CFR 2553.52 - Under what circumstances may a RSVP volunteer's service be terminated?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Under what circumstances may a RSVP volunteer's... (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM Volunteer Terms of Service § 2553.52 Under what circumstances may a RSVP volunteer's service be terminated? (a) A...

  4. 45 CFR 2553.52 - Under what circumstances may a RSVP volunteer's service be terminated?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 4 2012-10-01 2012-10-01 false Under what circumstances may a RSVP volunteer's... (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM Volunteer Terms of Service § 2553.52 Under what circumstances may a RSVP volunteer's service be terminated? (a) A...

  5. Page mode reading with simulated scotomas: a modest effect of interline spacing on reading speed.

    PubMed

    Bernard, Jean-Baptiste; Scherlen, Anne-Catherine; Castet, Eric

    2007-12-01

    Crowding is thought to be one potent limiting factor of reading in peripheral vision. While several studies investigated how crowding between horizontally adjacent letters or words can influence eccentric reading, little attention has been paid to the influence of vertically adjacent lines of text. The goal of this study was to examine the dependence of page mode reading performance (speed and accuracy) on interline spacing. A gaze-contingent visual display was used to simulate a visual central scotoma while normally sighted observers read meaningful French sentences following MNREAD principles. The sensitivity of this new material to low-level factors was confirmed by showing strong effects of perceptual learning, print size and scotoma size on reading performance. In contrast, reading speed was only slightly modulated by interline spacing even for the largest range tested: a 26% gain for a 178% increase in spacing. This modest effect sharply contrasts with the dramatic influence of vertical word spacing found in a recent RSVP study. This discrepancy suggests either that vertical crowding is minimized when reading meaningful sentences, or that the interaction between crowding and other factors such as attention and/or visuo-motor control is dependent on the paradigm used to assess reading speed (page vs. RSVP mode).

  6. Garrison Institute on Aging-Lubbock Retired and Senior Volunteer Program (RSVP) Provides Services to South Plains, Texas.

    PubMed

    Blackmon, Joan; Boles, Annette N; Reddy, P Hemachandra

    2015-01-01

    The Texas Tech University Health Sciences Center (TTUHSC) Garrison Institute on Aging (GIA) was established to promote healthy aging through cutting-edge research on Alzheimer's disease (AD) and other diseases of aging, and through innovative educational and community outreach opportunities for students, clinicians, researchers, health care providers, and the public. The GIA sponsors the Lubbock Retired and Senior Volunteer Program (RSVP). According to the RSVP Operations Handbook, RSVP is one of the largest volunteer efforts in the nation. Through this program, volunteer skills and talents can be matched to assist with community needs. It is a federally funded program under the guidance of the Corporation for National and Community Service (CNCS) and Senior Corps (SC). Volunteers who participate in RSVP provide service in the following areas: food security, environmental awareness building and education, community need-based volunteer programs, and veteran services.

  7. Garrison Institute on Aging—Lubbock Retired and Senior Volunteer Program (RSVP) Provides Services to South Plains, Texas

    PubMed Central

    Blackmon, Joan; Boles, Annette N.; Reddy, P. Hemachandra

    2015-01-01

    The Texas Tech University Health Sciences Center (TTUHSC) Garrison Institute on Aging (GIA) was established to promote healthy aging through cutting-edge research on Alzheimer's disease (AD) and other diseases of aging, and through innovative educational and community outreach opportunities for students, clinicians, researchers, health care providers, and the public. The GIA sponsors the Lubbock Retired and Senior Volunteer Program (RSVP). According to the RSVP Operations Handbook, RSVP is one of the largest volunteer efforts in the nation. Through this program, volunteer skills and talents can be matched to assist with community needs. It is a federally funded program under the guidance of the Corporation for National and Community Service (CNCS) and Senior Corps (SC). Volunteers who participate in RSVP provide service in the following areas: food security, environmental awareness building and education, community need-based volunteer programs, and veteran services. PMID:26696877

  8. Fuzzy Decision-Making Fuser (FDMF) for Integrating Human-Machine Autonomous (HMA) Systems with Adaptive Evidence Sources.

    PubMed

    Liu, Yu-Ting; Pal, Nikhil R; Marathe, Amar R; Wang, Yu-Kai; Lin, Chin-Teng

    2017-01-01

    A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. This conclusion demonstrates the potential benefits of integrating autonomous systems with BCI systems.
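
    The record does not specify the FDMF's internals. As a loose, hypothetical illustration of the general idea of uncertainty-weighted fusion of human (EEG) and machine (computer-vision) decisions, one might combine the two target probabilities weighted by per-source reliability scores; every name and value below is invented and is not the FDMF itself.

```python
def fuse_decisions(p_human, rel_human, p_machine, rel_machine):
    """Reliability-weighted average of two target probabilities."""
    total = rel_human + rel_machine
    return (rel_human * p_human + rel_machine * p_machine) / total

# A fatigued human observer (low reliability) disagrees with a confident
# computer-vision classifier; the machine's vote dominates the fusion.
p = fuse_decisions(p_human=0.4, rel_human=0.2,
                   p_machine=0.9, rel_machine=0.8)
is_target = p >= 0.5
```

    Letting the reliability weights adapt over time (e.g. to fatigue or inattention) is one way such a fuser could compensate for the human variability the abstract highlights.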

  9. Fuzzy Decision-Making Fuser (FDMF) for Integrating Human-Machine Autonomous (HMA) Systems with Adaptive Evidence Sources

    PubMed Central

    Liu, Yu-Ting; Pal, Nikhil R.; Marathe, Amar R.; Wang, Yu-Kai; Lin, Chin-Teng

    2017-01-01

    A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. This conclusion demonstrates the potential benefits of integrating autonomous systems with BCI systems. PMID:28676734

  10. Disruption of visual awareness during the attentional blink is reflected by selective disruption of late-stage neural processing

    PubMed Central

    Harris, Joseph A.; McMahon, Alex R.; Woldorff, Marty G.

    2015-01-01

    Any information represented in the brain holds the potential to influence behavior. It is therefore of broad interest to determine the extent and quality of neural processing of stimulus input that occurs with and without awareness. The attentional blink is a useful tool for dissociating neural and behavioral measures of perceptual visual processing across conditions of awareness. The extent of higher-order visual information beyond basic sensory signaling that is processed during the attentional blink remains controversial. To determine what neural processing at the level of visual-object identification occurs in the absence of awareness, electrophysiological responses to images of faces and houses were recorded both within and outside of the attentional blink period during a rapid serial visual presentation (RSVP) stream. Electrophysiological results were sorted according to behavioral performance (correctly identified targets versus missed targets) within these blink and non-blink periods. An early index of face-specific processing (the N170, 140–220 ms post-stimulus) was observed regardless of whether the subject demonstrated awareness of the stimulus, whereas a later face-specific effect with the same topographic distribution (500–700 ms post-stimulus) was only seen for accurate behavioral discrimination of the stimulus content. The present findings suggest a multi-stage process of object-category processing, with only the later phase being associated with explicit visual awareness. PMID:23859644

  11. Age-Related Changes in the Ability to Switch between Temporal and Spatial Attention

    PubMed Central

    Callaghan, Eleanor; Holland, Carol; Kessler, Klaus

    2017-01-01

    Background: Identifying age-related changes in cognition that contribute towards reduced driving performance is important for the development of interventions to improve older adults’ driving and prolong the time that they can continue to drive. While driving, one is often required to switch from attending to events unfolding in time to distributing attention spatially. Although there is extensive research into both spatial attention and temporal attention and how these change with age, the literature on switching between these modalities of attention is limited within any age group. Methods: Age groups (21–30, 40–49, 50–59, 60–69 and 70+ years) were compared on their ability to switch between detecting a target in a rapid serial visual presentation (RSVP) stream and detecting a target in a visual search display. To manipulate the cost of switching, the target in the RSVP stream was either the first item in the stream (Target 1st), towards the end of the stream (Target Mid), or absent from the stream (Distractor Only). Visual search response times and accuracy were recorded. Target 1st trials behaved as no-switch trials, as attending to the remaining stream was not necessary. Target Mid and Distractor Only trials behaved as switch trials, as attending to the stream to the end was required. Results: Visual search response times (RTs) were longer on “Target Mid” and “Distractor Only” trials in comparison to “Target 1st” trials, reflecting switch-costs. Larger switch-costs were found in both the 40–49 and 60–69 years groups in comparison to the 21–30 years group when switching from the Target Mid condition. Discussion: Findings warrant further exploration as to whether there are age-related changes in the ability to switch between these modalities of attention while driving. If older adults display poor performance when switching between temporal and spatial attention while driving, then the development of an intervention to preserve and improve this ability would be beneficial. PMID:28261088

  12. 45 CFR 2553.42 - Is a RSVP volunteer a federal employee, an employee of the sponsor or of the volunteer station?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 4 2014-10-01 2014-10-01 false Is a RSVP volunteer a federal employee, an employee of the sponsor or of the volunteer station? 2553.42 Section 2553.42 Public Welfare Regulations... SENIOR VOLUNTEER PROGRAM Eligibility, Cost Reimbursements and Volunteer Assignments § 2553.42 Is a RSVP...

  13. 45 CFR 2553.42 - Is a RSVP volunteer a federal employee, an employee of the sponsor or of the volunteer station?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 4 2012-10-01 2012-10-01 false Is a RSVP volunteer a federal employee, an employee of the sponsor or of the volunteer station? 2553.42 Section 2553.42 Public Welfare Regulations... SENIOR VOLUNTEER PROGRAM Eligibility, Cost Reimbursements and Volunteer Assignments § 2553.42 Is a RSVP...

  14. 45 CFR 2553.42 - Is a RSVP volunteer a federal employee, an employee of the sponsor or of the volunteer station?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 4 2011-10-01 2011-10-01 false Is a RSVP volunteer a federal employee, an employee of the sponsor or of the volunteer station? 2553.42 Section 2553.42 Public Welfare Regulations... SENIOR VOLUNTEER PROGRAM Eligibility, Cost Reimbursements and Volunteer Assignments § 2553.42 Is a RSVP...

  15. 45 CFR 2553.42 - Is a RSVP volunteer a federal employee, an employee of the sponsor or of the volunteer station?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Is a RSVP volunteer a federal employee, an employee of the sponsor or of the volunteer station? 2553.42 Section 2553.42 Public Welfare Regulations... SENIOR VOLUNTEER PROGRAM Eligibility, Cost Reimbursements and Volunteer Assignments § 2553.42 Is a RSVP...

  16. 45 CFR 2553.42 - Is a RSVP volunteer a federal employee, an employee of the sponsor or of the volunteer station?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 4 2013-10-01 2013-10-01 false Is a RSVP volunteer a federal employee, an employee of the sponsor or of the volunteer station? 2553.42 Section 2553.42 Public Welfare Regulations... SENIOR VOLUNTEER PROGRAM Eligibility, Cost Reimbursements and Volunteer Assignments § 2553.42 Is a RSVP...

  17. 45 CFR 2553.41 - Who is eligible to be a RSVP volunteer?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Eligibility to serve as a RSVP volunteer shall not be restricted on the basis of formal education, experience, race, religion, color, national origin, sex, age, handicap or political affiliation. ...

  18. A model of the formation of illusory conjunctions in the time domain.

    PubMed

    Botella, J; Suero, M; Barriopedro, M I

    2001-12-01

    The authors present a model to account for the miscombination of features when stimuli are presented using the rapid serial visual presentation (RSVP) technique (illusory conjunctions in the time domain). It explains the distributions of responses through a mixture of trial outcomes. In some trials, attention is successfully focused on the target, whereas in others, the responses are based on partial information. Two experiments are presented that manipulated the mean processing time of the target-defining dimension and of the to-be-reported dimension, respectively. As predicted, the average origin of the responses is delayed when the target-defining dimension is lengthened, whereas it is earlier when the to-be-reported dimension is lengthened; in the first case the number of correct responses is dramatically reduced, whereas in the second it does not change. The results, a review of other research, and simulations carried out with a formal version of the model are all in close accordance with the predictions.

  19. Let's face the music: a behavioral and electrophysiological exploration of score reading.

    PubMed

    Gunter, Thomas C; Schmidt, Björn-Helmer; Besson, Mireille

    2003-09-01

    This experiment was carried out to determine whether reading diatonic violations in a musical score elicits endogenous ERP components similar to those elicited when such violations are heard in the auditory modality. In the behavioral study, musicians were visually presented with 120 scores of familiar musical pieces, half of which contained a diatonic violation. The score was presented in a measure-by-measure manner. Self-paced reading was significantly delayed for measures containing a violation, indicating that sight reading a violation requires additional effort. In the ERP study, the musical phrases were presented in an RSVP-like manner. We predicted that diatonic violations would elicit a late positive component. However, the ERP associated with the measure where a violation was presented showed a negativity instead. The negativity started around 100 ms and lasted for the entire recording period. This long-lasting negativity encompassed at least three distinct effects that were possibly related to violation detection, working memory processing, and a further integration/interpretation process.

  20. 45 CFR 2553.12 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... to help determine the impact of an RSVP project on the community, including the volunteers... reasonable accommodation, can perform the essential functions of a volunteer position that such individual... application, in which RSVP volunteers are recruited, enrolled, and placed on assignments. (p) Sponsor. A...

  1. 45 CFR 2553.12 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... to help determine the impact of an RSVP project on the community, including the volunteers... reasonable accommodation, can perform the essential functions of a volunteer position that such individual... application, in which RSVP volunteers are recruited, enrolled, and placed on assignments. (p) Sponsor. A...

  2. 45 CFR 2553.12 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... to help determine the impact of an RSVP project on the community, including the volunteers... reasonable accommodation, can perform the essential functions of a volunteer position that such individual... application, in which RSVP volunteers are recruited, enrolled, and placed on assignments. (p) Sponsor. A...

  3. 45 CFR 2553.12 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... to help determine the impact of an RSVP project on the community, including the volunteers... reasonable accommodation, can perform the essential functions of a volunteer position that such individual... application, in which RSVP volunteers are recruited, enrolled, and placed on assignments. (p) Sponsor. A...

  4. Does the mean adequately represent reading performance? Evidence from a cross-linguistic study

    PubMed Central

    Marinelli, Chiara V.; Horne, Joanna K.; McGeown, Sarah P.; Zoccolotti, Pierluigi; Martelli, Marialuisa

    2014-01-01

    Reading models are largely based on the interpretation of average data from normal or impaired readers, mainly drawn from English-speaking individuals. In the present study we evaluated the possible contribution of orthographic consistency in generating individual differences in reading behavior. We compared the reading performance of young adults speaking English (one of the most irregular orthographies) and Italian (a very regular orthography). In the 1st experiment we presented 22 English and 30 Italian readers with 5-letter words using the Rapid Serial Visual Presentation (RSVP) paradigm. In a 2nd experiment, we evaluated a new group of 26 English and 32 Italian proficient readers through the RSVP procedure and lists matched in the two languages for both number of phonemes and letters. The results of the two experiments indicate that English participants read at a similar rate but with much greater individual differences than the Italian participants. In a 3rd experiment, we extended these results to a vocal reaction time (vRT) task, examining the effect of word frequency. An ex-Gaussian distribution analysis revealed differences between languages in the size of the exponential parameter (tau) and in the variance (sigma), but not the mean, of the Gaussian component. Notably, English readers were more variable for both tau and sigma than Italian readers. The pattern of performance in English individuals runs counter to models of performance in timed tasks (Faust et al., 1999; Myerson et al., 2003) which envisage a general relationship between mean performance and variability; indeed, this relationship does not hold in the case of the English participants. The present data highlight the importance of developing reading models that not only capture mean level performance, but also variability across individuals, especially in order to account for cross-linguistic differences in reading behavior. PMID:25191289
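
    The ex-Gaussian decomposition used in the study's 3rd experiment separates an RT distribution into a Gaussian component (mean mu, standard deviation sigma) and an exponential tail (tau). A minimal sketch with made-up vRT parameters, using scipy's exponnorm distribution, which parameterizes the exponentially modified normal with shape K = tau / sigma, loc = mu, and scale = sigma:

```python
# Sketch of an ex-Gaussian fit to simulated reaction times (parameters are
# illustrative, not the study's data). An ex-Gaussian sample is the sum of
# a normal draw and an exponential draw, so its mean equals mu + tau.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, tau = 500.0, 50.0, 150.0          # hypothetical vRT values (ms)
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# Rough starting values help the maximum-likelihood fit converge.
K, loc, scale = stats.exponnorm.fit(rts, 3.0, loc=450.0, scale=60.0)
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
```

    Comparing tau_hat and sigma_hat between reader groups, rather than only mean RT, is what allows the kind of cross-linguistic variability contrast the abstract reports.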

  5. 75 FR 1608 - Proposed Information Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-12

    ... Corporation is soliciting comments concerning its proposed Stakeholder Assessment of Senior Corps RSVP... will be used by the community partners of current Senior Corps grantees for the national RSVP re... methods: (1) By mail sent to: Corporation for National and Community Service, Senior Corps; Attention...

  6. 45 CFR 2553.62 - What are the responsibilities of a volunteer station?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... that impact critical human and social needs, and regularly assess those assignments for continued...) Comply with all applicable civil rights laws and regulations including reasonable accommodation for RSVP volunteers with disabilities; and (f) Provide assigned RSVP volunteers the following support: (1) Orientation...

  7. Rapid Surveillance for Vector Presence (RSVP): Development of a novel system for detecting Aedes aegypti and Aedes albopictus.

    PubMed

    Montgomery, Brian L; Shivas, Martin A; Hall-Mendelin, Sonja; Edwards, Jim; Hamilton, Nicholas A; Jansen, Cassie C; McMahon, Jamie L; Warrilow, David; van den Hurk, Andrew F

    2017-03-01

    The globally important Zika, dengue and chikungunya viruses are primarily transmitted by the invasive mosquitoes, Aedes aegypti and Aedes albopictus. In Australia, there is an increasing risk that these species may invade highly urbanized regions and trigger outbreaks. We describe the development of a Rapid Surveillance for Vector Presence (RSVP) system to expedite presence-absence surveys for both species. We developed a methodology that uses molecular assays to efficiently screen pooled ovitrap (egg trap) samples for traces of target species ribosomal RNA. Firstly, specific real-time reverse transcription-polymerase chain reaction (RT-PCR) assays were developed which detect a single Ae. aegypti or Ae. albopictus first instar larva in samples containing 4,999 and 999 non-target mosquitoes, respectively. ImageJ software was evaluated as an automated egg counting tool using ovitrap collections obtained from Brisbane, Australia. Qualitative assessment of ovistrips was required prior to automation because ImageJ did not differentiate between Aedes eggs and other objects or contaminants on 44.5% of ovistrips assessed, thus compromising the accuracy of egg counts. As a proof of concept, the RSVP was evaluated in Brisbane, Rockhampton and Goomeri, locations where Ae. aegypti is considered absent, present, and at the margin of its range, respectively. In Brisbane, Ae. aegypti was not detected in 25 pools formed from 477 ovitraps, comprising ≈ 54,300 eggs. In Rockhampton, Ae. aegypti was detected in 4/6 pools derived from 45 ovitraps, comprising ≈ 1,700 eggs. In Goomeri, Ae. aegypti was detected in 5/8 pools derived from 62 ovitraps, comprising ≈ 4,200 eggs. RSVP can rapidly detect nucleic acids from low numbers of target species within large samples of endemic species aggregated from multiple ovitraps. This screening capability facilitates deployment of ovitrap configurations of varying spatial scales, from a single residential block to entire suburbs or towns. RSVP is a powerful tool for surveillance of invasive Aedes spp., validation of species eradication and quality assurance for vector control operations implemented during disease outbreaks.
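
    The pooling logic that makes this screening fast is simple to sketch: one assay per pool, and a pool tests positive only if any of its aggregated samples contains the target species. The trap identifiers and pool size below are hypothetical, not from the study:

```python
# Sketch of pooled presence-absence screening: many ovitrap samples are
# aggregated into pools, and each pool is assayed once. A positive pool
# means at least one member trap collected the target species.
def screen_pools(samples, pool_size):
    """samples: dict mapping trap_id -> True if target species present."""
    ids = sorted(samples)
    positive_pools = []
    for i in range(0, len(ids), pool_size):
        pool = ids[i:i + pool_size]
        if any(samples[t] for t in pool):
            positive_pools.append(pool)
    return positive_pools

# 20 traps screened with 4 assays instead of 20; only one trap is positive.
hits = screen_pools({f"trap{i}": (i == 7) for i in range(20)}, pool_size=5)
```

    When target prevalence is low (the common case in incursion surveillance), the assay count scales with the number of pools rather than the number of traps, which is what permits coverage from a single block up to entire suburbs.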

  8. Rapid Surveillance for Vector Presence (RSVP): Development of a novel system for detecting Aedes aegypti and Aedes albopictus

    PubMed Central

    Montgomery, Brian L.; Shivas, Martin A.; Hall-Mendelin, Sonja; Edwards, Jim; Hamilton, Nicholas A.; Jansen, Cassie C.; McMahon, Jamie L.; Warrilow, David

    2017-01-01

    Background The globally important Zika, dengue and chikungunya viruses are primarily transmitted by the invasive mosquitoes, Aedes aegypti and Aedes albopictus. In Australia, there is an increasing risk that these species may invade highly urbanized regions and trigger outbreaks. We describe the development of a Rapid Surveillance for Vector Presence (RSVP) system to expedite presence-absence surveys for both species. Methodology/Principal findings We developed a methodology that uses molecular assays to efficiently screen pooled ovitrap (egg trap) samples for traces of target species ribosomal RNA. Firstly, specific real-time reverse transcription-polymerase chain reaction (RT-PCR) assays were developed which detect a single Ae. aegypti or Ae. albopictus first instar larva in samples containing 4,999 and 999 non-target mosquitoes, respectively. ImageJ software was evaluated as an automated egg counting tool using ovitrap collections obtained from Brisbane, Australia. Qualitative assessment of ovistrips was required prior to automation because ImageJ did not differentiate between Aedes eggs and other objects or contaminants on 44.5% of ovistrips assessed, thus compromising the accuracy of egg counts. As a proof of concept, the RSVP was evaluated in Brisbane, Rockhampton and Goomeri, locations where Ae. aegypti is considered absent, present, and at the margin of its range, respectively. In Brisbane, Ae. aegypti was not detected in 25 pools formed from 477 ovitraps, comprising ≈ 54,300 eggs. In Rockhampton, Ae. aegypti was detected in 4/6 pools derived from 45 ovitraps, comprising ≈ 1,700 eggs. In Goomeri, Ae. aegypti was detected in 5/8 pools derived from 62 ovitraps, comprising ≈ 4,200 eggs. Conclusions/Significance RSVP can rapidly detect nucleic acids from low numbers of target species within large samples of endemic species aggregated from multiple ovitraps. This screening capability facilitates deployment of ovitrap configurations of varying spatial scales, from a single residential block to entire suburbs or towns. RSVP is a powerful tool for surveillance of invasive Aedes spp., validation of species eradication and quality assurance for vector control operations implemented during disease outbreaks. PMID:28339458

  9. 76 FR 17659 - National Center for Complementary and Alternative Medicine Announcement of Stakeholder Roundtable

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-30

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health National Center for... meaningful interaction, space is limited. To attend, please RSVP by Friday, April 1, 2011, by contacting.... To allow for meaningful interaction, space is limited. To attend, please RSVP by Friday, April 1...

  10. Ultrafast scene detection and recognition with limited visual information

    PubMed Central

    Hagmann, Carl Erick; Potter, Mary C.

    2016-01-01

    Humans can detect target color pictures of scenes depicting concepts like picnic or harbor in sequences of six or twelve pictures presented as briefly as 13 ms, even when the target is named after the sequence (Potter, Wyble, Hagmann, & McCourt, 2014). Such rapid detection suggests that feedforward processing alone enabled detection without recurrent cortical feedback. There is debate about whether coarse, global, low spatial frequencies (LSFs) provide predictive information to high cortical levels through the rapid magnocellular (M) projection of the visual path, enabling top-down prediction of possible object identities. To test the “Fast M” hypothesis, we compared detection of a named target across five stimulus conditions: unaltered color, blurred color, grayscale, thresholded monochrome, and LSF pictures. The pictures were presented for 13–80 ms in six-picture rapid serial visual presentation (RSVP) sequences. Blurred, monochrome, and LSF pictures were detected less accurately than normal color or grayscale pictures. When the target was named before the sequence, all picture types except LSF resulted in above-chance detection at all durations. Crucially, when the name was given only after the sequence, performance dropped and the monochrome and LSF pictures (but not the blurred pictures) were at or near chance. Thus, without advance information, monochrome and LSF pictures were rarely understood. The results offer only limited support for the Fast M hypothesis, suggesting instead that feedforward processing is able to activate conceptual representations without complementary reentrant processing. PMID:28255263
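
    An LSF stimulus like those described above can be approximated by low-pass filtering: a Gaussian blur removes high spatial frequencies while preserving coarse scene structure. The study's exact filtering parameters are not given, so the sigma below is an illustrative choice:

```python
# Approximating a low-spatial-frequency (LSF) picture with a Gaussian
# low-pass filter. The random array stands in for a grayscale scene photo;
# sigma controls the spatial-frequency cutoff (larger sigma = coarser image).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
image = rng.random((64, 64))                   # stand-in for a grayscale scene
lsf_image = gaussian_filter(image, sigma=4.0)  # coarse structure only
```

    Filtering suppresses the fine detail carried by high frequencies, which is why the blurred version has far less pixel-to-pixel variation than the original.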

  11. 45 CFR 2553.92 - What legal coverage does the Corporation make available to RSVP volunteers?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... available to RSVP volunteers? 2553.92 Section 2553.92 Public Welfare Regulations Relating to Public Welfare (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM... volunteers? It is within the Corporation's discretion to determine if Counsel is employed and counsel fees...

  12. 45 CFR 2553.92 - What legal coverage does the Corporation make available to RSVP volunteers?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... available to RSVP volunteers? 2553.92 Section 2553.92 Public Welfare Regulations Relating to Public Welfare (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM... volunteers? It is within the Corporation's discretion to determine if Counsel is employed and counsel fees...

  13. 45 CFR 2553.92 - What legal coverage does the Corporation make available to RSVP volunteers?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... available to RSVP volunteers? 2553.92 Section 2553.92 Public Welfare Regulations Relating to Public Welfare (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM... volunteers? It is within the Corporation's discretion to determine if Counsel is employed and counsel fees...

  14. 45 CFR 2553.92 - What legal coverage does the Corporation make available to RSVP volunteers?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... available to RSVP volunteers? 2553.92 Section 2553.92 Public Welfare Regulations Relating to Public Welfare (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM... volunteers? It is within the Corporation's discretion to determine if Counsel is employed and counsel fees...

  15. 45 CFR 2553.92 - What legal coverage does the Corporation make available to RSVP volunteers?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... available to RSVP volunteers? 2553.92 Section 2553.92 Public Welfare Regulations Relating to Public Welfare (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM... volunteers? It is within the Corporation's discretion to determine if Counsel is employed and counsel fees...

  16. National Retired Senior Volunteer Program Participant Impact Evaluation. Final Report.

    ERIC Educational Resources Information Center

    Booz Allen and Hamilton, Inc., Washington, DC.

    A study examined the long-term effects of participation in the Retired Senior Volunteer Program (RSVP) on participants from 20 RSVP projects nationwide. Three rounds of interviews were conducted. In Round 1, 750 volunteers were interviewed: 595 veteran volunteers and 155 new volunteers. In Round 2, 792 volunteers were interviewed: 175 new…

  17. 45 CFR 2553.71 - What is the process for application and award of a grant?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) Ensuring innovation and geographic, demographic, and programmatic diversity across the Corporation's RSVP... by a review team made up of trained individuals who are knowledgeable about RSVP, including current... the appropriate use of Federal funds as embodied in a protocol for fiscal management; (v) To what...

  18. Converting CSV Files to RKSML Files

    NASA Technical Reports Server (NTRS)

    Trebi-Ollennu, Ashitey; Liebersbach, Robert

    2009-01-01

    A computer program converts, into a format suitable for processing on Earth, files of downlinked telemetric data pertaining to the operation of the Instrument Deployment Device (IDD), which is a robot arm on either of the Mars Explorer Rovers (MERs). The raw downlinked data files are in comma-separated- value (CSV) format. The present program converts the files into Rover Kinematics State Markup Language (RKSML), which is an Extensible Markup Language (XML) format that facilitates representation of operations of the IDD and enables analysis of the operations by means of the Rover Sequencing Validation Program (RSVP), which is used to build sequences of commanded operations for the MERs. After conversion by means of the present program, the downlinked data can be processed by RSVP, enabling the MER downlink operations team to play back the actual IDD activity represented by the telemetric data against the planned IDD activity. Thus, the present program enhances the diagnosis of anomalies that manifest themselves as differences between actual and planned IDD activities.
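
    The conversion itself is a straightforward record-to-element mapping from CSV rows to XML. The actual RKSML schema is not described here, so the element names in this sketch (States, State, Joint) and the sample columns are hypothetical placeholders:

```python
# Minimal sketch of a CSV-to-XML conversion in the spirit of the CSV-to-RKSML
# tool. Each CSV row becomes one timestamped element; remaining columns
# become child elements. Tag names are invented, not the real RKSML schema.
import csv
import io
import xml.etree.ElementTree as ET

def csv_to_xml(csv_text):
    root = ET.Element("States")
    for row in csv.DictReader(io.StringIO(csv_text)):
        state = ET.SubElement(root, "State", time=row.pop("time"))
        for name, value in row.items():
            ET.SubElement(state, "Joint", name=name).text = value
    return ET.tostring(root, encoding="unicode")

xml_out = csv_to_xml("time,shoulder,elbow\n0.0,1.5,0.7\n1.0,1.6,0.8\n")
```

    Emitting a structured, schema-conformant document is what lets a downstream tool replay the telemetry against a planned sequence, rather than parsing raw columns ad hoc.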

  19. Perceptual Repetition Blindness Effects

    NASA Technical Reports Server (NTRS)

    Hochhaus, Larry; Johnston, James C.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    The phenomenon of repetition blindness (RB) may reveal a new limitation on human perceptual processing. Recently, however, researchers have attributed RB to post-perceptual processes such as memory retrieval and/or reporting biases. The standard rapid serial visual presentation (RSVP) paradigm used in most RB studies is, indeed, open to such objections. Here we investigate RB using a "single-frame" paradigm introduced by Johnston and Hale (1984) in which memory demands are minimal. Subjects made only a single judgement about whether one masked target word was the same as or different from a post-target probe. Confidence ratings permitted use of signal detection methods to assess sensitivity and bias effects. In the critical condition for RB a precue of the post-target word was provided prior to the target stimulus (identity precue), so that the required judgement amounted to whether the target did or did not repeat the precue word. In control treatments, the precue was either an unrelated word or a dummy.
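
    The signal-detection analysis mentioned above computes sensitivity (d') and response bias (criterion c) from hit and false-alarm rates; the rates below are made-up numbers for illustration.

```python
# Standard equal-variance signal-detection measures: d' separates perceptual
# sensitivity from response bias, which is exactly the distinction needed to
# tell a perceptual RB deficit from a reporting bias.
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(fa_rate))

d = dprime(0.85, 0.20)  # higher d' = better target/probe discrimination
```

    A drop in d' under the identity precue would indicate a genuinely perceptual RB effect, whereas a shift only in the criterion would point to bias.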

  20. Concurrent Memory Load Can Make RSVP Search More Efficient

    ERIC Educational Resources Information Center

    Gil-Gomez de Liano, Beatriz; Botella, Juan

    2011-01-01

    The detrimental effect of increased memory load on selective attention has been demonstrated in many situations. However, in search tasks over time using RSVP methods, it is not clear how memory load affects attentional processes; no effects as well as beneficial and detrimental effects of memory load have been found in these types of tasks. The…

  1. 45 CFR 2553.91 - What legal limitations apply to the operation of the RSVP Program and to the expenditure of grant...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... definition, development, practice, and expression of its religious beliefs, provided that it does not use... volunteer with a disability is qualified to serve. (g) Religious activities. (1) A RSVP volunteer or a member of the project staff funded by the Corporation shall not give religious instruction, conduct...

  2. 45 CFR 2553.91 - What legal limitations apply to the operation of the RSVP Program and to the expenditure of grant...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... definition, development, practice, and expression of its religious beliefs, provided that it does not use... volunteer with a disability is qualified to serve. (g) Religious activities. (1) A RSVP volunteer or a member of the project staff funded by the Corporation shall not give religious instruction, conduct...

  3. 45 CFR 2553.91 - What legal limitations apply to the operation of the RSVP Program and to the expenditure of grant...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... definition, development, practice, and expression of its religious beliefs, provided that it does not use... volunteer with a disability is qualified to serve. (g) Religious activities. (1) A RSVP volunteer or a member of the project staff funded by the Corporation shall not give religious instruction, conduct...

  4. 45 CFR 2553.91 - What legal limitations apply to the operation of the RSVP Program and to the expenditure of grant...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... definition, development, practice, and expression of its religious beliefs, provided that it does not use... volunteer with a disability is qualified to serve. (g) Religious activities. (1) A RSVP volunteer or a member of the project staff funded by the Corporation shall not give religious instruction, conduct...

  5. 45 CFR 2553.91 - What legal limitations apply to the operation of the RSVP Program and to the expenditure of grant...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... definition, development, practice, and expression of its religious beliefs, provided that it does not use... volunteer with a disability is qualified to serve. (g) Religious activities. (1) A RSVP volunteer or a member of the project staff funded by the Corporation shall not give religious instruction, conduct...

  6. Rural Student Vocational Program (RSVP) [and] Housing Guide for Parents and Students [and] Work Supervisor's Guide.

    ERIC Educational Resources Information Center

    Rural Student Vocational Program, Wasilla, AK.

    The purpose of the Rural Student Vocational Program (RSVP) is to provide rural high school vocational students with work and other experiences related to their career objective. Students from outlying schools travel to Anchorage, Fairbanks, or Juneau (Alaska) to participate in two weeks of work experience with cooperating agencies and businesses.…

  7. 45 CFR 2553.44 - May cost reimbursements received by a RSVP volunteer be subject to any tax or charge, treated as...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false May cost reimbursements received by a RSVP... benefit payments or minimum wage laws. Cost reimbursements are not subject to garnishment, do not reduce... receive assistance from other programs? 2553.44 Section 2553.44 Public Welfare Regulations Relating to...

  8. 45 CFR 2553.44 - May cost reimbursements received by a RSVP volunteer be subject to any tax or charge, treated as...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... volunteer be subject to any tax or charge, treated as wages or compensation, or affect eligibility to... VOLUNTEER PROGRAM Eligibility, Cost Reimbursements and Volunteer Assignments § 2553.44 May cost reimbursements received by a RSVP volunteer be subject to any tax or charge, treated as wages or compensation, or...

  9. 45 CFR 2553.44 - May cost reimbursements received by a RSVP volunteer be subject to any tax or charge, treated as...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... volunteer be subject to any tax or charge, treated as wages or compensation, or affect eligibility to... VOLUNTEER PROGRAM Eligibility, Cost Reimbursements and Volunteer Assignments § 2553.44 May cost reimbursements received by a RSVP volunteer be subject to any tax or charge, treated as wages or compensation, or...

  10. 45 CFR 2553.44 - May cost reimbursements received by a RSVP volunteer be subject to any tax or charge, treated as...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... volunteer be subject to any tax or charge, treated as wages or compensation, or affect eligibility to... VOLUNTEER PROGRAM Eligibility, Cost Reimbursements and Volunteer Assignments § 2553.44 May cost reimbursements received by a RSVP volunteer be subject to any tax or charge, treated as wages or compensation, or...

  11. 45 CFR 2553.44 - May cost reimbursements received by a RSVP volunteer be subject to any tax or charge, treated as...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... volunteer be subject to any tax or charge, treated as wages or compensation, or affect eligibility to... VOLUNTEER PROGRAM Eligibility, Cost Reimbursements and Volunteer Assignments § 2553.44 May cost reimbursements received by a RSVP volunteer be subject to any tax or charge, treated as wages or compensation, or...

  12. The cognitive neuroscience of person identification.

    PubMed

    Biederman, Irving; Shilowich, Bryan E; Herald, Sarah B; Margalit, Eshed; Maarek, Rafael; Meschke, Emily X; Hacker, Catrina M

    2018-02-14

    We compare and contrast five differences between person identification by voice and face. 1. There is little or no cost when a familiar face is to be recognized from an unrestricted set of possible faces, even at Rapid Serial Visual Presentation (RSVP) rates, but the accuracy of familiar voice recognition declines precipitously when the set of possible speakers is increased from one to a mere handful. 2. Whereas deficits in face recognition are typically perceptual in origin, those with normal perception of voices can manifest severe deficits in their identification. 3. Congenital prosopagnosics (CPros) and congenital phonagnosics (CPhon) are generally unable to imagine familiar faces and voices, respectively. Only in CPros, however, is this deficit a manifestation of a general inability to form visual images of any kind. CPhons report no deficit in imaging non-voice sounds. 4. The prevalence of CPhons of 3.2% is somewhat higher than the reported prevalence of approximately 2.0% for CPros in the population. There is evidence that CPhon represents a distinct condition statistically and not just normal variation. 5. Face and voice recognition proficiency are uncorrelated rather than reflecting limitations of a general capacity for person individuation. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Sound segregation via embedded repetition is robust to inattention.

    PubMed

    Masutomi, Keiko; Barascud, Nicolas; Kashino, Makio; McDermott, Josh H; Chait, Maria

    2016-03-01

    The segregation of sound sources from the mixture of sounds that enters the ear is a core capacity of human hearing, but the extent to which this process is dependent on attention remains unclear. This study investigated the effect of attention on the ability to segregate sounds via repetition. We utilized a dual task design in which stimuli to be segregated were presented along with stimuli for a "decoy" task that required continuous monitoring. The task to assess segregation presented a target sound 10 times in a row, each time concurrent with a different distractor sound. McDermott, Wrobleski, and Oxenham (2011) demonstrated that repetition causes the target sound to be segregated from the distractors. Segregation was queried by asking listeners whether a subsequent probe sound was identical to the target. A control task presented similar stimuli but probed discrimination without engaging segregation processes. We present results from 3 different decoy tasks: a visual multiple object tracking task, a rapid serial visual presentation (RSVP) digit encoding task, and a demanding auditory monitoring task. Load was manipulated by using high- and low-demand versions of each decoy task. The data provide converging evidence of a small effect of attention that is nonspecific, in that it affected the segregation and control tasks to a similar extent. In all cases, segregation performance remained high despite the presence of a concurrent, objectively demanding decoy task. The results suggest that repetition-based segregation is robust to inattention. (c) 2016 APA, all rights reserved.

  14. Neurotechnology for intelligence analysts

    NASA Astrophysics Data System (ADS)

    Kruse, Amy A.; Boyd, Karen C.; Schulman, Joshua J.

    2006-05-01

    Geospatial Intelligence Analysts are currently faced with an enormous volume of imagery, only a fraction of which can be processed or reviewed in a timely operational manner. Computer-based target detection efforts have failed to yield the speed, flexibility and accuracy of the human visual system. Rather than focus solely on artificial systems, we hypothesize that the human visual system is still the best target detection apparatus currently in use, and with the addition of neuroscience-based measurement capabilities it can surpass the throughput of the unaided human severalfold. Using electroencephalography (EEG), Thorpe et al. [1] described a fast signal in the brain associated with the early detection of targets in static imagery using a Rapid Serial Visual Presentation (RSVP) paradigm. This finding suggests that it may be possible to extract target detection signals from complex imagery in real time utilizing non-invasive neurophysiological assessment tools. To transform this phenomenon into a capability for defense applications, the Defense Advanced Research Projects Agency (DARPA) currently is sponsoring an effort titled Neurotechnology for Intelligence Analysts (NIA). The vision of the NIA program is to revolutionize the way that analysts handle intelligence imagery, increasing both the throughput of imagery to the analyst and overall accuracy of the assessments. Successful development of a neurobiologically-based image triage system will enable image analysts to train more effectively and process imagery with greater speed and precision.

  15. A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks

    NASA Technical Reports Server (NTRS)

    Cui, Zhenqian

    1999-01-01

    In this thesis, we analyze various factors that affect quality of service (QoS) communication in high-speed, packet-switching sub-networks. We hypothesize that sub-network-wide bandwidth reservation and guaranteed CPU processing power at endpoint systems for handling data traffic are indispensable to achieving hard end-to-end quality of service. Different bandwidth reservation strategies, traffic characterization schemes, and scheduling algorithms affect the network resources and CPU usage as well as the extent that QoS can be achieved. In order to analyze those factors, we design and implement a communication layer. Our experimental analysis supports our research hypothesis. The Resource ReSerVation Protocol (RSVP) is designed to realize resource reservation. Our analysis of RSVP shows that using RSVP solely is insufficient to provide hard end-to-end quality of service in a high-speed sub-network. Analysis of the IEEE 802.1p protocol also supports the research hypothesis.

  16. Does Vertical Reading Help People with Macular Degeneration: An Exploratory Study

    PubMed Central

    Calabrèse, Aurélie; Liu, Tingting; Legge, Gordon E.

    2017-01-01

    Individuals with macular degeneration often develop a Preferred Retinal Locus (PRL) used in place of the impaired fovea. It is known that many people adopt a PRL left of the scotoma, which is likely to affect reading by occluding text to the right of fixation. For such individuals, we examined the possibility that reading vertical text, in which words are rotated 90° with respect to the normal horizontal orientation, would be beneficial for reading. Vertically oriented words would be tangential to the scotoma instead of being partially occluded by it. Here we report the results of an exploratory study that aimed at investigating this hypothesis. We trained individuals with macular degeneration who had PRLs left of their scotoma to read text rotated 90° clockwise and presented using rapid serial visual presentation (RSVP). Although training resulted in improved reading of vertical text, the training did not result in reading speeds that appreciably exceeded reading speeds following training with horizontal text. These results do not support the hypothesis that people with left PRLs read faster with vertical text. PMID:28114373

  17. Illusions of integration are subjectively impenetrable: Phenomenological experience of Lag 1 percepts during dual-target RSVP.

    PubMed

    Simione, Luca; Akyürek, Elkan G; Vastola, Valentina; Raffone, Antonino; Bowman, Howard

    2017-05-01

    We investigated the relationship between different kinds of target reports in a rapid serial visual presentation task, and their associated perceptual experience. Participants reported the identity of two targets embedded in a stream of stimuli and their associated subjective visibility. In our task, target stimuli could be combined together to form more complex ones, thus allowing participants to report temporally integrated percepts. We found that integrated percepts were associated with high subjective visibility scores, whereas reports in which the order of targets was reversed led to a poorer perceptual experience. We also found a reciprocal relationship between the chance of the second target not being reported correctly and the perceptual experience associated with the first one. Principally, our results indicate that integrated percepts are experienced as a unique, clear perceptual event, whereas order reversals are experienced as confused, similar to cases in which an entirely wrong response was given. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Large capacity temporary visual memory

    PubMed Central

    Endress, Ansgar D.; Potter, Mary C.

    2014-01-01

    Visual working memory (WM) capacity is thought to be limited to three or four items. However, many cognitive activities seem to require larger temporary memory stores. Here, we provide evidence for a temporary memory store with much larger capacity than past WM capacity estimates. Further, based on previous WM research, we show that a single factor — proactive interference — is sufficient to bring capacity estimates down to the range of previous WM capacity estimates. Participants saw a rapid serial visual presentation (RSVP) of 5 to 21 pictures of familiar objects or words presented at rates of 4/s or 8/s, respectively, and thus too fast for strategies such as rehearsal. Recognition memory was tested with a single probe item. When new items were used on all trials, no fixed memory capacities were observed, with estimates of up to 9.1 retained pictures for 21-item lists, and up to 30.0 retained pictures for 100-item lists, and no clear upper bound to how many items could be retained. Further, memory items were not stored in a temporally stable form of memory, but decayed almost completely after a few minutes. In contrast, when, as in most WM experiments, a small set of items was reused across all trials, thus creating proactive interference among items, capacity remained in the range reported in previous WM experiments. These results show that humans have a large-capacity temporary memory store in the absence of proactive interference, and raise the question of whether temporary memory in everyday cognitive processing is severely limited as in WM experiments, or has the much larger capacity found in the present experiments. PMID:23937181

  19. Subliminal Salience Search Illustrated: EEG Identity and Deception Detection on the Fringe of Awareness

    PubMed Central

    Bowman, Howard; Filetti, Marco; Janssen, Dirk; Su, Li; Alsufyani, Abdulmajeed; Wyble, Brad

    2013-01-01

    We propose a novel deception detection system based on Rapid Serial Visual Presentation (RSVP). One motivation for the new method is to present stimuli on the fringe of awareness, such that it is more difficult for deceivers to confound the deception test using countermeasures. The proposed system is able to detect identity deception (by using the first names of participants) with a 100% hit rate (at an alpha level of 0.05). To achieve this, we extended the classic Event-Related Potential (ERP) techniques (such as peak-to-peak) by applying Randomisation, a form of Monte Carlo resampling, which we used to detect deception at an individual level. In order to make the deployment of the system simple and rapid, we utilised data from three electrodes only: Fz, Cz and Pz. We then combined data from the three electrodes using Fisher's method so that each participant was assigned a single p-value, which represents the combined probability that a specific participant was being deceptive. We also present subliminal salience search as a general method to determine what participants find salient by detecting breakthrough into conscious awareness using EEG. PMID:23372697
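The per-participant statistic in the record above combines one p-value per electrode (Fz, Cz, Pz) with Fisher's method. A minimal sketch of that combination step follows; the three electrode p-values are invented for illustration, not taken from the study:

```python
import math

def fisher_combine(pvals):
    """Fisher's method: combine k independent p-values into one.

    The statistic X = -2 * sum(ln p_i) is chi-square distributed with
    2k degrees of freedom under the null; for even degrees of freedom
    the chi-square survival function has the closed form
    exp(-X/2) * sum_{j=0}^{k-1} (X/2)^j / j!.
    """
    k = len(pvals)
    half = -sum(math.log(p) for p in pvals)  # X / 2
    return math.exp(-half) * sum(half ** j / math.factorial(j) for j in range(k))

# hypothetical per-electrode p-values for one participant
p_fz, p_cz, p_pz = 0.06, 0.09, 0.11
combined = fisher_combine([p_fz, p_cz, p_pz])  # ~0.021: significant jointly,
                                               # though no electrode is alone
```

This illustrates why pooling the three electrodes is useful: three individually marginal effects can yield a single p-value below the 0.05 threshold used in the study.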

  20. Visual temporal processing in dyslexia and the magnocellular deficit theory: the need for speed?

    PubMed

    McLean, Gregor M T; Stuart, Geoffrey W; Coltheart, Veronika; Castles, Anne

    2011-12-01

    A controversial question in reading research is whether dyslexia is associated with impairments in the magnocellular system and, if so, how these low-level visual impairments might affect reading acquisition. This study used a novel chromatic flicker perception task to specifically explore temporal aspects of magnocellular functioning in 40 children with dyslexia and 42 age-matched controls (aged 7-11). The relationship between magnocellular temporal resolution and higher-level aspects of visual temporal processing including inspection time, single and dual-target (attentional blink) RSVP performance, go/no-go reaction time, and rapid naming was also assessed. The Dyslexia group exhibited significant deficits in magnocellular temporal resolution compared with controls, but the two groups did not differ in parvocellular temporal resolution. Despite the significant group differences, associations between magnocellular temporal resolution and reading ability were relatively weak, and links between low-level temporal resolution and reading ability did not appear specific to the magnocellular system. Factor analyses revealed that a collective Perceptual Speed factor, involving both low-level and higher-level visual temporal processing measures, accounted for unique variance in reading ability independently of phonological processing, rapid naming, and general ability.

  1. The attentional blink in amblyopia.

    PubMed

    Popple, Ariella V; Levi, Dennis M

    2008-10-31

    Amblyopia is a disorder of visual acuity in one eye, thought to arise from suppression by the other eye during development of the visual cortex. In the attentional blink, the second of two targets (T2) in a Rapid Serial Visual Presentation (RSVP) stream is difficult to detect and identify when it appears shortly but not immediately after the first target (T1). We investigated the attentional blink seen through amblyopic eyes and found that it was less finely tuned in time than when the 12 amblyopic observers viewed the stimuli with their preferred eyes. T2 performance was slightly better through amblyopic eyes two frames after T1 but worse one frame after T1. Previously (A. V. Popple & D. M. Levi, 2007), we showed that when the targets were red letters in a stream of gray letters (or vice versa), normal observers frequently confused T2 with the letters before and after it (neighbor errors). Observers viewing through their amblyopic eyes made significantly fewer neighbor errors and more T2 responses consisting of letters that were never presented. In normal observers, T1 (on the rare occasions when it was reported incorrectly) was often confused with the letter immediately after it. Viewing through their amblyopic eyes, observers with amblyopia made more responses to the letter immediately before T1. These results suggest that childhood suppression of the input from amblyopic eyes disrupts attentive processing. We hypothesize reduced connectivity between monocularly tuned lower visual areas, subcortical structures that drive foveal attention, and more frontal regions of the brain responsible for letter recognition and working memory. Perhaps when viewing through their amblyopic eyes, the observers were still processing the letter identity of a prior distractor when the color flash associated with the target was detected. After T1, unfocused temporal attention may have bound together erroneously the features of succeeding letters, resulting in the appearance of letters that were not actually presented. These findings highlight the role of early (monocular) visual processes in modulating the attentional blink, as well as the role of attention in amblyopic visual deficits.

  2. Expectations impact short-term memory through changes in connectivity between attention- and task-related brain regions.

    PubMed

    Sinke, Christopher; Forkmann, Katarina; Schmidt, Katharina; Wiech, Katja; Bingel, Ulrike

    2016-05-01

    Over the recent years, neuroimaging studies have investigated the neural mechanisms underlying the influence of expectations on perception. However, it seems equally reasonable to assume that expectations impact cognitive functions. Here we used fMRI to explore the role of expectations on task performance and its underlying neural mechanisms. 43 healthy participants were randomly assigned to two groups. Using verbal instructions, group 1 was led to believe that pain enhances task performance while group 2 was instructed that pain hampers their performance. All participants performed a Rapid-Serial-Visual-Presentation (RSVP) Task (target detection and short-term memory component) with or without concomitant painful heat stimulation during 3T fMRI scanning. As hypothesized, short-term memory performance showed an interaction between painful stimulation and expectation. Positive expectations induced stronger neural activation in the right inferior parietal cortex (IPC) during painful stimulation than negative expectation. Moreover, IPC displayed differential functional coupling with the left inferior occipital cortex under pain as a function of expectancy. Our data show that an individual's expectation can influence cognitive performance in a visual short-term memory task which is associated with activity and connectivity changes in brain areas implicated in attentional processing and task performance. Copyright © 2016. Published by Elsevier Ltd.

  3. Relationship between slow visual processing and reading speed in people with macular degeneration

    PubMed Central

    Cheong, Allen MY; Legge, Gordon E; Lawrence, Mary G; Cheung, Sing-Hang; Ruff, Mary A

    2007-01-01

    Purpose People with macular degeneration (MD) often read slowly even with adequate magnification to compensate for acuity loss. Oculomotor deficits may affect reading in MD, but cannot fully explain the substantial reduction in reading speed. Central-field loss (CFL) is often a consequence of macular degeneration, necessitating the use of peripheral vision for reading. We hypothesized that slower temporal processing of visual patterns in peripheral vision is a factor contributing to slow reading performance in MD patients. Methods Fifteen subjects with MD, including 12 with CFL, and five age-matched control subjects were recruited. Maximum reading speed and critical print size were measured with RSVP (Rapid Serial Visual Presentation). Temporal processing speed was studied by measuring letter-recognition accuracy for strings of three randomly selected letters centered at fixation for a range of exposure times. Temporal threshold was defined as the exposure time yielding 80% recognition accuracy for the central letter. Results Temporal thresholds for the MD subjects ranged from 159 to 5881 ms, much longer than values for age-matched controls in central vision (13 ms, p<0.01). The mean temporal threshold for the 11 MD subjects who used eccentric fixation (1555.8 ± 1708.4 ms) was much longer than the mean temporal threshold (97.0 ms ± 34.2 ms, p<0.01) for the age-matched controls at 10° in the lower visual field. Individual temporal thresholds accounted for 30% of the variance in reading speed (p<0.05). Conclusion The significant association between increased temporal threshold for letter recognition and reduced reading speed is consistent with the hypothesis that slower visual processing of letter recognition is one of the factors limiting reading speed in MD subjects. PMID:17881032
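The temporal threshold defined in the record above (the exposure time yielding 80% recognition accuracy) amounts to reading a criterion point off a psychometric function. A minimal sketch using linear interpolation between measured points; the accuracy values below are invented for illustration, and a real analysis would typically fit a smooth psychometric function instead:

```python
def threshold_at(criterion, points):
    """Interpolate the exposure time at a criterion accuracy level.

    points: (exposure_ms, proportion_correct) pairs with accuracy
    increasing monotonically with exposure time.
    """
    pts = sorted(points)
    for (t0, a0), (t1, a1) in zip(pts, pts[1:]):
        if a0 <= criterion <= a1:
            # linear interpolation within the bracketing interval
            return t0 + (criterion - a0) / (a1 - a0) * (t1 - t0)
    raise ValueError("criterion accuracy outside the measured range")

# invented letter-recognition accuracies at four exposure times
data = [(13, 0.55), (50, 0.70), (160, 0.85), (500, 0.95)]
t80 = threshold_at(0.80, data)  # ~123 ms for this invented observer
```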

  4. Spatial-frequency requirements for reading revisited

    PubMed Central

    Kwon, MiYoung; Legge, Gordon E.

    2012-01-01

    Blur is one of many visual factors that can limit reading in both normal and low vision. Legge et al. [Legge, G. E., Pelli, D. G., Rubin, G. S., & Schleske, M. M. (1985). Psychophysics of reading. I. Normal vision. Vision Research, 25, 239–252.] measured reading speed for text that was low-pass filtered with a range of cutoff spatial frequencies. Above 2 cycles per letter (CPL) reading speed was constant at its maximum level, but decreased rapidly for lower cutoff frequencies. It remains unknown why the critical cutoff for reading speed is near 2 CPL. The goal of the current study was to ask whether the spatial-frequency requirement for rapid reading is related to the effects of cutoff frequency on letter recognition and the size of the visual span. Visual span profiles were measured by asking subjects to recognize letters in trigrams (random strings of three letters) flashed for 150 ms at varying letter positions left and right of the fixation point. Reading speed was measured with Rapid Serial Visual Presentation (RSVP). The size of the visual span and reading speed were measured for low-pass filtered stimuli with cutoff frequencies from 0.8 to 8 CPL. Low-pass letter recognition data, obtained under similar testing conditions, were available from our previous study (Kwon & Legge, 2011). We found that the spatial-frequency requirement for reading is very similar to the spatial-frequency requirements for the size of the visual span and single letter recognition. The critical cutoff frequencies for reading speed, the size of the visual span and a contrast-invariant measure of letter recognition were all near 1.4 CPL, which is lower than the previous estimate of 2 CPL for reading speed. Although correlational in nature, these results are consistent with the hypothesis that the size of the visual span is closely linked to reading speed. PMID:22521659

  5. Lateralization of posterior alpha EEG reflects the distribution of spatial attention during saccadic reading.

    PubMed

    Kornrumpf, Benthe; Dimigen, Olaf; Sommer, Werner

    2017-06-01

    Visuospatial attention is an important mechanism in reading that governs the uptake of information from foveal and parafoveal regions of the visual field. However, the spatiotemporal dynamics of how attention is allocated during eye fixations are not completely understood. The current study explored the use of EEG alpha-band oscillations to investigate the spatial distribution of attention during reading. We reanalyzed two data sets, focusing on the lateralization of alpha activity at posterior scalp sites. In each experiment, participants read short lists of German nouns in two paradigms: either by freely moving their eyes (saccadic reading) or by fixating the screen center while the text moved passively from right to left at the same average speed (RSVP paradigm). In both paradigms, upcoming words were either visible or masked, and foveal processing load was manipulated by varying the words' lexical frequencies. Posterior alpha lateralization revealed a sustained rightward bias of attention during saccadic reading, but not in the RSVP paradigm. Interestingly, alpha lateralization was not influenced by word frequency (foveal load) or preview during the preceding fixation. Hence, alpha did not reflect transient attention shifts within a given fixation. However, in both experiments, we found that in the saccadic reading condition a stronger alpha lateralization shortly before a saccade predicted shorter fixations on the subsequently fixated word. These results indicate that alpha lateralization can serve as a measure of attention deployment and its link to oculomotor behavior in reading. © 2017 Society for Psychophysiological Research.

  6. Reading speed in the peripheral visual field of older adults: Does it benefit from perceptual learning?

    PubMed

    Yu, Deyue; Cheung, Sing-Hang; Legge, Gordon E; Chung, Susana T L

    2010-04-21

    Enhancing reading ability in peripheral vision is important for the rehabilitation of people with central-visual-field loss from age-related macular degeneration (AMD). Previous research has shown that perceptual learning, based on a trigram letter-recognition task, improved peripheral reading speed among normally-sighted young adults (Chung, Legge, & Cheung, 2004). Here we ask whether the same happens in older adults in an age range more typical of the onset of AMD. Eighteen normally-sighted subjects, aged 55-76years, were randomly assigned to training or control groups. Visual-span profiles (plots of letter-recognition accuracy as a function of horizontal letter position) and RSVP reading speeds were measured at 10 degrees above and below fixation during pre- and post-tests for all subjects. Training consisted of repeated measurements of visual-span profiles at 10 degrees below fixation, in four daily sessions. The control subjects did not receive any training. Perceptual learning enlarged the visual spans in both trained (lower) and untrained (upper) visual fields. Reading speed improved in the trained field by 60% when the trained print size was used. The training benefits for these older subjects were weaker than the training benefits for young adults found by Chung et al. Despite the weaker training benefits, perceptual learning remains a potential option for low-vision reading rehabilitation among older adults. Copyright 2010 Elsevier Ltd. All rights reserved.

  7. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface

    PubMed Central

    Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; Ball, Kenneth R.; Lance, Brent J.

    2016-01-01

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step-forward in the overall goal of achieving a practical user-independent BCI system. PMID:27713685

  8. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface.

    PubMed

    Waytowich, Nicholas R; Lawhern, Vernon J; Bohannon, Addison W; Ball, Kenneth R; Lance, Brent J

    2016-01-01

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step-forward in the overall goal of achieving a practical user-independent BCI system.

  9. Improved Neural Signal Classification in a Rapid Serial Visual Presentation Task Using Active Learning.

    PubMed

    Marathe, Amar R; Lawhern, Vernon J; Wu, Dongrui; Slayback, David; Lance, Brent J

    2016-03-01

    The application space for brain-computer interface (BCI) technologies is rapidly expanding with improvements in technology. However, most real-time BCIs require extensive individualized calibration prior to use, and systems often have to be recalibrated to account for changes in the neural signals due to a variety of factors including changes in human state, the surrounding environment, and task conditions. Novel approaches to reduce calibration time or effort will dramatically improve the usability of BCI systems. Active Learning (AL) is an iterative semi-supervised learning technique for learning in situations in which data may be abundant, but labels for the data are difficult or expensive to obtain. In this paper, we apply AL to a simulated BCI system for target identification using data from a rapid serial visual presentation (RSVP) paradigm to minimize the amount of training samples needed to initially calibrate a neural classifier. Our results show AL can produce similar overall classification accuracy with significantly less labeled data (in some cases less than 20%) when compared to alternative calibration approaches. In fact, AL classification performance matches performance of 10-fold cross-validation (CV) in over 70% of subjects when training with less than 50% of the data. To our knowledge, this is the first work to demonstrate the use of AL for offline electroencephalography (EEG) calibration in a simulated BCI paradigm. While AL itself is not often amenable for use in real-time systems, this work opens the door to alternative AL-like systems that are more amenable for BCI applications and thus enables future efforts for developing highly adaptive BCI systems.
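The active-learning loop described in the record above reduces to a core selection rule, uncertainty sampling: query labels only for the trials the current classifier is least sure about. A toy sketch of that rule; the logistic model and the pool of scores are hypothetical stand-ins for an EEG classifier's outputs, not the authors' pipeline:

```python
import math

def uncertainty_sampling(pool, predict_proba, n_queries):
    """Pick the n_queries pool items whose predicted target
    probability is closest to 0.5 (i.e., maximum uncertainty)."""
    return sorted(pool, key=lambda x: abs(predict_proba(x) - 0.5))[:n_queries]

# hypothetical classifier scores for five unlabeled RSVP epochs,
# squashed through a logistic to give target probabilities
proba = lambda score: 1.0 / (1.0 + math.exp(-score))
pool = [-3.0, -0.2, 0.1, 2.5, 4.0]
to_label = uncertainty_sampling(pool, proba, 2)  # the two ambiguous epochs
```

Iterating this rule (label the queried epochs, retrain, re-score the pool) is what lets the calibration reach cross-validation-level accuracy with a fraction of the labels.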

  10. Training improves reading speed in peripheral vision: is it due to attention?

    PubMed

    Lee, Hye-Won; Kwon, Miyoung; Legge, Gordon E; Gefroh, Joshua J

    2010-06-01

    Previous research has shown that perceptual training in peripheral vision, using a letter-recognition task, increases reading speed and letter recognition (S. T. L. Chung, G. E. Legge, & S. H. Cheung, 2004). We tested the hypothesis that enhanced deployment of spatial attention to peripheral vision explains this training effect. Subjects were pre- and post-tested with 3 tasks at 10° above and below fixation: RSVP reading speed, trigram letter recognition (used to construct visual-span profiles), and deployment of spatial attention (measured as the benefit of a pre-cue for target position in a lexical-decision task). Groups of five normally sighted young adults received 4 days of trigram letter-recognition training in upper or lower visual fields, or central vision. A control group received no training. Our measure of deployment of spatial attention revealed visual-field anisotropies: better deployment of attention in the lower field than the upper, and in the lower-right quadrant compared with the other three quadrants. All subject groups exhibited slight improvement in deployment of spatial attention to peripheral vision in the post-test, but this improvement was not correlated with training-related increases in reading speed and the size of visual-span profiles. Our results indicate that improved deployment of spatial attention to peripheral vision does not account for improved reading speed and letter recognition in peripheral vision.

  11. On the Relationship Between Attention Processing and P300-Based Brain Computer Interface Control in Amyotrophic Lateral Sclerosis

    PubMed Central

    Riccio, Angela; Schettini, Francesca; Simione, Luca; Pizzimenti, Alessia; Inghilleri, Maurizio; Olivetti-Belardinelli, Marta; Mattia, Donatella; Cincotti, Febo

    2018-01-01

    Our objective was to investigate the capacity to control a P3-based brain-computer interface (BCI) device for communication, and the related (temporal) attention processing, in a sample of amyotrophic lateral sclerosis (ALS) patients compared with healthy subjects. The ultimate goal was to corroborate the role of cognitive mechanisms in event-related potential (ERP)-based BCI control in ALS patients. Furthermore, possible differences in these attentional mechanisms between the two groups were investigated in order to unveil alterations associated with the ALS condition. Thirteen ALS patients and 13 healthy volunteers matched for age and years of education underwent a P3-speller BCI task and a rapid serial visual presentation (RSVP) task. The RSVP task was performed in order to screen participants' temporal pattern of attentional resource allocation, namely: (i) the temporal attentional filtering capacity (scored as T1%); and (ii) the capability to adequately update the attentive filter in the temporal dynamics of attentional selection (scored as T2%). For the P3-speller BCI task, the online accuracy and information transfer rate (ITR) were obtained, along with the centroid latency and mean amplitude of the N200 and P300. No significant differences emerged between ALS patients and controls with regard to online accuracy (p = 0.13). In contrast, performance in controlling the P3-speller, expressed as ITR values (calculated offline), was compromised in ALS patients (p < 0.05), with a delay in P3 latency when processing BCI stimuli compared with the control group (p < 0.01). Furthermore, the temporal aspect of attentional filtering, which was related to BCI control (r = 0.51; p < 0.05) and to P3 wave amplitude (r = 0.63; p < 0.05), was also altered in ALS patients (p = 0.01). These findings provide a basis for designing BCI systems that take into account the cognitive characteristics of candidate users who need a BCI for communication. PMID:29892218
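    The abstract does not spell out how ITR was computed; the standard Wolpaw definition, the usual choice for P3-speller studies, combines the number of selectable classes, the selection accuracy, and the time per selection. The parameter values below are assumptions for illustration, not figures from the study.

```python
import math

def wolpaw_itr(n_classes, accuracy, seconds_per_selection):
    """Information transfer rate in bits per minute (Wolpaw formula)."""
    p, n = accuracy, n_classes
    if p <= 1.0 / n:
        return 0.0                      # at or below chance: no information
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1.0 - p) * math.log2((1.0 - p) / (n - 1))
    return bits * (60.0 / seconds_per_selection)
```

    For example, a 36-class speller at 90% accuracy carries roughly 4.2 bits per selection, so slower or less accurate selections (as reported for the ALS group) directly depress ITR even when accuracy differences alone are not significant.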

  12. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface

    DOE PAGES

    Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; ...

    2016-09-22

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation (RSVP) task. For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step forward in the overall goal of achieving a practical user-independent BCI system.

  13. Perceptual and Cognitive Factors Imposing “Speed Limits” on Reading Rate: A Study with the Rapid Serial Visual Presentation

    PubMed Central

    Spinelli, Donatella; Zoccolotti, Pierluigi; De Luca, Maria; Martelli, Marialuisa

    2016-01-01

    Adults read at high speed, but estimates of their reading rate vary greatly, i.e., from 100 to 1500 words per minute (wpm). This discrepancy is likely due to different recording methods and to the different perceptual and cognitive processes involved in specific test conditions. The present study investigated the origins of these notable differences in RSVP reading rate (RR). In six experiments we investigated the role of many different perceptual and cognitive variables. The presence of a mask caused a steep decline in reading rate, with an estimated masking cost of about 200 wpm. When the decoding process was isolated, RR approached values of 1200 wpm. When the number of stimuli exceeded the short-term memory span, RR decreased to 800 wpm. Semantic context contributed to reading speed only by a factor of 1.4. Finally, eye movements imposed an upper limit on RR (around 300 wpm). Overall, the data indicate a speed limit of 300 wpm, which corresponds to the time needed for eye-movement execution, i.e., the most time-consuming mechanism. The results reconcile differences in reading rates reported by different laboratories and thus provide suggestions for targeting different components of reading rate. PMID:27088226
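    The rates quoted above map directly onto per-word exposure durations via wpm = 60000 / ms-per-word; this is plain arithmetic, not a method from the paper. The 1200-wpm decoding ceiling corresponds to 50 ms per word, and the 300-wpm eye-movement limit to 200 ms per word:

```python
def rsvp_wpm(ms_per_word):
    """Reading rate implied by an RSVP stream showing one word per exposure."""
    return 60000.0 / ms_per_word

def exposure_ms(wpm):
    """Inverse: per-word exposure duration implied by a reading rate."""
    return 60000.0 / wpm
```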

  14. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation (RSVP) task. For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step forward in the overall goal of achieving a practical user-independent BCI system.

  15. Merging Air Quality and Public Health Decision Support Systems

    NASA Astrophysics Data System (ADS)

    Hudspeth, W. B.; Bales, C. L.

    2003-12-01

    The New Mexico Air Quality Mapper (NMAQM) is a Web-based, open source GIS prototype application that Earth Data Analysis Center is developing under a NASA Cooperative Agreement. NMAQM enhances and extends existing data and imagery delivery systems with an existing Public Health system called the Rapid Syndrome Validation Project (RSVP). RSVP is a decision support system operating in several medical and public health arenas. It is evolving to ingest remote sensing data as input to provide early warning of human health threats, especially those related to anthropogenic atmospheric pollutants and airborne pathogens. The NMAQM project applies measurements of these atmospheric pollutants, derived from both remotely sensed data as well as from in-situ air quality networks, to both forecasting and retrospective analyses that influence human respiratory health. NMAQM provides a user-friendly interface for visualizing and interpreting environmentally-linked epidemiological phenomena. The results, and the systems built to provide the information, will be applicable not only to decision-makers in the public health realm, but also to air quality organizations, demographers, community planners, and other professionals in information technology, and social and engineering sciences. As an accessible and interactive mapping and analysis application, it allows environment and health personnel to study historic data for hypothesis generation and trend analysis, and then, potentially, to predict air quality conditions from daily data acquisitions. Additional spin-off benefits to such users include the identification of gaps in the distribution of in-situ monitoring stations, the dissemination of air quality data to the public, and the discrimination of local vs. more regional sources of air pollutants that may bear on decisions relating to public health and public policy.

  16. Attentional capture and engagement during the attentional blink: A "camera" metaphor of attention.

    PubMed

    Zivony, Alon; Lamy, Dominique

    2016-11-01

    Identification of a target is impaired when it follows a previous target within 500 ms, suggesting that our attentional system suffers from severe temporal limitations. Although control-disruption theories posit that such impairment, known as the attentional blink (AB), reflects a difficulty in matching incoming information with the current attentional set, disrupted-engagement theories propose that it reflects a delay in later processes leading to transient enhancement of potential targets. Here, we used a variant of the contingent-capture rapid serial visual presentation (RSVP) paradigm (Folk, Ester, & Troemel, 2009) to adjudicate these competing accounts. Our results show that a salient distractor that shares the target color captures attention to the same extent whether it appears within or outside the blink, thereby invalidating the notion that control over the attentional set is compromised during the blink. In addition, our results show that during the blink, not the attention-capturing object itself but the item immediately following it, is selected, indicating that the AB manifests as a delay between attentional capture and attentional engagement. We therefore conclude that attentional capture and attentional engagement can be dissociated as separate stages of attentional selection. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. The roles of selective attention and desensitization in the association between video gameplay and aggression: An ERP investigation.

    PubMed

    Jabr, Mejdy M; Denke, Greg; Rawls, Eric; Lamm, Connie

    2018-04-01

    A number of studies have indicated that violent video gameplay is associated with higher levels of aggression and that desensitization and selective attention to violent content may contribute to this association. Utilizing an emotionally-charged rapid serial visual presentation (RSVP) task, the current study used two event-related potentials (ERPs) - the N1 and P3 - that have been associated with selective attention and desensitization as neurocognitive mechanisms potentially underlying the connection between gameplay and higher levels of aggression. Results indicated that video game players and non-players differed in N1 and P3 activation when engaged with emotionally-charged imagery. Additionally, P3 amplitudes moderated the association between video gameplay and aggression, indicating that players who display small P3 amplitudes also showed heightened levels of aggression. Follow-up moderational analyses revealed that individuals who play games for many hours and show more negative N1 amplitudes show smaller P3 activation. Together, our results suggest that selective attention to violent content and desensitization both play key roles in the association between video gameplay and aggression. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Word and face processing engage overlapping distributed networks: Evidence from RSVP and EEG investigations.

    PubMed

    Robinson, Amanda K; Plaut, David C; Behrmann, Marlene

    2017-07-01

    Words and faces have vastly different visual properties, but increasing evidence suggests that word and face processing engage overlapping distributed networks. For instance, fMRI studies have shown overlapping activity for face and word processing in the fusiform gyrus despite well-characterized lateralization of these objects to the left and right hemispheres, respectively. To investigate whether face and word perception influences perception of the other stimulus class and elucidate the mechanisms underlying such interactions, we presented images using rapid serial visual presentations. Across 3 experiments, participants discriminated 2 face, word, and glasses targets (T1 and T2) embedded in a stream of images. As expected, T2 discrimination was impaired when it followed T1 by 200 to 300 ms relative to longer intertarget lags, the so-called attentional blink. Interestingly, T2 discrimination accuracy was significantly reduced at short intertarget lags when a face was followed by a word (face-word) compared with glasses-word and word-word combinations, indicating that face processing interfered with word perception. The reverse effect was not observed; that is, word-face performance was no different than the other object combinations. EEG results indicated the left N170 to T1 was correlated with the word decrement for face-word trials, but not for other object combinations. Taken together, the results suggest face processing interferes with word processing, providing evidence for overlapping neural mechanisms of these 2 object types. Furthermore, asymmetrical face-word interference points to greater overlap of face and word representations in the left than the right hemisphere. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Words with and without internal structure: what determines the nature of orthographic and morphological processing?

    PubMed Central

    Velan, Hadas; Frost, Ram

    2010-01-01

    Recent studies suggest that basic effects which are markers of visual word recognition in Indo-European languages cannot be obtained in Hebrew or in Arabic. Although Hebrew has an alphabetic writing system, just like English, French, or Spanish, a series of studies consistently suggested that simple form-orthographic priming, or letter-transposition priming, are not found in Hebrew. In four experiments, we tested the hypothesis that this is due to the fact that Semitic words have an underlying structure that constrains the possible alignment of phonemes and their respective letters. The experiments contrasted typical Semitic words, which are root-derived, with Hebrew words of non-Semitic origin, which are morphologically simple and resemble base words in European languages. Using RSVP, TL priming, and form-priming manipulations, we show that Hebrew readers process morphologically simple Hebrew words much as they process English words. These words indeed reveal the typical form-priming and TL priming effects reported in European languages. In contrast, words with internal structure are processed differently, and require a different code for lexical access. We discuss the implications of these findings for current models of visual word recognition. PMID:21163472

  20. Neural Correlates of Emotion Processing in Word Detection Task

    PubMed Central

    Zhao, Wenshuang; Chen, Liang; Zhou, Chunxia; Luo, Wenbo

    2018-01-01

    In our previous study, we have proposed a three-stage model of emotion processing; in the current study, we investigated whether the ERP component may be different when the emotional content of stimuli is task-irrelevant. In this study, a dual-target rapid serial visual presentation (RSVP) task was used to investigate how the emotional content of words modulates the time course of neural dynamics. Participants performed the task in which affectively positive, negative, and neutral adjectives were rapidly presented while event-related potentials (ERPs) were recorded from 18 undergraduates. The N170 component was enhanced for negative words relative to positive and neutral words. This indicates that automatic processing of negative information occurred at an early perceptual processing stage. In addition, later brain potentials such as the late positive potential (LPP) were only enhanced for positive words in the 480–580-ms post-stimulus window, while a relatively large amplitude signal was elicited by positive and negative words between 580 and 680 ms. These results indicate that different types of emotional content are processed distinctly at different time windows of the LPP, which is in contrast with the results of studies on task-relevant emotional processing. More generally, these findings suggest that a negativity bias to negative words remains to be found in emotion-irrelevant tasks, and that the LPP component reflects dynamic separation of emotion valence. PMID:29887824

  1. How humans search for targets through time: A review of data and theory from the attentional blink

    PubMed Central

    Dux, Paul E.; Marois, René

    2009-01-01

    Under conditions of rapid serial visual presentation (RSVP), subjects display a reduced ability to report the second of two targets (Target 2; T2) in a stream of distractors if it appears within 200–500 ms of Target 1 (T1). This effect, known as the attentional blink (AB), has been central in characterizing the limits of humans’ ability to consciously perceive stimuli distributed across time. Here we review theoretical accounts of the AB and examine how they explain key findings in the literature. We conclude that the AB arises from attentional demands of T1 for selection, working memory encoding, episodic registration and response selection, which prevents this high-level central resource from being applied to T2 at short T1–T2 lags. T1 processing also transiently impairs the re-deployment of these attentional resources to subsequent targets, and the inhibition of distractors that appear in close temporal proximity to T2. While these findings are consistent with a multi-factorial account of the AB, they can also be largely explained by assuming that the activation of these multiple processes depends on a common capacity-limited attentional process to select behaviorally relevant events presented amongst temporally distributed distractors. Thus, at its core, the attentional blink may ultimately reveal the temporal limits of the deployment of selective attention. PMID:19933555

  2. Can attentional control settings be maintained for two color-location conjunctions? Evidence from an RSVP task.

    PubMed

    Irons, Jessica L; Remington, Roger W

    2013-07-01

    Previous investigations of the ability to maintain separate attentional control settings for different spatial locations have relied principally on a go/no-go spatial-cueing paradigm. The results have suggested that control of attention is accomplished only late in processing. However, the go/no-go task does not provide strong incentives to withhold attention from irrelevant color-location conjunctions. We used a modified version of the task in which failing to adopt multiple control settings would be detrimental to performance. Two RSVP streams of colored letters appeared to the left and right of fixation. Participants searched for targets that were a conjunction of color and location, so that the target color for one stream acted as a distractor when presented in the opposite stream. Distractors that did not match the target conjunctions nevertheless captured attention and interfered with performance. This was the case even when the target conjunctions were previewed early in the trial prior to the target (Exp. 2). However, distractor interference was reduced when the upcoming distractor was previewed early on in the trial (Exp. 3). Attentional selection of targets by color-location conjunctions may be effective if facilitative attentional sets are accompanied by the top-down inhibition of irrelevant items.

  3. Painful faces-induced attentional blink modulated by top–down and bottom–up mechanisms

    PubMed Central

    Zheng, Chun; Wang, Jin-Yan; Luo, Fei

    2015-01-01

    Pain-related stimuli can capture attention in an automatic (bottom–up) or intentional (top–down) fashion. Previous studies have examined attentional capture by pain-related information using spatial attention paradigms that involve mainly a bottom–up mechanism. In the current study, we investigated the pain information-induced attentional blink (AB) using a rapid serial visual presentation (RSVP) task, and compared the effects of task-irrelevant and task-relevant pain distractors. Relationships between accuracy of target identification and individual traits (i.e., empathy and catastrophizing thinking about pain) were also examined. The results demonstrated that task-relevant painful faces had a significant pain information-induced AB effect, whereas task-irrelevant faces showed a near-significant trend of this effect, supporting the notion that pain-related stimuli can influence the temporal dynamics of attention. Furthermore, we found a significant negative correlation between response accuracy and pain catastrophizing score in task-relevant trials. These findings suggest that active scanning of environmental information related to pain produces greater deficits in cognition than does unintentional attention toward pain, which may represent the different ways in which healthy individuals and patients with chronic pain process pain-relevant information. These results may provide insight into the understanding of maladaptive attentional processing in patients with chronic pain. PMID:26082731

  4. 45 CFR 2552.12 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... station that identifies project requirements, working relationships and mutual responsibilities. (m) National Senior Service Corps (NSSC). The collective name for the Foster Grandparent Program (FGP), the Retired and Senior Volunteer Program (RSVP), the Senior Companion Program (SCP), and Demonstration...

  5. Renewable Energy for Rural Health Clinics (Energia Removable para Centros de Salud Rurales)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jimenez, A. C.; Olson, K.

    This guide provides a broad understanding of the technical, social, and organizational aspects of health clinic electrification, especially through the use of renewable energy sources. It is intended to be used primarily by decision makers within governments or private agencies to accurately assess their health clinic's needs, select appropriate and cost-effective technologies to meet those needs, and to put into place effective infrastructure to install and maintain the hardware. This is the first in a series of rural applications guidebooks that the National Renewable Energy Laboratory (NREL) Village Power Program is commissioning to couple commercial renewable systems with rural applications. The guidebooks are complemented by NREL's Village Power Program's development activities, international pilot projects, and visiting professionals program. For more information on the NREL Village Power Program, visit the Renewables for Sustainable Village Power web site at http://www.rsvp.nrel.gov/rsvp/.

  6. Fearful, but not angry, expressions diffuse attention to peripheral targets in an attentional blink paradigm.

    PubMed

    Taylor, James M; Whalen, Paul J

    2014-06-01

    We previously demonstrated that fearful facial expressions implicitly facilitate memory for contextual events whereas angry facial expressions do not. The current study sought to more directly address the implicit effect of fearful expressions on attention for contextual events within a classic attentional paradigm (i.e., the attentional blink) in which memory is tested on a trial-by-trial basis, thereby providing subjects with a clear, explicit attentional strategy. Neutral faces of a single gender were presented via rapid serial visual presentation (RSVP) while bordered by four gray pound signs. Participants were told to watch for a gender change within the sequence (T1). It is critical to note that the T1 face displayed a neutral, fearful, or angry expression. Subjects were then told to detect a color change (i.e., gray to green; T2) at one of the four peripheral pound sign locations appearing after T1. This T2 color change could appear at one of six temporal positions. Complementing previous attentional blink paradigms, participants were told to respond via button press immediately when a T2 target was detected. We found that, compared with the neutral T1 faces, fearful faces significantly increased target detection ability at four of the six temporal locations (all ps < .05) whereas angry expressions did not. The results of this study demonstrate that fearful facial expressions can uniquely and implicitly enhance environmental monitoring above and beyond explicit attentional effects related to task instructions.

  7. Oxytocin enhances attentional bias for neutral and positive expression faces in individuals with higher autistic traits.

    PubMed

    Xu, Lei; Ma, Xiaole; Zhao, Weihua; Luo, Lizhu; Yao, Shuxia; Kendrick, Keith M

    2015-12-01

    There is considerable interest in the potential therapeutic role of the neuropeptide oxytocin in altering attentional bias towards emotional social stimuli in psychiatric disorders. However, it is still unclear whether oxytocin primarily influences attention towards positive or negative valence social stimuli. Here, in a double-blind, placebo controlled, between-subject design experiment in 60 healthy male subjects, we have used the highly sensitive dual-target rapid serial visual presentation (RSVP) paradigm to investigate whether intranasal oxytocin (40 IU) treatment alters attentional bias for emotional faces. Results show that oxytocin improved recognition accuracy of neutral and happy expression faces presented in the second target position (T2) during the period of reduced attentional capacity following prior presentation of a first neutral face target (T1), but had no effect on recognition of negative expression faces (angry, fearful, sad). Oxytocin also had no effect on recognition of non-social stimuli (digits) in this task. Recognition accuracy for neutral faces at T2 was negatively associated with autism spectrum quotient (ASQ) scores in the placebo group, and oxytocin's facilitatory effects were restricted to a sub-group of subjects with higher ASQ scores. Our results therefore indicate that oxytocin primarily enhances the allocation of attentional resources towards faces expressing neutral or positive emotion and does not influence attention towards negative-emotion faces or non-social stimuli. This effect of oxytocin is strongest in healthy individuals with higher autistic trait scores, thereby providing further support for its potential therapeutic use in autism spectrum disorder. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. 45 CFR 2553.12 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... THE RETIRED AND SENIOR VOLUNTEER PROGRAM General § 2553.12 Definitions. (a) Act. The Domestic... by the RSVP project sponsor and the volunteer station that identifies project requirements, working relationships and mutual responsibilities. (i) National Senior Service Corps (NSSC). The collective name for the...

  9. 77 FR 19661 - City of Broken Bow, OK; Notice of Technical Conference

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-02

    ... for the Broken Bow Re-Regulation Dam Hydroelectric Project No. 12470. This conference will be held on... Liberty at (202) 502-6862 or [email protected]gov by April 5, 2012, to RSVP. Nathaniel J. Davis, Sr...

  10. Out of the corner of my eye: Foveal semantic load modulates parafoveal processing in reading.

    PubMed

    Payne, Brennan R; Stites, Mallory C; Federmeier, Kara D

    2016-11-01

    In 2 experiments, we examined the impact of foveal semantic expectancy and congruity on parafoveal word processing during reading. Experiment 1 utilized an eye-tracking gaze-contingent display change paradigm, and Experiment 2 measured event-related brain potentials (ERPs) in a modified flanker rapid serial visual presentation (RSVP) paradigm. Eye-tracking and ERP data converged to reveal graded effects of foveal load on parafoveal processing. In Experiment 1, when word n was highly expected, and thus foveal load was low, there was a large parafoveal preview benefit to word n + 1. When word n was unexpected but still plausible, preview benefits to n + 1 were reduced in magnitude, and when word n was semantically incongruent, the preview benefit to n + 1 was unreliable in early pass measures. In Experiment 2, ERPs indicated that when word n was expected, and thus foveal load was low, readers successfully discriminated between valid and orthographically invalid previews during parafoveal perception. However, when word n was unexpected, parafoveal processing of n + 1 was reduced, and it was eliminated when word n was semantically incongruent. Taken together, these findings suggest that sentential context modulates the allocation of attention in the parafovea, such that covert allocation of attention to parafoveal processing is disrupted when foveal words are inconsistent with expectations based on various contextual constraints. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Real-Time Multimedia on the Internet: What Will It Take?

    ERIC Educational Resources Information Center

    Sodergren, Mike

    1998-01-01

    Considers the requirements for real-time, interactive multimedia over the Internet. Topics include demand for interactivity; new pricing models for Internet service; knowledgeable suppliers; consumer education on standards; enhanced infrastructure, including bandwidth; and new technology, including RSVP, and end-to-end Internet-working protocol.…

  12. 76 FR 20243 - Retired and Senior Volunteer Program Amendments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-12

    ... CORPORATION FOR NATIONAL AND COMMUNITY SERVICE 45 CFR Part 2553 RIN 3045-AA52 Retired and Senior... competitive grantmaking process for the Retired and Senior Volunteer Program (RSVP). The proposed rule... with expertise in senior service and aging, site inspections, as appropriate, and evaluations of...

  13. 77 FR 63300 - Ultra-Deepwater Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-16

    ...-deepwater architecture and technology to the Secretary of Energy and provide comments and recommendations... business. Individuals who would like to attend must RSVP by email at: [email protected] no later... your request for an oral statement at least three business days prior to the meeting, and reasonable...

  14. 30 CFR 203.86 - What is in a G&G report?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... establishing reservoir porosity or labeled points showing values used in calculating reservoir porosity such as... BOE) and oil fraction for your field computed by the resource module of our RSVP model; (2) A description of anticipated hydrocarbon quality (i.e., specific gravity); and (3) The ranges within the...

  15. 30 CFR 203.86 - What is in a G&G report?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... establishing reservoir porosity or labeled points showing values used in calculating reservoir porosity such as... BOE) and oil fraction for your field computed by the resource module of our RSVP model; (2) A description of anticipated hydrocarbon quality (i.e., specific gravity); and (3) The ranges within the...

  16. 30 CFR 203.86 - What is in a G&G report?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... establishing reservoir porosity or labeled points showing values used in calculating reservoir porosity such as... BOE) and oil fraction for your field computed by the resource module of our RSVP model; (2) A description of anticipated hydrocarbon quality (i.e., specific gravity); and (3) The ranges within the...

  17. 75 FR 65595 - Retired and Senior Volunteer Program Amendments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-26

    ... CORPORATION FOR NATIONAL AND COMMUNITY SERVICE 45 CFR Part 2553 RIN 3045-AA52 Retired and Senior... recipients for the Retired and Senior Volunteer Program (``RSVP'') beginning in fiscal year 2013. Section 201... Senior Volunteer Program. The competitive process, as directed by statute, will include the use of peer...

  18. 45 CFR 2553.23 - What are a sponsor's program responsibilities?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the project service area. (g) Conduct an annual assessment of the accomplishments and impact of the... RSVP resources to have a positive impact on critical human and social needs within the project service area. (b) Assess in collaboration with other community organizations or utilize existing assessments of...

  19. 45 CFR 2553.23 - What are a sponsor's program responsibilities?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the project service area. (g) Conduct an annual assessment of the accomplishments and impact of the... RSVP resources to have a positive impact on critical human and social needs within the project service area. (b) Assess in collaboration with other community organizations or utilize existing assessments of...

  20. 45 CFR 2553.23 - What are a sponsor's program responsibilities?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... the project service area. (g) Conduct an annual assessment of the accomplishments and impact of the... RSVP resources to have a positive impact on critical human and social needs within the project service area. (b) Assess in collaboration with other community organizations or utilize existing assessments of...

  1. 45 CFR 2553.23 - What are a sponsor's program responsibilities?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the project service area. (g) Conduct an annual assessment of the accomplishments and impact of the... RSVP resources to have a positive impact on critical human and social needs within the project service area. (b) Assess in collaboration with other community organizations or utilize existing assessments of...

  2. 75 FR 2864 - National Biodefense Science Board: Notification of Public Teleconference

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-19

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Biodefense Science Board: Notification of Public...) is hereby giving notice that the National Biodefense Science Board (NBSB) will hold a teleconference... wish to participate in the public comment session should e-mail [email protected] to RSVP. DATES: The...

  3. 78 FR 36533 - Information Collection; Submission for OMB Review, Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-18

    ... appropriate next step for RSVP projects. The data submitted at the mid-point each year will also allow CNCS to..., electronic, mechanical, or other technological collection techniques or other forms of information technology... project management responsibilities. Four of the 19 comments specifically noted that grantee time is...

  4. 78 FR 70931 - Ultra-Deepwater Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-27

    ... ultra-deepwater architecture and technology to the Secretary of Energy and provide comments and... business. Individuals who would like to attend must RSVP to [email protected] no later than 5:00 p... an oral statement at least three business days prior to the meeting, and reasonable provisions will...

  5. 78 FR 53741 - Ultra-Deepwater Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-30

    ... architecture and technology to the Secretary of Energy and provide comments and recommendations and priorities... meeting for the orderly conduct of business. Individuals who would like to attend must RSVP by email to... telephone number listed above. You must make your request for an oral statement at least three business days...

  6. 78 FR 58292 - Ultra-Deepwater Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-23

    ...-deepwater architecture and technology to the Secretary of Energy and provide comments and recommendations... the orderly conduct of business. Individuals who would like to attend must RSVP by email to: Ultra... number listed above. You must make your request for an oral statement at least three business days prior...

  7. Implied reading direction and prioritization of letter encoding.

    PubMed

    Holcombe, Alex O; Nguyen, Elizabeth H L; Goodbourn, Patrick T

    2017-10-01

    Capacity limits hinder processing of multiple stimuli, contributing to poorer performance for identifying two briefly presented letters than for identifying a single letter. Higher accuracy is typically found for identifying the letter on the left, which has been attributed to a right-hemisphere dominance for selective attention. Here, we use rapid serial visual presentation (RSVP) of letters in two locations at once. The letters to be identified are simultaneous and cued by rings. In the first experiment, we manipulated implied reading direction by rotating or mirror-reversing the letters to face to the left rather than to the right. The left-side performance advantage was eliminated. In the second experiment, letters were positioned above and below fixation, oriented such that they appeared to face downward (90° clockwise rotation) or upward (90° counterclockwise rotation). Again consistent with an effect of implied reading direction, performance was better for the top position in the downward condition, but not in the upward condition. In both experiments, mixture modeling of participants' report errors revealed that attentional sampling from the two locations was approximately simultaneous, ruling out the theory that the letter on one side was processed first, followed by a shift of attention to sample the other letter. Thus, the orientation of the letters apparently controls not when the letters are sampled from the scene, but rather the dynamics of a subsequent process, such as tokenization or memory consolidation. Implied reading direction appears to determine the letter prioritized at a high-level processing bottleneck. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. 45 CFR 2553.11 - What is the Retired and Senior Volunteer Program?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false What is the Retired and Senior Volunteer Program... FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM General § 2553.11 What is the Retired and Senior Volunteer Program? The Retired and Senior Volunteer Program (RSVP) provides...

  9. 45 CFR 2553.11 - What is the Retired and Senior Volunteer Program?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 4 2013-10-01 2013-10-01 false What is the Retired and Senior Volunteer Program... FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM General § 2553.11 What is the Retired and Senior Volunteer Program? The Retired and Senior Volunteer Program (RSVP) provides...

  10. 45 CFR 2553.11 - What is the Retired and Senior Volunteer Program?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 4 2014-10-01 2014-10-01 false What is the Retired and Senior Volunteer Program... FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM General § 2553.11 What is the Retired and Senior Volunteer Program? The Retired and Senior Volunteer Program (RSVP) provides...

  11. 45 CFR 2553.11 - What is the Retired and Senior Volunteer Program?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 4 2012-10-01 2012-10-01 false What is the Retired and Senior Volunteer Program... FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM General § 2553.11 What is the Retired and Senior Volunteer Program? The Retired and Senior Volunteer Program (RSVP) provides...

  12. 45 CFR 2553.11 - What is the Retired and Senior Volunteer Program?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 4 2011-10-01 2011-10-01 false What is the Retired and Senior Volunteer Program... FOR NATIONAL AND COMMUNITY SERVICE THE RETIRED AND SENIOR VOLUNTEER PROGRAM General § 2553.11 What is the Retired and Senior Volunteer Program? The Retired and Senior Volunteer Program (RSVP) provides...

  13. 77 FR 60922 - Criminal History Check Requirements for AmeriCorps State/National, Senior Companions, Foster...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-05

    ..., including RSVP, LSA, Non-profit Capacity Building, and the Social Innovation Fund (SIF) grant programs... programs including Campuses of Service, Serve America Fellows, Encore Fellows, Silver Scholars, the Social Innovation Fund, and activities funded under programs such as the Volunteer Generation Fund. The final rule...

  14. Quantifiers More or Less Quantify On-Line: ERP Evidence for Partial Incremental Interpretation

    ERIC Educational Resources Information Center

    Urbach, Thomas P.; Kutas, Marta

    2010-01-01

    Event-related brain potentials were recorded during RSVP reading to test the hypothesis that quantifier expressions are incrementally interpreted fully and immediately. In sentences tapping general knowledge ("Farmers grow crops/worms as their primary source of income"), Experiment 1 found larger N400s for atypical ("worms") than typical objects…

  15. The Transgenic RNAi Project at Harvard Medical School: Resources and Validation

    PubMed Central

    Perkins, Lizabeth A.; Holderbaum, Laura; Tao, Rong; Hu, Yanhui; Sopko, Richelle; McCall, Kim; Yang-Zhou, Donghui; Flockhart, Ian; Binari, Richard; Shim, Hye-Seok; Miller, Audrey; Housden, Amy; Foos, Marianna; Randkelv, Sakara; Kelley, Colleen; Namgyal, Pema; Villalta, Christians; Liu, Lu-Ping; Jiang, Xia; Huan-Huan, Qiao; Wang, Xia; Fujiyama, Asao; Toyoda, Atsushi; Ayers, Kathleen; Blum, Allison; Czech, Benjamin; Neumuller, Ralph; Yan, Dong; Cavallaro, Amanda; Hibbard, Karen; Hall, Don; Cooley, Lynn; Hannon, Gregory J.; Lehmann, Ruth; Parks, Annette; Mohr, Stephanie E.; Ueda, Ryu; Kondo, Shu; Ni, Jian-Quan; Perrimon, Norbert

    2015-01-01

    To facilitate large-scale functional studies in Drosophila, the Drosophila Transgenic RNAi Project (TRiP) at Harvard Medical School (HMS) was established along with several goals: developing efficient vectors for RNAi that work in all tissues, generating a genome-scale collection of RNAi stocks with input from the community, distributing the lines as they are generated through existing stock centers, validating as many lines as possible using RT–qPCR and phenotypic analyses, and developing tools and web resources for identifying RNAi lines and retrieving existing information on their quality. With these goals in mind, here we describe in detail the various tools we developed and the status of the collection, which is currently composed of 11,491 lines and covering 71% of Drosophila genes. Data on the characterization of the lines either by RT–qPCR or phenotype is available on a dedicated website, the RNAi Stock Validation and Phenotypes Project (RSVP, http://www.flyrnai.org/RSVP.html), and stocks are available from three stock centers, the Bloomington Drosophila Stock Center (United States), National Institute of Genetics (Japan), and TsingHua Fly Center (China). PMID:26320097

  16. Multi-scale Modeling of Power Plant Plume Emissions and Comparisons with Observations

    NASA Astrophysics Data System (ADS)

    Costigan, K. R.; Lee, S.; Reisner, J.; Dubey, M. K.; Love, S. P.; Henderson, B. G.; Chylek, P.

    2011-12-01

    The Remote Sensing Verification Project (RSVP) test-bed located in the Four Corners region of Arizona, Utah, Colorado, and New Mexico offers a unique opportunity to develop new approaches for estimating emissions of CO2. Two major power plants located in this area produce very large signals of co-emitted CO2 and NO2 in this rural region. In addition to the Environmental Protection Agency (EPA) maintaining Continuous Emissions Monitoring Systems (CEMS) on each of the power plant stacks, the RSVP program has deployed an array of in-situ and remote sensing instruments, which provide both point and integrated measurements. To aid in the synthesis and interpretation of the measurements, a multi-scale atmospheric modeling approach is implemented, using two atmospheric numerical models: the Weather Research and Forecasting Model with chemistry (WRF-Chem; Grell et al., 2005) and the HIGRAD model (Reisner et al., 2003). The high fidelity HIGRAD model incorporates a multi-phase Lagrangian particle based approach to track individual chemical species of stack plumes at ultra-high resolution, using an adaptive mesh. It is particularly suited to model buoyancy effects and entrainment processes at the edges of the power plant plumes. WRF-Chem is a community model that has been applied to a number of air quality problems and offers several physical and chemical schemes that can be used to model the transport and chemical transformation of the anthropogenic plumes out of the local region. Multiple nested grids employed in this study allow the model to incorporate atmospheric variability ranging from synoptic scales to micro-scales (~200 m), while including locally developed flows influenced by the nearby complex terrain of the San Juan Mountains. The simulated local atmospheric dynamics are provided to force the HIGRAD model, which links mesoscale atmospheric variability to the small-scale simulation of the power plant plumes. 
    We will discuss how these two models are applied and integrated in this study, and we will describe the incorporation of the real-time CEMS measurements as input to the models. We will compare the model simulations to the RSVP in-situ, column, and satellite measurements for selected periods. More information on the RSVP Fourier Transform Spectrometer (FTS) measurements can be found at https://tccon-wiki.caltech.edu/Sites/Four_Corners . References: Grell, G.A., S.E. Peckham, R. Schmitz, S.A. McKeen, G. Frost, W.C. Skamarock, and B. Eder, 2005: Fully coupled online chemistry within the WRF model. Atmos. Environ., 39, 6957-6975. Reisner, J., A. Wyszogrodzki, V. Mousseau, and D. Knoll, 2003: An efficient physics-based preconditioner for the fully implicit solution of small-scale thermally driven atmospheric flows. J. Comput. Phys., 189, 30-44.

  17. 77 FR 68780 - Meeting Notice for the President's Advisory Council on Faith-Based and Neighborhood Partnerships

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-16

    ... event. Please RSVP to Ben O'Dell at [email protected] . Status: Open to the public, limited only by...: Please contact Ben O'Dell for any additional information about the President's Advisory Council meeting...: November 9, 2012. Ben O'Dell, Designated Federal Officer and Associate Director, HHS Center for Faith-based...

  18. 30 CFR 203.4 - How do the provisions in this part apply to different types of leases and projects?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... justification report (prescribed format) X (3) Economic viability and relief justification report (Royalty Suspension Viability Program (RSVP) model inputs justified with Geological and Geophysical (G&G), Engineering... template) (6) Determined to be economic only with relief X X X (d) The following table indicates by an X...

  19. 30 CFR 203.4 - How do the provisions in this part apply to different types of leases and projects?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... justification report (prescribed format) X (3) Economic viability and relief justification report (Royalty Suspension Viability Program (RSVP) model inputs justified with Geological and Geophysical (G&G), Engineering...) (6) Determined to be economic only with relief X X X (d) The following table indicates by an X, and...

  20. 30 CFR 203.4 - How do the provisions in this part apply to different types of leases and projects?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... justification report (prescribed format) X (3) Economic viability and relief justification report (Royalty Suspension Viability Program (RSVP) model inputs justified with Geological and Geophysical (G&G), Engineering...) (6) Determined to be economic only with relief X X X (d) The following table indicates by an X, and...

  1. 30 CFR 203.4 - How do the provisions in this part apply to different types of leases and projects?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... report (prescribed format) X (3) Economic viability and relief justification report (Royalty Suspension Viability Program (RSVP) model inputs justified with Geological and Geophysical (G&G), Engineering... template) (6) Determined to be economic only with relief X X X (d) The following table indicates by an X...

  2. 30 CFR 203.4 - How do the provisions in this part apply to different types of leases and projects?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... justification report (prescribed format) X (3) Economic viability and relief justification report (Royalty Suspension Viability Program (RSVP) model inputs justified with Geological and Geophysical (G&G), Engineering...) (6) Determined to be economic only with relief X X X (d) The following table indicates by an X, and...

  3. Electroencephalography (EEG) Feedback in Decision-Making

    DTIC Science & Technology

    2015-08-26

    Variability in individual subject BCI classification...approach traditionally used in single-trial BCI (Brain-Computer Interface) tasks suggested a similar effect-size and scalp distribution. However...situation. Although nearly all BCI paradigms have used a variant of the RSVP technique, there was no indication in the literature as to why this was

  4. 78 FR 17676 - Meeting Notice for the President's Advisory Council on Faith-Based and Neighborhood Partnerships

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-22

    ... event. Please RSVP to Ben O'Dell at [email protected] . The meeting will be available to the public... policies, programs, and practices. Contact Person for Additional Information: Please contact Ben O'Dell for.... Comments and questions can be sent in advance to [email protected] . Dated: March 19, 2013. Ben O'Dell...

  5. 75 FR 53980 - Notice of Field Tours for the Pinedale Anticline Working Group

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-02

    ...) PAWG will conduct field tours of the Pinedale Anticline Project Area (PAPA). Tours are open to the...-LXSI016K0000] Notice of Field Tours for the Pinedale Anticline Working Group AGENCY: Bureau of Land Management... RSVP no later than one week prior to each field trip to Shelley Gregory, BLM Pinedale Field Office, P.O...

  6. The Transgenic RNAi Project at Harvard Medical School: Resources and Validation.

    PubMed

    Perkins, Lizabeth A; Holderbaum, Laura; Tao, Rong; Hu, Yanhui; Sopko, Richelle; McCall, Kim; Yang-Zhou, Donghui; Flockhart, Ian; Binari, Richard; Shim, Hye-Seok; Miller, Audrey; Housden, Amy; Foos, Marianna; Randkelv, Sakara; Kelley, Colleen; Namgyal, Pema; Villalta, Christians; Liu, Lu-Ping; Jiang, Xia; Huan-Huan, Qiao; Wang, Xia; Fujiyama, Asao; Toyoda, Atsushi; Ayers, Kathleen; Blum, Allison; Czech, Benjamin; Neumuller, Ralph; Yan, Dong; Cavallaro, Amanda; Hibbard, Karen; Hall, Don; Cooley, Lynn; Hannon, Gregory J; Lehmann, Ruth; Parks, Annette; Mohr, Stephanie E; Ueda, Ryu; Kondo, Shu; Ni, Jian-Quan; Perrimon, Norbert

    2015-11-01

    To facilitate large-scale functional studies in Drosophila, the Drosophila Transgenic RNAi Project (TRiP) at Harvard Medical School (HMS) was established along with several goals: developing efficient vectors for RNAi that work in all tissues, generating a genome-scale collection of RNAi stocks with input from the community, distributing the lines as they are generated through existing stock centers, validating as many lines as possible using RT-qPCR and phenotypic analyses, and developing tools and web resources for identifying RNAi lines and retrieving existing information on their quality. With these goals in mind, here we describe in detail the various tools we developed and the status of the collection, which is currently composed of 11,491 lines and covering 71% of Drosophila genes. Data on the characterization of the lines either by RT-qPCR or phenotype is available on a dedicated website, the RNAi Stock Validation and Phenotypes Project (RSVP, http://www.flyrnai.org/RSVP.html), and stocks are available from three stock centers, the Bloomington Drosophila Stock Center (United States), National Institute of Genetics (Japan), and TsingHua Fly Center (China). Copyright © 2015 by the Genetics Society of America.

  7. Applications That Participate in Their Own Defense (APOD)

    DTIC Science & Technology

    2003-05-01

    bandwidth requirements from multiple applications and uses ssh to directly log in to the RSVP routers to reconfigure the priority queues. This approach...detect flooding. Emerald makes use of some signature matching techniques on BSM logs, but the unique strength of Emerald technology is in event...mechanisms that provide awareness, and IDSs form an important class of these. We investigated several COTS and research IDSs including Emerald

  8. 76 FR 62777 - Forum-Trends and Causes of Observed Changes in Heat Waves, Cold Waves, Floods and Drought

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-11

    ... public comments will be allotted based on the order in which RSVPs are received. Written comments may be submitted via email or in hardcopy and must be received by October 25, 2011. Please see addresses below... the forum must RSVP no later than 5 p.m. EDT, Tuesday, October 25, 2011. Deadline for Written Comments...

  9. Dynamic Bandwidth Provisioning Using Markov Chain Based on RSVP

    DTIC Science & Technology

    2013-09-01

    Sagir, Yavuz. Naval Postgraduate School, Monterey, CA 93943-5000. ...is finite or countable. A Markov process is basically a stochastic process in which the past history of the process is irrelevant if the current...
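The Markov property excerpted in this record (the next state depends only on the current state, not on the past history) can be sketched in a few lines. This is a generic illustration with hypothetical bandwidth-demand states, not the thesis's actual provisioning model:

```python
# Minimal Markov-chain sketch: three hypothetical bandwidth-demand
# states. The next state depends only on the current state; the past
# history of the process is irrelevant (the Markov property).
import random

STATES = ["low", "medium", "high"]
# Row i gives P(next state | current state == STATES[i]); each row sums to 1.
P = [
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.1, 0.3, 0.6],
]

def step(state, rng):
    """Sample the next state given only the current state."""
    row = P[STATES.index(state)]
    return rng.choices(STATES, weights=row)[0]

def empirical_distribution(n_steps=100_000, seed=0):
    """Estimate the long-run (stationary) state frequencies by simulation."""
    rng = random.Random(seed)
    counts = {s: 0 for s in STATES}
    state = "low"
    for _ in range(n_steps):
        state = step(state, rng)
        counts[state] += 1
    return {s: counts[s] / n_steps for s in STATES}
```

Simulating the chain long enough gives the stationary distribution, which a provisioning scheme could use to size reservations for each demand level.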

  10. Using Time-Phased Casualty Estimates to Determine Medical Resupply Requirements

    DTIC Science & Technology

    2006-09-18

    calculated from the list of tasks. The RSVP-planned MTF laydown would be replaced by the reporting MTF with a known location. One advantage of...Another advantage is the ability to adapt quickly to changing requirements. Supplies that are used at a faster than initially forecast rate will...Officer (GMO) Platforms. San Diego, Calif: Naval Health Research Center; 2001. Technical Report No. 01-18. 5. Galarneau MR, Pang G, Konoske PJ

  11. Resource Allocation over a GRID Military Network

    DTIC Science & Technology

    2006-12-01

    The behaviour is called PHB (Per Hop Behaviour) and it is defined locally; i.e., it is not an end-to-end specification (as for RSVP) but it is...The class selector PHB offers three forwarding priorities: Expedited Forwarding (EF) characterized by a minimum...[14] J. Heinanen, F. Baker, W. Weiss, J. Wroclawski, "Assured Forwarding PHB Group," IETF RFC 2597, June 1999. [15] E. Crawley, R. Nair, B

  12. Protocol Handbook,

    DTIC Science & Technology

    1985-04-01

    all invitations should be handwritten in black ink and addressed in the full name of the husband and wife unless the guest is single. Requesting an... is handwritten in black ink. If the reply is by telephone, the number is written directly beneath the R.S.V.P. (or a separate response card may be...styles. The card should be engraved with black ink on excellent quality card stock (usually white or cream in color). Script lettering is the most

  13. The Role of Memory Processes in Repetition Blindness

    NASA Technical Reports Server (NTRS)

    Johnston, James C.; Hochhaus, Larry; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    We investigated whether Repetition Blindness (RB) in processing RSVP strings depends critically on memory demands. When all items in the sequence had to be reported, strong RB was found. When only the 2 critical items (cued by color) had to be reported, no RB was found. Preliminary results show that imposing a separate memory load, while reporting only the critical items, also produces little RB. Implications for the processing locus of RB will be discussed.

  14. The Attentional Blink Does Not Prevent Character Identification

    NASA Technical Reports Server (NTRS)

    Ruthruff, Eric; Johnston, James C.; McCann, Robert; Reington, Roger W. (Technical Monitor)

    1998-01-01

    The standard RSVP Attentional Blink (AB) paradigm was modified so that RT to the second target could be measured. Character distortion, intended to prolong the letter-identification processing stage, had a marked effect at long lags (baseline condition), but virtually no effect at short lags where AB interference occurred. According to locus-of-slack logic, this pattern of results indicates that the attentional blink causes a processing bottleneck at some stage subsequent to letter identification.

  15. Attentional shifts between surfaces: effects on detection and early brain potentials.

    PubMed

    Pinilla, T; Cobo, A; Torres, K; Valdes-Sosa, M

    2001-06-01

    Two consecutive events transforming the same illusory surface in transparent motion (brief changes in direction) can be discriminated with ease, but a prolonged interference ( approximately 500 ms) on the discrimination of the second event arises when different surfaces are concerned [Valdes-Sosa, M., Cobo, A., & Pinilla, T. (2000). Attention to object files defined by transparent motion. Journal of Experimental Psychology: Human Perception and Performance, 26(2), 488-505]. Here we further characterise this phenomenon and compare it to the attentional blink AB [Shapiro, K.L., Raymond, J.E., & Arnell, K.M. (1994). Attention to visual pattern information produces the attentional blink in RSVP. Journal of Experimental Psychology: Human Perception and Performance, 20, 357-371]. Similar to the AB, reduced sensitivity (d') was found in the two-surface condition. However, the two-surface cost was associated with a reduced N1 brain response in contrast to reports for AB [Vogel, E.K., Luck, S.J., & Shapiro, K. (1998). Electrophysiological evidence for a postperceptual locus of suppression during the attentional blink. Journal of Experimental Psychology: Human Perception and Performance, 24(6), 1656-1674]. The results from this study indicate that the two-surface cost corresponds to competitive effects in early vision. Reasons for the discrepancy with the AB study are considered.
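The sensitivity measure d′ reported in this record has a standard signal-detection definition: d′ = Z(hit rate) − Z(false-alarm rate), where Z is the inverse of the standard normal CDF. A minimal sketch with illustrative rates (not the study's data):

```python
# Signal-detection sensitivity: d' = Z(hit rate) - Z(false-alarm rate),
# where Z is the inverse of the standard normal CDF.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Compute d' from a hit rate and a false-alarm rate (both in (0, 1))."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative values only: a condition with more hits and fewer false
# alarms (e.g., same-surface) yields a larger d' than one with reduced
# sensitivity (e.g., two-surface).
same_surface = d_prime(0.90, 0.10)   # ~2.56
two_surface = d_prime(0.75, 0.20)    # ~1.52
```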

  16. An EEG-Based Person Authentication System with Open-Set Capability Combining Eye Blinking Signals

    PubMed Central

    Wu, Qunjian; Zeng, Ying; Zhang, Chi; Tong, Li; Yan, Bin

    2018-01-01

    The electroencephalogram (EEG) signal represents a subject’s specific brain activity patterns and is considered as an ideal biometric given its superior forgery prevention. However, the accuracy and stability of the current EEG-based person authentication systems are still unsatisfactory in practical application. In this paper, a multi-task EEG-based person authentication system combining eye blinking is proposed, which can achieve high precision and robustness. Firstly, we design a novel EEG-based biometric evoked paradigm using self- or non-self-face rapid serial visual presentation (RSVP). The designed paradigm could obtain a distinct and stable biometric trait from EEG with a lower time cost. Secondly, the event-related potential (ERP) features and morphological features are extracted from EEG signals and eye blinking signals, respectively. Thirdly, convolutional neural network and back propagation neural network are severally designed to gain the score estimation of EEG features and eye blinking features. Finally, a score fusion technology based on least square method is proposed to get the final estimation score. The performance of multi-task authentication system is improved significantly compared to the system using EEG only, with an increasing average accuracy from 92.4% to 97.6%. Moreover, open-set authentication tests for additional imposters and permanence tests for users are conducted to simulate the practical scenarios, which have never been employed in previous EEG-based person authentication systems. A mean false accepted rate (FAR) of 3.90% and a mean false rejected rate (FRR) of 3.87% are accomplished in open-set authentication tests and permanence tests, respectively, which illustrate the open-set authentication and permanence capability of our systems. PMID:29364848

  17. An EEG-Based Person Authentication System with Open-Set Capability Combining Eye Blinking Signals.

    PubMed

    Wu, Qunjian; Zeng, Ying; Zhang, Chi; Tong, Li; Yan, Bin

    2018-01-24

    The electroencephalogram (EEG) signal represents a subject's specific brain activity patterns and is considered as an ideal biometric given its superior forgery prevention. However, the accuracy and stability of the current EEG-based person authentication systems are still unsatisfactory in practical application. In this paper, a multi-task EEG-based person authentication system combining eye blinking is proposed, which can achieve high precision and robustness. Firstly, we design a novel EEG-based biometric evoked paradigm using self- or non-self-face rapid serial visual presentation (RSVP). The designed paradigm could obtain a distinct and stable biometric trait from EEG with a lower time cost. Secondly, the event-related potential (ERP) features and morphological features are extracted from EEG signals and eye blinking signals, respectively. Thirdly, convolutional neural network and back propagation neural network are severally designed to gain the score estimation of EEG features and eye blinking features. Finally, a score fusion technology based on least square method is proposed to get the final estimation score. The performance of multi-task authentication system is improved significantly compared to the system using EEG only, with an increasing average accuracy from 92.4% to 97.6%. Moreover, open-set authentication tests for additional imposters and permanence tests for users are conducted to simulate the practical scenarios, which have never been employed in previous EEG-based person authentication systems. A mean false accepted rate (FAR) of 3.90% and a mean false rejected rate (FRR) of 3.87% are accomplished in open-set authentication tests and permanence tests, respectively, which illustrate the open-set authentication and permanence capability of our systems.
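The least-squares score-fusion step described in this record can be sketched generically: fit weights that best map the two classifiers' scores (EEG, eye blinking) onto genuine/imposter labels, then combine new score pairs with those weights. This is an illustrative reconstruction under stated assumptions (two features, no bias term, invented data), not the authors' implementation:

```python
# Least-squares score fusion sketch for two classifier outputs.
# Given per-trial score pairs (eeg_score, blink_score) and labels
# (1 = genuine user, 0 = imposter), solve the 2x2 normal equations
# (X^T X) w = X^T y for the fusion weights, then score new trials
# as w1 * eeg_score + w2 * blink_score.

def fit_fusion_weights(scores, labels):
    """scores: list of (eeg_score, blink_score); labels: 1 genuine, 0 imposter."""
    sxx = sum(e * e for e, b in scores)
    sxy = sum(e * b for e, b in scores)
    syy = sum(b * b for e, b in scores)
    tx = sum(e * y for (e, b), y in zip(scores, labels))
    ty = sum(b * y for (e, b), y in zip(scores, labels))
    det = sxx * syy - sxy * sxy  # assumes non-degenerate score data
    w1 = (syy * tx - sxy * ty) / det
    w2 = (sxx * ty - sxy * tx) / det
    return w1, w2

def fused_score(weights, score_pair):
    """Combine one (eeg_score, blink_score) pair into a final score."""
    w1, w2 = weights
    e, b = score_pair
    return w1 * e + w2 * b
```

The fused score would then be thresholded to accept or reject a claimed identity, with the threshold chosen to trade off FAR against FRR.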

  18. Topographically Guided LASIK for Myopia Using the NIDEK CXII Customized Aspheric Treatment Zone (CATz)

    PubMed Central

    Waring, George; Dougherty, Paul J.; Chayet, Arturo; Fischer, Jeffery; Fant, Barbara; Stevens, Gary; Bains, Harkaran S.

    2007-01-01

    Purpose To assess the efficacy, predictability, and safety of topography-guided laser in situ keratomileusis (LASIK) for the surgical correction of low to moderate myopia with astigmatism using the Nidek CXIII excimer laser equipped with the customized aspheric treatment zone (CATz) algorithm. Methods In a multicenter US Food and Drug Administration study of topography-guided LASIK, 4 centers enrolled 135 eyes with manifest refraction sphere that ranged from −0.50 to −7.00 D (mean, −3.57 ± 1.45) with up to −4.00 D of astigmatism (mean, −1.02 ± 0.64 D). The intended outcome was plano in all eyes. Refractive outcomes and higher-order aberrations were analyzed preoperatively and postoperatively. Patient satisfaction was assessed using both the validated Refractive Status and Vision Profile (RSVP) questionnaire and a questionnaire designed for this study. Six-month postoperative outcomes are reported here. Results By 6 months postoperatively, the manifest refraction spherical equivalent (MRSE) for all eyes was −0.09 ± 0.31 D. Six months postoperatively, 116 of 131 eyes (88.55%) had an uncorrected visual acuity of 20/20 or better, and 122 of 131 eyes (93.13%) had a MRSE within ±0.50 D. Distance best spectacle-corrected visual acuity (BSCVA) increased by 2 or more lines in 21 of 131 eyes (19.01%), and no eyes lost 2 lines or more of BSCVA. The total ocular higher-order aberrations root-mean-square increased by 0.04 μm postoperatively. Patients reported significantly fewer night driving and glare and halo symptoms postoperatively than preoperatively. Conclusions Nidek CXIII CATz treatment of myopia with astigmatism is safe, efficacious, and predictable, and it reduces patient symptoms associated with night driving and glare and halo symptoms. PMID:18427614

  19. Out of the Corner of My Eye: Foveal Semantic Load Modulates Parafoveal Processing in Reading.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Payne, Brennan R.; Stites, Mallory C.; Federmeier, Kara D.

    In two experiments, we examined the impact of foveal semantic expectancy and congruity on parafoveal word processing during reading. Experiment 1 utilized an eye-tracking gaze contingent display change paradigm, and Experiment 2 measured event-related brain potentials (ERP) in a modified RSVP paradigm to track the time-course of foveal semantic influences on covert attentional allocation to parafoveal word processing. Furthermore, eye-tracking and ERP data converged to reveal graded effects of semantic foveal load on parafoveal processing.

  20. Out of the Corner of My Eye: Foveal Semantic Load Modulates Parafoveal Processing in Reading.

    DOE PAGES

    Payne, Brennan R.; Stites, Mallory C.; Federmeier, Kara D.

    2016-07-18

    In two experiments, we examined the impact of foveal semantic expectancy and congruity on parafoveal word processing during reading. Experiment 1 utilized an eye-tracking gaze contingent display change paradigm, and Experiment 2 measured event-related brain potentials (ERP) in a modified RSVP paradigm to track the time-course of foveal semantic influences on covert attentional allocation to parafoveal word processing. Furthermore, eye-tracking and ERP data converged to reveal graded effects of semantic foveal load on parafoveal processing.

  1. Serifs and font legibility

    PubMed Central

    Arditi, Aries; Cho, Jianna

    2015-01-01

    Using lower-case fonts varying only in serif size (0%, 5%, and 10% of cap height), we assessed legibility using size thresholds and reading speed. Five percent serif fonts were slightly more legible than sans serif, but the average inter-letter spacing increase that serifs themselves impose predicts a greater enhancement than we observed. RSVP and continuous reading speeds showed no effect of serifs. When text is small or distant, serifs may, then, produce a tiny legibility increase due to the concomitant increase in spacing. However, our data exhibited no difference in legibility between typefaces that differ only in the presence or absence of serifs. PMID:16099015

  2. Association Of Tricuspid Regurgitation And Severity Of Mitral Stenosis In Patients With Rheumatic Heart Disease.

    PubMed

    Ahmed, Rehan; Kazmi, Nasir; Naz, Farhat; Malik, Saqib; Gillani, Saima

    2016-01-01

    Rheumatic heart disease is a common ailment in Pakistan, and mitral stenosis is its flag bearer. Severity of mitral stenosis is the key factor in deciding for mitral valve surgery. This case series study was conducted at Ayub Teaching Hospital. Cases of rheumatic heart disease with mitral stenosis were diagnosed clinically, and 2D echocardiography was used to grade the severity of mitral stenosis. Data were entered into SPSS-17.0, and results were recorded and analysed. Pearson's two-tailed correlation was used to assess the association between the presence of tricuspid regurgitation and severe mitral stenosis, with significance set at p<0.05. A total of 35 patients with pure mitral stenosis were included in the study, of whom 8 were male and 27 were female. Mean age was 34.5±15.85 years in males and 31±8 years in females. Twenty-two of the 35 patients (62.86%) had tricuspid regurgitation, while 13 of 35 (37.14%) had none. Mean mitral valve area (MVA) was 0.84±0.3 cm2 in patients with tricuspid regurgitation and 1.83±0.7 cm2 in patients without. Mean left atrial (LA) size was 45.23±1.5 mm in patients with tricuspid regurgitation and 44.13±6.14 mm in patients without. Mean RSVP was 57.5 mmHg in patients with tricuspid regurgitation, while RSVP could not be calculated in patients without tricuspid regurgitation. It was concluded that tricuspid regurgitation was strongly associated with severe mitral stenosis, as almost all patients with severe mitral stenosis had tricuspid regurgitation and none of the patients with mild mitral stenosis did.
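The correlation step described above, relating the presence of tricuspid regurgitation to stenosis severity measured by mitral valve area, can be sketched as a Pearson (point-biserial) correlation. The data values below are hypothetical placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical example: 1 = tricuspid regurgitation present, 0 = absent,
# paired with mitral valve area in cm^2 (smaller MVA = more severe stenosis).
tr_present = np.array([1, 1, 1, 0, 0, 0])
mva_cm2 = np.array([0.7, 0.9, 1.0, 1.6, 1.9, 2.1])

# Pearson correlation between a binary and a continuous variable
# (equivalent to the point-biserial coefficient).
r = np.corrcoef(tr_present, mva_cm2)[0, 1]
# A negative r indicates regurgitation accompanies smaller valve areas;
# a two-tailed p-value would come from scipy.stats.pearsonr.
```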

  3. Face adaptation does not improve performance on search or discrimination tasks

    PubMed Central

    Ng, Minna; Boynton, Geoffrey M.; Fine, Ione

    2011-01-01

    The face adaptation effect, as described by M. A. Webster and O. H. MacLin (1999), is a robust perceptual shift in the appearance of faces after a brief adaptation period. For example, prolonged exposure to Asian faces causes a Eurasian face to appear distinctly Caucasian. This adaptation effect has been documented for general configural effects, as well as for the facial properties of gender, ethnicity, expression, and identity. We began by replicating the finding that adaptation to ethnicity, gender, and a combination of both features induces selective shifts in category appearance. We then investigated whether this adaptation has perceptual consequences beyond a shift in the perceived category boundary by measuring the effects of adaptation on RSVP, spatial search, and discrimination tasks. Adaptation had no discernable effect on performance for any of these tasks. PMID:18318604

  4. Face adaptation does not improve performance on search or discrimination tasks.

    PubMed

    Ng, Minna; Boynton, Geoffrey M; Fine, Ione

    2008-01-04

    The face adaptation effect, as described by M. A. Webster and O. H. MacLin (1999), is a robust perceptual shift in the appearance of faces after a brief adaptation period. For example, prolonged exposure to Asian faces causes a Eurasian face to appear distinctly Caucasian. This adaptation effect has been documented for general configural effects, as well as for the facial properties of gender, ethnicity, expression, and identity. We began by replicating the finding that adaptation to ethnicity, gender, and a combination of both features induces selective shifts in category appearance. We then investigated whether this adaptation has perceptual consequences beyond a shift in the perceived category boundary by measuring the effects of adaptation on RSVP, spatial search, and discrimination tasks. Adaptation had no discernable effect on performance for any of these tasks.

  5. Z3 topological order in the face-centered-cubic quantum plaquette model

    NASA Astrophysics Data System (ADS)

    Devakul, Trithep

    2018-04-01

    We examine the topological order in the resonating singlet valence plaquette (RSVP) phase of the hard-core quantum plaquette model (QPM) on the face centered cubic (FCC) lattice. To do this, we construct a Rokhsar-Kivelson type Hamiltonian of local plaquette resonances. This model is shown to exhibit a Z3 topological order, which we show by identifying a Z3 topological constant (which leads to a 3^3-fold topological ground state degeneracy on the 3-torus) and topological pointlike charge and looplike magnetic excitations which obey Z3 statistics. We also consider an exactly solvable generalization of this model, which makes the geometrical origin of the Z3 order explicitly clear. For other models and lattices, such generalizations produce a wide variety of topological phases, some of which are novel fracton phases.

  6. Getting ahead of yourself: Parafoveal word expectancy modulates the N400 during sentence reading

    DOE PAGES

    Stites, Mallory C.; Payne, Brennan R.; Federmeier, Kara D.

    2017-01-18

    An important question in the reading literature regards the nature of the semantic information readers can extract from the parafovea (i.e., the next word in a sentence). Recent eye-tracking findings have found a semantic parafoveal preview benefit under many circumstances, and findings from event-related brain potentials (ERPs) also suggest that readers can at least detect semantic anomalies parafoveally. We use ERPs to ask whether fine-grained aspects of semantic expectancy can affect the N400 elicited by a word appearing in the parafovea. In an RSVP-with-flankers paradigm, sentences were presented word by word, flanked 2° bilaterally by the previous and upcoming words. Stimuli consisted of high constraint sentences that were identical up to the target word, which could be expected, unexpected but plausible, or anomalous, as well as low constraint sentences that were always completed with the most expected ending. Findings revealed an N400 effect to the target word when it appeared in the parafovea, which was graded with respect to the target’s expectancy and congruency within the sentence context. Moreover, when targets appeared at central fixation, this graded congruency effect was mitigated, suggesting that the semantic information gleaned from parafoveal vision functionally changes the semantic processing of those words when foveated.

  7. Getting ahead of yourself: Parafoveal word expectancy modulates the N400 during sentence reading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stites, Mallory C.; Payne, Brennan R.; Federmeier, Kara D.

    An important question in the reading literature regards the nature of the semantic information readers can extract from the parafovea (i.e., the next word in a sentence). Recent eye-tracking findings have found a semantic parafoveal preview benefit under many circumstances, and findings from event-related brain potentials (ERPs) also suggest that readers can at least detect semantic anomalies parafoveally. We use ERPs to ask whether fine-grained aspects of semantic expectancy can affect the N400 elicited by a word appearing in the parafovea. In an RSVP-with-flankers paradigm, sentences were presented word by word, flanked 2° bilaterally by the previous and upcoming words. Stimuli consisted of high constraint sentences that were identical up to the target word, which could be expected, unexpected but plausible, or anomalous, as well as low constraint sentences that were always completed with the most expected ending. Findings revealed an N400 effect to the target word when it appeared in the parafovea, which was graded with respect to the target’s expectancy and congruency within the sentence context. Moreover, when targets appeared at central fixation, this graded congruency effect was mitigated, suggesting that the semantic information gleaned from parafoveal vision functionally changes the semantic processing of those words when foveated.

  8. Young children's coding and storage of visual and verbal material.

    PubMed

    Perlmutter, M; Myers, N A

    1975-03-01

    36 preschool children (mean age 4.2 years) were each tested on 3 recognition memory lists differing in test mode (visual only, verbal only, combined visual-verbal). For one-third of the children, original list presentation was visual only, for another third, presentation was verbal only, and the final third received combined visual-verbal presentation. The subjects generally performed at a high level of correct responding. Verbal-only presentation resulted in less correct recognition than did either visual-only or combined visual-verbal presentation. However, because performances under both visual-only and combined visual-verbal presentation were statistically comparable, and a high level of spontaneous labeling was observed when items were presented only visually, a dual-processing conceptualization of memory in 4-year-olds was suggested.

  9. Temporal Influence on Awareness

    DTIC Science & Technology

    1995-12-01

    List-of-figures excerpt: Test Setup Timing: Measured vs. Expected Modal Delays (in ms); Experiment I: visual and auditory stimuli presented simultaneously (visual-auditory delay = 0 ms, visual-visual delay = 0 ms); Experiment II: visual and auditory stimuli presented in order (visual-auditory delay = 0 ms, visual-visual delay = variable).

  10. A Comparison of the Connotative Meaning of Visuals Presented Singly and in Simultaneous and Sequential Juxtapositions.

    ERIC Educational Resources Information Center

    Noland, Mildred Jean

    A study was conducted investigating whether a sequence of visuals presented in a serial manner differs in connotative meaning from the same set of visuals presented simultaneously. How the meanings of pairs of shots relate to their constituent visuals was also explored. Sixteen pairs of visuals were presented to both male and female subjects in…

  11. Presentation-Oriented Visualization Techniques.

    PubMed

    Kosara, Robert

    2016-01-01

    Data visualization research focuses on data exploration and analysis, yet the vast majority of visualizations people see were created for a different purpose: presentation. Whether we are talking about charts showing data to help make a presenter's point, data visuals created to accompany a news story, or the ubiquitous infographics, many more people consume charts than make them. Traditional visualization techniques treat presentation as an afterthought, but are there techniques uniquely suited to data presentation but not necessarily ideal for exploration and analysis? This article focuses on presentation-oriented techniques, considering their usefulness for presentation first and any other purposes as secondary.

  12. Quantifiers more or less quantify online: ERP evidence for partial incremental interpretation

    PubMed Central

    Urbach, Thomas P.; Kutas, Marta

    2010-01-01

    Event-related brain potentials were recorded during RSVP reading to test the hypothesis that quantifier expressions are incrementally interpreted fully and immediately. In sentences tapping general knowledge (Farmers grow crops/worms as their primary source of income), Experiment 1 found larger N400s for atypical (worms) than typical objects (crops). Experiment 2 crossed object typicality with non-logical subject-noun phrase quantifiers (most, few). Off-line plausibility ratings exhibited the crossover interaction predicted by full quantifier interpretation: Most farmers grow crops and Few farmers grow worms were rated more plausible than Most farmers grow worms and Few farmers grow crops. Object N400s, although modulated in the expected direction, did not reverse. Experiment 3 replicated these findings with adverbial quantifiers (Farmers often/rarely grow crops/worms). Interpretation of quantifier expressions thus is neither fully immediate nor fully delayed. Furthermore, object atypicality was associated with a frontal slow positivity in few-type/rarely quantifier contexts, suggesting systematic processing differences among quantifier types. PMID:20640044

  13. Visual Presentation Effects on Identification of Multiple Environmental Sounds

    PubMed Central

    Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio

    2016-01-01

    This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. 
Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478

  14. Meaning and Identities: A Visual Performative Pedagogy for Socio-Cultural Learning

    ERIC Educational Resources Information Center

    Grushka, Kathryn

    2009-01-01

    In this article I present personalised socio-cultural inquiry in visual art education as a critical and expressive material praxis. The model of "Visual Performative Pedagogy and Communicative Proficiency for the Visual Art Classroom" is presented as a legitimate means of manipulating visual codes, communicating meaning and mediating…

  15. Tips for better visual elements in posters and podium presentations.

    PubMed

    Zerwic, J J; Grandfield, K; Kavanaugh, K; Berger, B; Graham, L; Mershon, M

    2010-08-01

    The ability to effectively communicate through posters and podium presentations using appropriate visual content and style is essential for health care educators. To offer suggestions for more effective visual elements of posters and podium presentations. We present the experiences of our multidisciplinary publishing group, whose combined experiences and collaboration have provided us with an understanding of what works and how to achieve success when working on presentations and posters. Many others would offer similar advice, as these guidelines are consistent with effective presentation. Findings/Suggestions: Certain visual elements should be attended to in any visual presentation: consistency, alignment, contrast and repetition. Presentations should be consistent in font size and type, line spacing, alignment of graphics and text, and size of graphics. All elements should be aligned with at least one other element. Contrasting light background with dark text (and vice versa) helps an audience read the text more easily. Standardized formatting lets viewers know when they are looking at similar things (tables, headings, etc.). Using a minimal number of colors (four at most) helps the audience more easily read text. For podium presentations, have one slide for each minute allotted for speaking. The speaker is also a visual element; one should not allow the audience's view of either the presentation or presenter to be blocked. Making eye contact with the audience also keeps them visually engaged. Health care educators often share information through posters and podium presentations. These tips should help the visual elements of presentations be more effective.

  16. Visual Disability Among Juvenile Open-angle Glaucoma Patients.

    PubMed

    Gupta, Viney; Ganesan, Vaitheeswaran L; Kumar, Sandip; Chaurasia, Abadh K; Malhotra, Sumit; Gupta, Shikha

    2018-04-01

    Juvenile onset primary open-angle glaucoma (JOAG), unlike adult onset primary open-angle glaucoma, presents with high intraocular pressure and diffuse visual field loss, which, if left untreated, leads to severe visual disability. The study aimed to evaluate the extent of visual disability among JOAG patients presenting to a tertiary eye care facility. Visual acuity and perimetry records of unrelated JOAG patients presenting to our Glaucoma facility were analyzed. Low vision and blindness were categorized by the WHO criteria, and percentage impairment was calculated as per the guidelines provided by the American Medical Association (AMA). Fifty-two (15%) of the 348 JOAG patients were bilaterally blind at presentation and 32 (9%) had low vision according to WHO criteria. Ninety JOAG patients (26%) had a visual impairment of 75% or more. Visual disability at presentation among JOAG patients is high. This entails a huge economic burden, given their young age and associated social responsibilities.

  17. Brief Communication: visual-field superiority as a function of stimulus type and content: further evidence.

    PubMed

    Basu, Anamitra; Mandal, Manas K

    2004-07-01

    The present study examined visual-field advantage as a function of presentation mode (unilateral, bilateral), stimulus structure (facial, lexical), and stimulus content (emotional, neutral). The experiment was conducted in a split visual-field paradigm using a JAVA-based computer program with recognition accuracy as the dependent measure. Unilaterally, rather than bilaterally, presented stimuli were significantly better recognized. Words were significantly better recognized than faces in the right visual-field; the difference was nonsignificant in the left visual-field. Emotional content elicited left visual-field and neutral content elicited right visual-field advantages. Copyright Taylor and Francis Inc.

  18. Adams-Based Rover Terramechanics and Mobility Simulator - ARTEMIS

    NASA Technical Reports Server (NTRS)

    Trease, Brian P.; Lindeman, Randel A.; Arvidson, Raymond E.; Bennett, Keith; VanDyke, Lauren P.; Zhou, Feng; Iagnemma, Karl; Senatore, Carmine

    2013-01-01

    The Mars Exploration Rovers (MERs), Spirit and Opportunity, far exceeded their original drive distance expectations and have traveled, at the time of this reporting, a combined 29 kilometers across the surface of Mars. The Rover Sequencing and Visualization Program (RSVP), the current program used to plan drives for MERs, is only a kinematic simulator of rover movement. Therefore, rover response to various terrains and soil types cannot be modeled. Although sandbox experiments attempt to model rover-terrain interaction, these experiments are time-intensive and costly, and they cannot be used within the tactical timeline of rover driving. Imaging techniques and hazard avoidance features on MER help to prevent the rover from traveling over dangerous terrains, but mobility issues have shown that these methods are not always sufficient. ARTEMIS, a dynamic modeling tool for MER, allows planned drives to be simulated before commands are sent to the rover. The deformable soils component of this model allows rover-terrain interactions to be simulated to determine if a particular drive path would take the rover over terrain that would induce hazardous levels of slip or sink. When used in the rover drive planning process, dynamic modeling reduces the likelihood of future mobility issues because high-risk areas could be identified before drive commands are sent to the rover, and drives planned over these areas could be rerouted. The ARTEMIS software consists of several components. These include a preprocessor, Digital Elevation Models (DEMs), Adams rover model, wheel and soil parameter files, MSC Adams GUI (commercial), MSC Adams dynamics solver (commercial), terramechanics subroutines (FORTRAN), a contact detection engine, a soil modification engine, and output DEMs of deformed soil. The preprocessor is used to define the terrain (from a DEM) and define the soil parameters for the terrain file. The Adams rover model is placed in this terrain. 
Wheel and soil parameter files can be altered in the respective text files. The rover model and terrain are viewed in Adams View, the GUI for ARTEMIS. The Adams dynamics solver calls terramechanics subroutines in FORTRAN containing the Bekker-Wong equations.
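The abstract does not reproduce the Bekker-Wong equations that ARTEMIS's FORTRAN terramechanics subroutines implement. As a sketch of one standard ingredient of such models, the classical Bekker pressure-sinkage relation is p = (k_c/b + k_phi) * z^n. The parameter values below are illustrative assumptions, not MER mission values.

```python
def bekker_pressure(z, b, k_c, k_phi, n):
    """Bekker pressure-sinkage relation: normal pressure (Pa) under a
    plate of width b (m) sunk to depth z (m) in deformable soil.

    k_c: cohesive modulus, k_phi: frictional modulus, n: sinkage exponent.
    """
    if z < 0:
        raise ValueError("sinkage must be non-negative")
    return (k_c / b + k_phi) * z ** n

# Illustrative dry-sand-like parameters (assumed for demonstration only):
p = bekker_pressure(z=0.02, b=0.15, k_c=990.0, k_phi=1528e3, n=1.1)
```

In a full terramechanics model, relations like this are integrated over the wheel-soil contact patch to predict sinkage and slip, which is what lets a simulator flag drive paths over hazardous terrain.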

  19. Dissociation of the Neural Correlates of Visual and Auditory Contextual Encoding

    ERIC Educational Resources Information Center

    Gottlieb, Lauren J.; Uncapher, Melina R.; Rugg, Michael D.

    2010-01-01

    The present study contrasted the neural correlates of encoding item-context associations according to whether the contextual information was visual or auditory. Subjects (N = 20) underwent fMRI scanning while studying a series of visually presented pictures, each of which co-occurred with either a visually or an auditorily presented name. The task…

  20. Why Do Pictures, but Not Visual Words, Reduce Older Adults’ False Memories?

    PubMed Central

    Smith, Rebekah E.; Hunt, R. Reed; Dunlap, Kathryn R.

    2015-01-01

    Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study list words is accompanied by related pictures relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word relative to hearing the word only. In both cases of pictures relative to visual words and visual words relative to auditory words alone, the benefit of picture and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment we provide the first simultaneous comparison of all three study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in the visual word condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. PMID:26213799

  1. Why do pictures, but not visual words, reduce older adults' false memories?

    PubMed

    Smith, Rebekah E; Hunt, R Reed; Dunlap, Kathryn R

    2015-09-01

    Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study list words is accompanied by related pictures relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word relative to hearing the word only. In both cases of pictures relative to visual words and visual words relative to auditory words alone, the benefit of picture and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment, we provide the first simultaneous comparison of all 3 study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in the visual word condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  2. Role of inter-hemispheric transfer in generating visual evoked potentials in V1-damaged brain hemispheres

    PubMed Central

    Kavcic, Voyko; Triplett, Regina L.; Das, Anasuya; Martin, Tim; Huxlin, Krystel R.

    2015-01-01

    Partial cortical blindness is a visual deficit caused by unilateral damage to the primary visual cortex, a condition previously considered beyond hope of rehabilitation. However, recent data demonstrate that patients may recover both simple and global motion discrimination following intensive training in their blind field. The present experiments characterized motion-induced neural activity of cortically blind (CB) subjects prior to the onset of visual rehabilitation. This was done to provide information about visual processing capabilities available to mediate training-induced visual improvements. Visual Evoked Potentials (VEPs) were recorded from two experimental groups consisting of 9 CB subjects and 9 age-matched, visually-intact controls. VEPs were collected following lateralized stimulus presentation to each of the 4 visual field quadrants. VEP waveforms were examined for both stimulus-onset (SO) and motion-onset (MO) related components in postero-lateral electrodes. While stimulus presentation to intact regions of the visual field elicited normal SO-P1, SO-N1, SO-P2 and MO-N2 amplitudes and latencies in contralateral brain regions of CB subjects, these components were not observed contralateral to stimulus presentation in blind quadrants of the visual field. In damaged brain hemispheres, SO-VEPs were only recorded following stimulus presentation to intact visual field quadrants, via inter-hemispheric transfer. MO-VEPs were only recorded from damaged left brain hemispheres, possibly reflecting a native left/right asymmetry in inter-hemispheric connections. The present findings suggest that damaged brain hemispheres contain areas capable of responding to visual stimulation. However, in the absence of training or rehabilitation, these areas only generate detectable VEPs in response to stimulation of the intact hemifield of vision. PMID:25575450

  3. Tips for Better Visual Elements in Posters and Podium Presentations

    PubMed Central

    Zerwic, JJ; Grandfield, K; Kavanaugh, K; Berger, B; Graham, L; Mershon, M

    2010-01-01

    Context The ability to effectively communicate through posters and podium presentations using appropriate visual content and style is essential for health care educators. Objectives To offer suggestions for more effective visual elements of posters and podium presentations. Methods We present the experiences of our multidisciplinary publishing group, whose combined experiences and collaboration have provided us with an understanding of what works and how to achieve success when working on presentations and posters. Many others would offer similar advice, as these guidelines are consistent with effective presentation. Findings/Suggestions Certain visual elements should be attended to in any visual presentation: consistency, alignment, contrast and repetition. Presentations should be consistent in font size and type, line spacing, alignment of graphics and text, and size of graphics. All elements should be aligned with at least one other element. Contrasting light background with dark text (and vice versa) helps an audience read the text more easily. Standardized formatting lets viewers know when they are looking at similar things (tables, headings, etc.). Using a minimal number of colors (four at most) helps the audience more easily read text. For podium presentations, have one slide for each minute allotted for speaking. The speaker is also a visual element; one should not allow the audience’s view of either the presentation or presenter to be blocked. Making eye contact with the audience also keeps them visually engaged. Conclusions Health care educators often share information through posters and podium presentations. These tips should help the visual elements of presentations be more effective. PMID:20853236

  4. Effects of Presentation Type and Visual Control in Numerosity Discrimination: Implications for Number Processing?

    PubMed Central

    Smets, Karolien; Moors, Pieter; Reynvoet, Bert

    2016-01-01

    Performance in a non-symbolic comparison task, in which participants are asked to indicate the larger of two dot arrays, is assumed to be supported by the Approximate Number System (ANS). This system allows participants to judge numerosity independently of other visual cues. Supporting this idea, previous studies indicated that numerosity can be processed when visual cues are controlled for. Consequently, distinct types of visual cue control are assumed to be interchangeable. However, a previous study showed that the type of visual cue control affected performance when the stimuli were presented simultaneously in numerosity comparison. In the current study, we explored whether the influence of the type of visual cue control disappeared when each stimulus was presented sequentially. While the influence of the applied type of visual cue control was significantly more evident in the simultaneous condition, sequential presentation did not completely eliminate the influence of distinct types of visual cue control. Altogether, these results indicate that the implicit assumption that performance can be compared across studies using different visual cue controls is unwarranted, and that the influence of the type of visual cue control depends partly on the presentation format of the stimuli. PMID:26869967

  5. Study of target and non-target interplay in spatial attention task.

    PubMed

    Sweeti; Joshi, Deepak; Panigrahi, B K; Anand, Sneh; Santhosh, Jayasree

    2018-02-01

    Selective visual attention is the ability to attend selectively to targets while inhibiting distractors. This paper studies the interplay of targets and non-targets in a spatial attention task in which the subject attends to a target object in one visual hemifield and ignores a distractor in the other hemifield. We performed averaged event-related potential (ERP) analysis and time-frequency analysis. The ERP analysis confirms left-hemisphere superiority in late potentials for targets presented in the right visual hemifield. The time-frequency analysis employs two measures, event-related spectral perturbation (ERSP) and inter-trial coherence (ITC). These measures behave similarly for targets in either visual hemifield but differ between targets and non-targets. In this way, the study helps to visualise the difference between targets in the left and right visual hemifields, and between targets and non-targets in each hemifield. These results could be used to monitor subjects' performance in brain-computer interface (BCI) and neurorehabilitation applications.
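
    The inter-trial coherence (ITC) used in abstracts like the one above has a compact definition: the magnitude of the mean unit-length phase vector across trials, ranging from 0 (random phases) to 1 (perfect phase locking). A minimal illustrative sketch, not the authors' analysis pipeline, assuming instantaneous phases have already been extracted for one time-frequency point:

```python
import cmath
import math
import random

def inter_trial_coherence(phases):
    """ITC at one time-frequency point: magnitude of the mean unit phase
    vector across trials. 1 = perfectly phase-locked, ~0 = random phases.

    phases: iterable of instantaneous phase angles (radians), one per trial.
    """
    vectors = [cmath.exp(1j * p) for p in phases]
    mean_vector = sum(vectors) / len(vectors)
    return abs(mean_vector)

# Identical phase on every trial -> ITC of 1
print(inter_trial_coherence([0.7] * 50))  # ~1.0

# Random phases -> ITC near 0 when many trials are averaged
random.seed(0)
noisy = [random.uniform(-math.pi, math.pi) for _ in range(5000)]
print(inter_trial_coherence(noisy) < 0.1)
```

    In practice the phases come from a wavelet or Hilbert transform of each EEG trial; the statistic itself is just this resultant-vector length.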

  6. Vision

    NASA Technical Reports Server (NTRS)

    Taylor, J. H.

    1973-01-01

    Some data on human vision, important in present and projected space activities, are presented. Visual environment and performance and structure of the visual system are also considered. Visual perception during stress is included.

  7. Visual Exemplification and Skin Cancer: The Utility of Exemplars in Promoting Skin Self-Exams and Atypical Nevi Identification.

    PubMed

    King, Andy J

    2016-07-01

    The present article reports an experiment investigating untested propositions of exemplification theory in the context of messages promoting early melanoma detection. The study tested visual exemplar presentation types, incorporating visual persuasion principles into the study of exemplification theory and strategic message design. Compared to a control condition, representative visual exemplification was more effective at increasing message effectiveness by eliciting a surprise response, which is consistent with predictions of exemplification theory. Furthermore, participants' perception of congruency between the images and text interacted with the type of visual exemplification to explain variation in message effectiveness. Different messaging strategies also influenced decision making, with the presentation of visual exemplars leading people to judge the atypicality of moles more conservatively. Overall, results suggest that certain visual messaging strategies may have unintended effects when presenting people with information about skin cancer. Implications for practice are discussed.

  8. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    PubMed

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  9. Visual feedback in stuttering therapy

    NASA Astrophysics Data System (ADS)

    Smolka, Elzbieta

    1997-02-01

    The aim of this paper is to present results concerning the influence of visual echo and reverberation on the speech of stutterers. Visual stimuli were compared with acoustic and visual-acoustic stimuli. Methods of implementing visual feedback with electroluminescent diodes driven by speech signals are then presented, along with the concept of a computerized visual echo based on acoustic recognition of Polish syllabic vowels. All the research and trials carried out at our center, aside from their cognitive aims, are directed toward the development of new speech correctors for use in stuttering therapy.

  10. An insect-inspired model for visual binding II: functional analysis and visual attention.

    PubMed

    Northcutt, Brandon D; Higgins, Charles M

    2017-04-01

    We have developed a neural network model capable of performing visual binding inspired by neuronal circuitry in the optic glomeruli of flies: a brain area that lies just downstream of the optic lobes where early visual processing is performed. This visual binding model is able to detect objects in dynamic image sequences and bind together their respective characteristic visual features-such as color, motion, and orientation-by taking advantage of their common temporal fluctuations. Visual binding is represented in the form of an inhibitory weight matrix which learns over time which features originate from a given visual object. In the present work, we show that information represented implicitly in this weight matrix can be used to explicitly count the number of objects present in the visual image, to enumerate their specific visual characteristics, and even to create an enhanced image in which one particular object is emphasized over others, thus implementing a simple form of visual attention. Further, we present a detailed analysis which reveals the function and theoretical limitations of the visual binding network and in this context describe a novel network learning rule which is optimized for visual binding.
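
    The core idea in the abstract above, binding features by their common temporal fluctuations, can be illustrated outside the neural-network setting. The sketch below is an illustration of the principle only, not the authors' model: it greedily groups feature channels whose time courses correlate strongly and counts the groups as candidate objects. All channel names and signal values are invented for the example.

```python
import math

def correlation(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def bind_features(signals, threshold=0.8):
    """Greedily group feature channels whose temporal fluctuations
    correlate above `threshold`; each group stands for one bound object."""
    groups = []
    for name, sig in signals.items():
        for group in groups:
            if correlation(sig, signals[group[0]]) > threshold:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

# Two objects: the features of each share a common temporal fluctuation
obj1 = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7]
obj2 = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3]
signals = {
    "red":      obj1,
    "leftward": [v * 2 for v in obj1],    # co-fluctuates with "red"
    "blue":     obj2,
    "vertical": [v + 0.05 for v in obj2], # co-fluctuates with "blue"
}
groups = bind_features(signals)
print(len(groups))  # 2 objects recovered from 4 feature channels
```

    The paper's inhibitory weight matrix learns an equivalent grouping online; correlating whole time series, as here, is the offline analogue of that learning rule.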

  11. CONTROLLING STUDENT RESPONSES DURING VISUAL PRESENTATIONS--STUDIES IN TELEVISED INSTRUCTION, THE ROLE OF VISUALS IN VERBAL LEARNING, REPORT 2.

    ERIC Educational Resources Information Center

    GROPPER, GEORGE L.

    THIS IS A REPORT OF TWO STUDIES IN WHICH PRINCIPLES OF PROGRAMED INSTRUCTION WERE ADAPTED FOR VISUAL PRESENTATIONS. SCIENTIFIC DEMONSTRATIONS WERE PREPARED WITH A VISUAL PROGRAM AND A VERBAL PROGRAM ON--(1) ARCHIMEDES' LAW AND (2) FORCE AND PRESSURE. RESULTS SUGGESTED THAT RESPONSES ARE MORE READILY BROUGHT UNDER THE CONTROL OF VISUAL PRESENTATION…

  12. Learning about Locomotion Patterns from Visualizations: Effects of Presentation Format and Realism

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Gerjets, Peter

    2011-01-01

    The rapid development of computer graphics technology has made possible an easy integration of dynamic visualizations into computer-based learning environments. This study examines the relative effectiveness of dynamic visualizations, compared either to sequentially or simultaneously presented static visualizations. Moreover, the degree of realism…

  13. Presentation of Information on Visual Displays.

    ERIC Educational Resources Information Center

    Pettersson, Rune

    This discussion of factors involved in the presentation of text, numeric data, and/or visuals using video display devices describes in some detail the following types of presentation: (1) visual displays, with attention to additive color combination; measurements, including luminance, radiance, brightness, and lightness; and standards, with…

  14. More is still not better: testing the perturbation model of temporal reference memory across different modalities and tasks.

    PubMed

    Ogden, Ruth S; Jones, Luke A

    2009-05-01

    The ability of the perturbation model (Jones & Wearden, 2003) to account for reference memory function in a visual temporal generalization task and in auditory and visual reproduction tasks was examined. In all tasks the number of presentations of the standard was manipulated (1, 3, or 5), and its effect on performance was compared. In visual temporal generalization, the number of presentations of the standard affected neither the number of times the standard was correctly identified nor the overall temporal generalization gradient. In auditory reproduction there was no effect of the number of times the standard was presented on mean reproductions. In visual reproduction, mean reproductions were shorter when the standard was presented only once; however, this effect was reduced when a visual cue was provided before the first presentation of the standard. Whilst the results of all experiments are best accounted for by the perturbation model, there appears to be some attentional benefit to multiple presentations of the standard in visual reproduction.

  15. Speech identification in noise: Contribution of temporal, spectral, and visual speech cues.

    PubMed

    Kim, Jeesun; Davis, Chris; Groot, Christopher

    2009-12-01

    This study investigated the degree to which two types of reduced auditory signals (cochlear implant simulations) and visual speech cues combined for speech identification. The auditory speech stimuli were filtered to have only amplitude envelope cues or both amplitude envelope and spectral cues and were presented with/without visual speech. In Experiment 1, IEEE sentences were presented in quiet and noise. For in-quiet presentation, speech identification was enhanced by the addition of both spectral and visual speech cues. Due to a ceiling effect, the degree to which these effects combined could not be determined. In noise, these facilitation effects were more marked and were additive. Experiment 2 examined consonant and vowel identification in the context of CVC or VCV syllables presented in noise. For consonants, both spectral and visual speech cues facilitated identification and these effects were additive. For vowels, the effect of combined cues was underadditive, with the effect of spectral cues reduced when presented with visual speech cues. Analysis indicated that without visual speech, spectral cues facilitated the transmission of place information and vowel height, whereas with visual speech, they facilitated lip rounding, with little impact on the transmission of place information.

  16. Visual mental image generation does not overlap with visual short-term memory: a dual-task interference study.

    PubMed

    Borst, Gregoire; Niven, Elaine; Logie, Robert H

    2012-04-01

    Visual mental imagery and working memory are often assumed to play similar roles in high-order functions, but little is known of their functional relationship. In this study, we investigated whether similar cognitive processes are involved in the generation of visual mental images, in short-term retention of those mental images, and in short-term retention of visual information. Participants encoded and recalled visually or aurally presented sequences of letters under two interference conditions: spatial tapping or irrelevant visual input (IVI). In Experiment 1, spatial tapping selectively interfered with the retention of sequences of letters when participants generated visual mental images from aural presentation of the letter names and when the letters were presented visually. In Experiment 2, encoding of the sequences was disrupted by both interference tasks. However, in Experiment 3, IVI interfered with the generation of the mental images, but not with their retention, whereas spatial tapping was more disruptive during retention than during encoding. Results suggest that the temporary retention of visual mental images and of visual information may be supported by the same visual short-term memory store but that this store is not involved in image generation.

  17. Visual Displays and Contextual Presentations in Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Park, Ok-choon

    1998-01-01

    Investigates the effects of two instructional strategies, visual display (animation, and static graphics with and without motion cues) and contextual presentation, in the acquisition of electronic troubleshooting skills using computer-based instruction. Study concludes that use of visual displays and contextual presentation be based on the…

  18. The effect of two different visual presentation modalities on the narratives of mainstream grade 3 children.

    PubMed

    Klop, D; Engelbrecht, L

    2013-12-01

    This study investigated whether a dynamic visual presentation method (a soundless animated video presentation) would elicit better narratives than a static visual presentation method (a wordless picture book). Twenty mainstream grade 3 children were randomly assigned to two groups and assessed with one of the visual presentation methods. Narrative performance was measured in terms of micro- and macrostructure variables. Microstructure variables included productivity (total number of words, total number of T-units), syntactic complexity (mean length of T-unit) and lexical diversity measures (number of different words). Macrostructure variables included episodic structure in terms of goal-attempt-outcome (GAO) sequences. Both visual presentation modalities elicited narratives of similar quantity and quality in terms of the micro- and macrostructure variables that were investigated. Animation of picture stimuli did not elicit better narratives than static picture stimuli.

  19. Effects of age, gender, and stimulus presentation period on visual short-term memory.

    PubMed

    Kunimi, Mitsunobu

    2016-01-01

    This study focused on age-related changes in visual short-term memory using visual stimuli that did not allow verbal encoding. Experiment 1 examined the effects of age and the length of the stimulus presentation period on visual short-term memory function. Experiment 2 examined the effects of age, gender, and the length of the stimulus presentation period on visual short-term memory function. The worst memory performance and the largest performance difference between the age groups were observed in the shortest stimulus presentation period conditions. The performance difference between the age groups became smaller as the stimulus presentation period became longer; however, it did not completely disappear. Although gender did not have a significant effect on d' regardless of the presentation period in the young group, a significant gender-based difference was observed for stimulus presentation periods of 500 ms and 1,000 ms in the older group. This study indicates that the decline in visual short-term memory observed in the older group is due to the interaction of several factors.
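
    The sensitivity index d' reported above comes from signal detection theory: the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch with a standard log-linear correction for extreme rates; the counts below are invented for illustration, not taken from the study:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate).

    Adding 0.5 to each cell (log-linear correction) avoids infinite
    z-scores when an observed rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A sensitive observer: many hits, few false alarms
print(d_prime(hits=45, misses=5, false_alarms=5, correct_rejections=45))  # ~2.5

# Chance performance: hit rate equals false-alarm rate, so d' is 0
print(d_prime(hits=25, misses=25, false_alarms=25, correct_rejections=25))
```

    Because d' separates sensitivity from response bias, it is the usual choice when comparing age groups whose willingness to respond "old" may differ.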

  20. Visual Aids for Positive Behavior Support of Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Kidder, Jaimee E.; McDonnell, Andrea P.

    2017-01-01

    Research suggests that many children with ASD are visual learners (Quill, 1997) and may struggle to comprehend expectations presented in a verbal mode only. Visually structured interventions present choices, expectations, tasks, and communication exchanges in a way that is appealing and approachable for visual learners. There are many types of…

  1. Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.

    ERIC Educational Resources Information Center

    Shama, Gilli; Dreyfus, Tommy

    1994-01-01

    Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…

  2. Visual hallucinations in schizophrenia: confusion between imagination and perception.

    PubMed

    Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S

    2008-05-01

    An association between hallucinations and reality-monitoring deficit has been repeatedly observed in patients with schizophrenia. Most data concern auditory/verbal hallucinations. The aim of this study was to investigate the association between visual hallucinations and a specific type of reality-monitoring deficit, namely confusion between imagined and perceived pictures. Forty-one patients with schizophrenia and 43 healthy control participants completed a reality-monitoring task. Thirty-two items were presented either as written words or as pictures. After the presentation phase, participants had to recognize the target words and pictures among distractors, and then remember their mode of presentation. All groups of participants recognized the pictures better than the words, except the patients with visual hallucinations, who presented the opposite pattern. The participants with visual hallucinations made more misattributions to pictures than did the others, and higher ratings of visual hallucinations were correlated with increased tendency to remember words as pictures. No association with auditory hallucinations was revealed. Our data suggest that visual hallucinations are associated with confusion between visual mental images and perception.

  3. A test of the orthographic recoding hypothesis

    NASA Astrophysics Data System (ADS)

    Gaygen, Daniel E.

    2003-04-01

    The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during a visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.

  4. Attentional advantages in video-game experts are not related to perceptual tendencies.

    PubMed

    Wong, Nicole H L; Chang, Dorita H F

    2018-04-03

    Previous studies have suggested that extensive action video gaming may enhance perceptual and attentional capacities. Here, we probed whether attentional differences between video-game experts and non-experts hold when attention is selectively directed at global or local structures. We measured performance on a modified attentional-blink task using hierarchically structured stimuli that consisted of global and local elements. Stimuli carried congruent or incongruent information. In two experiments, we asked observers to direct their attention globally (Experiment 1) or locally (Experiment 2). In each RSVP trial, observers were asked to report the identity of an initial target (T1) and detect the presence or absence of a second target (T2). Experts showed a markedly attenuated attentional blink, as quantified by higher T2 detection sensitivity relative to non-experts, in both global and local tasks. Notably, experts and non-experts were comparably affected by stimulus congruency. We speculate that the observed visuo-attentional advantage is unlikely to be related to mere differences in perceptual tendencies (i.e., greater global precedence), which have previously been associated with a diminished attentional blink.

  5. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  6. Effects of visual familiarity for words on interhemispheric cooperation for lexical processing.

    PubMed

    Yoshizaki, K

    2001-12-01

    The purpose of this study was to examine the effects of the visual familiarity of words on interhemispheric lexical processing. Words and pseudowords were tachistoscopically presented in the left, the right, or both visual fields. Two types of words, Katakana-familiar and Hiragana-familiar, were used as the word stimuli: the former are words more frequently written in Katakana script, and the latter are words written predominantly in Hiragana script. Two conditions were set up in terms of the visual familiarity of a word: in the visually familiar condition, words were presented in their familiar script form, and in the visually unfamiliar condition, in their less familiar script form. Thirty-two right-handed Japanese students were asked to make a lexical decision. Results showed that a bilateral gain, in which performance in the bilateral visual field condition was superior to that in the unilateral condition, was obtained only in the visually familiar condition, not in the visually unfamiliar condition. These results suggest that the visual familiarity of a word influences interhemispheric lexical processing.

  7. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    PubMed

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.

  8. Stereoscopic applications for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2007-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinckerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  9. The Role of Sensory-Motor Information in Object Recognition: Evidence from Category-Specific Visual Agnosia

    ERIC Educational Resources Information Center

    Wolk, D.A.; Coslett, H.B.; Glosser, G.

    2005-01-01

    The role of sensory-motor representations in object recognition was investigated in experiments involving AD, a patient with mild visual agnosia who was impaired in the recognition of visually presented living as compared to non-living entities. AD named visually presented items for which sensory-motor information was available significantly more…

  10. [Diagnostic difficulties in a case of constricted tubular visual field].

    PubMed

    Dogaru, Oana-Mihaela; Rusu, Monica; Hâncu, Dacia; Horvath, Kárin

    2013-01-01

    In this paper we present the clinical case of a 48-year-old female with various symptoms associated with a functional visual disturbance, constricted tubular visual fields, which has persisted for 6 years; extensive clinical and paraclinical ophthalmological investigations ruled out an organic disorder. At present we suspect a diagnosis of hysteria, still uncertain, which has long represented a major challenge in psychology and ophthalmology. The mechanisms and causes of hysteria are still unclear, and it remains a fascinating research topic. Tunnel, spiral or star-shaped visual fields are characteristic findings in patients with hysteria who present with visual disturbance. The question of whether a patient with hysterical visual impairment can or cannot "see" remains unresolved.

  11. Hemisphere division and its effect on selective attention: a generality examination of Lavie's load theory.

    PubMed

    Nishimura, Ritsuko; Yoshizaki, Kazuhito; Kato, Kimiko; Hatta, Takeshi

    2009-01-01

    The present study examined the role of visual presentation mode (unilateral vs. bilateral visual fields) on attentional modulation. We examined whether or not the presentation mode affects the compatibility effect, using a paradigm involving two task-relevant letter arrays. Sixteen participants identified a target letter among task-relevant letters while ignoring either a compatible or incompatible distracter letter that was presented to both hemispheres. The two letter arrays were presented to the visual fields either unilaterally or bilaterally. Results indicated that the compatibility effect was greater in bilateral than in unilateral visual field conditions. Findings support the assumption that the two hemispheres have separate attentional resources.

  12. Differential Age Effects on Spatial and Visual Working Memory

    ERIC Educational Resources Information Center

    Oosterman, Joukje M.; Morel, Sascha; Meijer, Lisette; Buvens, Cleo; Kessels, Roy P. C.; Postma, Albert

    2011-01-01

    The present study was intended to compare age effects on visual and spatial working memory by using two versions of the same task that differed only in presentation mode. The working memory task contained both a simultaneous and a sequential presentation mode condition, reflecting, respectively, visual and spatial working memory processes. Young…

  13. The integration of visual context information in facial emotion recognition in 5- to 15-year-olds.

    PubMed

    Theurel, Anne; Witt, Arnaud; Malsert, Jennifer; Lejeune, Fleur; Fiorentini, Chiara; Barisnikov, Koviljka; Gentaz, Edouard

    2016-10-01

    The current study investigated the role of congruent visual context information in the recognition of facial emotional expression in 190 participants from 5 to 15 years of age. Children performed a matching task that presented pictures with different facial emotional expressions (anger, disgust, happiness, fear, and sadness) in two conditions: with and without a visual context. The results showed that emotions presented with visual context information were recognized more accurately than those presented in the absence of visual context. The context effect remained steady with age but varied according to the emotion presented and the gender of participants. The findings demonstrated for the first time that children from the age of 5 years are able to integrate facial expression and visual context information, and this integration improves facial emotion recognition. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Data Presentation and Visualization (DPV) Interface Control Document

    NASA Technical Reports Server (NTRS)

    Mazzone, Rebecca A.; Conroy, Michael P.

    2015-01-01

    Data Presentation and Visualization (DPV) is a subset of the modeling and simulation (M&S) capabilities at Kennedy Space Center (KSC) that endeavors to address the challenges of how to present and share simulation output for analysts, stakeholders, decision makers, and other interested parties. DPV activities focus on the development and provision of visualization tools to meet the objectives identified above, as well as providing supporting tools and capabilities required to make its visualization products available and accessible across NASA.

  15. Prototype Stop Bar System Evaluation at John F. Kennedy International Airport

    DTIC Science & Technology

    1992-09-01

    [List-of-figures excerpt: Red Stop Bar Visual Presentation; Green Stop Bar Visual Presentation; Photographs of Red and Green Inset Stop Bar Lights] ...to green. This provides pilots with a visual confirmation of the controller’s verbal clearance and is intended to prevent runway incursions. The Port... colocated with the red lights. The visual presentation of an individual stop bar appears as either five red lights (see figure 2), or five green

  16. Predictive and postdictive mechanisms jointly contribute to visual awareness.

    PubMed

    Soga, Ryosuke; Akaishi, Rei; Sakai, Katsuyuki

    2009-09-01

    One of the fundamental issues in visual awareness is how we are able to perceive the scene in front of our eyes on time despite the delay in processing visual information. The prediction theory postulates that our visual system predicts the future to compensate for such delays. On the other hand, the postdiction theory postulates that our visual awareness is inevitably a delayed product. In the present study we used flash-lag paradigms in motion and color domains and examined how the perception of visual information at the time of flash is influenced by prior and subsequent visual events. We found that both types of event additively influence the perception of the present visual image, suggesting that our visual awareness results from joint contribution of predictive and postdictive mechanisms.

  17. Association of auditory-verbal and visual hallucinations with impaired and improved recognition of colored pictures.

    PubMed

    Brébion, Gildas; Stephan-Otto, Christian; Usall, Judith; Huerta-Ramos, Elena; Perez del Olmo, Mireia; Cuevas-Esteban, Jorge; Haro, Josep Maria; Ochoa, Susana

    2015-09-01

    A number of cognitive underpinnings of auditory hallucinations have been established in schizophrenia patients, but few have, as yet, been uncovered for visual hallucinations. In previous research, we unexpectedly observed that auditory hallucinations were associated with poor recognition of color, but not black-and-white (b/w), pictures. In this study, we attempted to replicate and explain this finding. Potential associations with visual hallucinations were explored. B/w and color pictures were presented to 50 schizophrenia patients and 45 healthy individuals under 2 conditions of visual context presentation corresponding to 2 levels of visual encoding complexity. Then, participants had to recognize the target pictures among distractors. Auditory-verbal hallucinations were inversely associated with the recognition of the color pictures presented under the most effortful encoding condition. This association was fully mediated by working-memory span. Visual hallucinations were associated with improved recognition of the color pictures presented under the less effortful condition. Patients suffering from visual hallucinations were not impaired, relative to the healthy participants, in the recognition of these pictures. Decreased working-memory span in patients with auditory-verbal hallucinations might impede the effortful encoding of stimuli. Visual hallucinations might be associated with facilitation in the visual encoding of natural scenes, or with enhanced color perception abilities. (c) 2015 APA, all rights reserved.

  18. Remembering verbally-presented items as pictures: Brain activity underlying visual mental images in schizophrenia patients with visual hallucinations.

    PubMed

    Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Cuevas-Esteban, Jorge; Cambra-Martí, Maria Rosa; Ochoa, Susana; Brébion, Gildas

    2017-09-01

    Previous research suggests that visual hallucinations in schizophrenia consist of mental images mistaken for percepts due to failure of the reality-monitoring processes. However, the neural substrates that underpin such dysfunction are currently unknown. We conducted a brain imaging study to investigate the role of visual mental imagery in visual hallucinations. Twenty-three patients with schizophrenia and 26 healthy participants were administered a reality-monitoring task whilst undergoing an fMRI protocol. At the encoding phase, a mixture of pictures of common items and labels designating common items were presented. On the memory test, participants were requested to remember whether a picture of the item had been presented or merely its label. Visual hallucination scores were associated with a liberal response bias reflecting propensity to erroneously remember pictures of the items that had in fact been presented as words. At encoding, patients with visual hallucinations differentially activated the right fusiform gyrus when processing the words they later remembered as pictures, which suggests the formation of visual mental images. On the memory test, the whole patient group activated the anterior cingulate and medial superior frontal gyrus when falsely remembering pictures. However, no differential activation was observed in patients with visual hallucinations, whereas in the healthy sample, the production of visual mental images at encoding led to greater activation of a fronto-parietal decisional network on the memory test. Visual hallucinations are associated with enhanced visual imagery and possibly with a failure of the reality-monitoring processes that enable discrimination between imagined and perceived events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Visual disability, visual function, and myopia among rural chinese secondary school children: the Xichang Pediatric Refractive Error Study (X-PRES)--report 1.

    PubMed

    Congdon, Nathan; Wang, Yunfei; Song, Yue; Choi, Kai; Zhang, Mingzhi; Zhou, Zhongxia; Xie, Zhenling; Li, Liping; Liu, Xueyu; Sharma, Abhishek; Wu, Bin; Lam, Dennis S C

    2008-07-01

    To evaluate visual acuity, visual function, and prevalence of refractive error among Chinese secondary-school children in a cross-sectional school-based study. Uncorrected, presenting, and best corrected visual acuity, cycloplegic autorefraction with refinement, and self-reported visual function were assessed in a random, cluster sample of rural secondary school students in Xichang, China. Among the 1892 subjects (97.3% of the consenting children, 84.7% of the total sample), mean age was 14.7 ± 0.8 years, 51.2% were female, and 26.4% were wearing glasses. The proportion of children with uncorrected, presenting, and corrected visual disability (≤6/12 in the better eye) was 41.2%, 19.3%, and 0.5%, respectively. Myopia < -0.5, < -2.0, and < -6.0 D in both eyes was present in 62.3%, 31.1%, and 1.9% of the subjects, respectively. Among the children with visual disability when tested without correction, 98.7% of cases were due to refractive error, while only 53.8% (414/770) of these children had appropriate correction. The girls had significantly (P < 0.001) more presenting visual disability and myopia < -2.0 D than did the boys. More myopic refractive error was associated with worse self-reported visual function (ANOVA trend test, P < 0.001). Visual disability in this population was common, highly correctable, and frequently uncorrected. The impact of refractive error on self-reported visual function was significant. Strategies and studies to understand and remove barriers to spectacle wear are needed.

  20. The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.

    PubMed

    Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano

    2017-12-01

    Recent findings have shown that sounds improve visual detection in low vision individuals when auditory and visual stimuli are presented simultaneously and from the same spatial position. The present study aims to investigate the temporal aspects of the audiovisual enhancement effect previously reported. Low vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously or before the visual stimulus (i.e., SOAs 0, 100, 250, 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind the visual stimulus (i.e., SOAs 0, ± 250, ± 400 ms). The visual detection enhancement was reduced in magnitude and limited only to the synchronous condition and to the condition in which the sound stimulus was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study seems to suggest that audiovisual interaction in low vision individuals is highly modulated by top-down mechanisms.
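
    Yes/no detection data of the kind collected in this study are commonly summarized with signal-detection measures such as d′. The abstract reports only a detection enhancement, not a d′ analysis, so the sketch below is an illustrative aside with invented trial counts, not the study's method:

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from yes/no detection counts.

    A 0.5 correction is added to each cell so that perfect hit or
    false-alarm rates do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts (not data from the study): detection sensitivity
# is higher when a task-irrelevant sound accompanies the visual target.
print(round(dprime(70, 30, 20, 80), 2))  # sound + visual
print(round(dprime(55, 45, 20, 80), 2))  # visual alone
```

    A criterion measure (e.g., c) could be computed the same way from the two z-scores, which is one reason criterion-free procedures such as the one used in entry 6 are preferred for threshold estimation.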

  1. Examining competing hypotheses for the effects of diagrams on recall for text.

    PubMed

    Ortegren, Francesca R; Serra, Michael J; England, Benjamin D

    2015-01-01

    Supplementing text-based learning materials with diagrams typically increases students' free recall and cued recall of the presented information. In the present experiments, we examined competing hypotheses for why this occurs. More specifically, although diagrams are visual, they also serve to repeat information from the text they accompany. Both visual presentation and repetition are known to aid students' recall of information. To examine to what extent diagrams aid recall because they are visual or repetitive (or both), we had college students in two experiments (n = 320) read a science text about how lightning storms develop before completing free-recall and cued-recall tests over the presented information. Between groups, we manipulated the format and repetition of target pieces of information in the study materials using a 2 (visual presentation of target information: diagrams present vs. diagrams absent) × 2 (repetition of target information: present vs. absent) between-participants factorial design. Repetition increased both the free recall and cued recall of target information, and this occurred regardless of whether that repetition was in the form of text or a diagram. In contrast, the visual presentation of information never aided free recall. Furthermore, visual presentation alone did not significantly aid cued recall when participants studied the materials once before the test (Experiment 1) but did when they studied the materials twice (Experiment 2). Taken together, the results of the present experiments demonstrate the important role of repetition (i.e., that diagrams repeat information from the text) over the visual nature of diagrams in producing the benefits of diagrams for recall.

  2. Remembering the Specific Visual Details of Presented Objects: Neuroimaging Evidence for Effects of Emotion

    ERIC Educational Resources Information Center

    Kensinger, Elizabeth A.; Schacter, Daniel L.

    2007-01-01

    Memories can be retrieved with varied amounts of visual detail, and the emotional content of information can influence the likelihood that visual detail is remembered. In the present fMRI experiment (conducted with 19 adults scanned using a 3T magnet), we examined the neural processes that correspond with recognition of the visual details of…

  3. "The Mask Who Wasn't There": Visual Masking Effect with the Perceptual Absence of the Mask

    ERIC Educational Resources Information Center

    Rey, Amandine Eve; Riou, Benoit; Muller, Dominique; Dabic, Stéphanie; Versace, Rémy

    2015-01-01

    Does a visual mask need to be perceptually present to disrupt processing? In the present research, we proposed to explore the link between perceptual and memory mechanisms by demonstrating that a typical sensory phenomenon (visual masking) can be replicated at a memory level. Experiment 1 highlighted an interference effect of a visual mask on the…

  4. Information Visualization and Proposing New Interface for Movie Retrieval System (IMDB)

    ERIC Educational Resources Information Center

    Etemadpour, Ronak; Masood, Mona; Belaton, Bahari

    2010-01-01

    This research studies the development of a new prototype of visualization in support of movie retrieval. The goal of information visualization is unveiling of large amounts of data or abstract data set using visual presentation. With this knowledge the main goal is to develop a 2D presentation of information on movies from the IMDB (Internet Movie…

  5. The Crossmodal Facilitation of Visual Object Representations by Sound: Evidence from the Backward Masking Paradigm

    ERIC Educational Resources Information Center

    Chen, Yi-Chuan; Spence, Charles

    2011-01-01

    We report a series of experiments designed to demonstrate that the presentation of a sound can facilitate the identification of a concomitantly presented visual target letter in the backward masking paradigm. Two visual letters, serving as the target and its mask, were presented successively at various interstimulus intervals (ISIs). The results…

  6. Auditory enhancement of visual perception at threshold depends on visual abilities.

    PubMed

    Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène

    2011-06-17

    Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events, is a long-standing debate. Here we revisit this question by testing the influence of auditory stimuli on visual detection threshold, in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient to improve perceptual performance when an isolated modality is deficient. Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Different Strokes for Different Folks: Visual Presentation Design between Disciplines

    PubMed Central

    Gomez, Steven R.; Jianu, Radu; Ziemkiewicz, Caroline; Guo, Hua; Laidlaw, David H.

    2015-01-01

    We present an ethnographic study of design differences in visual presentations between academic disciplines. Characterizing design conventions between users and data domains is an important step in developing hypotheses, tools, and design guidelines for information visualization. In this paper, disciplines are compared at a coarse scale between four groups of fields: social, natural, and formal sciences; and the humanities. Two commonplace presentation types were analyzed: electronic slideshows and whiteboard “chalk talks”. We found design differences in slideshows using two methods – coding and comparing manually-selected features, like charts and diagrams, and an image-based analysis using PCA called eigenslides. In whiteboard talks with controlled topics, we observed design behaviors, including using representations and formalisms from a participant’s own discipline, that suggest authors might benefit from novel assistive tools for designing presentations. Based on these findings, we discuss opportunities for visualization ethnography and human-centered authoring tools for visual information. PMID:26357149
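
    The "eigenslides" analysis mentioned above treats each slide image as a pixel vector and applies PCA across the corpus, in the spirit of eigenfaces. A minimal sketch of that idea, with a random array standing in for rasterized slides (the dimensions and data are invented, not the study's corpus):

```python
import numpy as np

# Hypothetical stand-in corpus: 40 "slides", each rasterized to a
# 32x24 grayscale image and flattened to a 768-dimensional vector.
rng = np.random.default_rng(0)
slides = rng.random((40, 32 * 24))

# Center the pixel vectors, then take the SVD; the right singular
# vectors are the principal components ("eigenslides" when reshaped
# back to 32x24 images).
mean_slide = slides.mean(axis=0)
centered = slides - mean_slide
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

eigenslides = Vt           # each row is one eigenslide
scores = centered @ Vt.T   # coordinates of each slide in eigenslide space

# Fraction of variance captured by each component; slideshows from
# different disciplines could then be compared by their scores.
explained = S**2 / (S**2).sum()
print(explained[:3].round(3))
```

    Projecting each deck's slides into this shared component space gives a low-dimensional summary of visual design that can be compared across disciplines.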

  8. Different Strokes for Different Folks: Visual Presentation Design between Disciplines.

    PubMed

    Gomez, S R; Jianu, R; Ziemkiewicz, C; Guo, Hua; Laidlaw, D H

    2012-12-01

    We present an ethnographic study of design differences in visual presentations between academic disciplines. Characterizing design conventions between users and data domains is an important step in developing hypotheses, tools, and design guidelines for information visualization. In this paper, disciplines are compared at a coarse scale between four groups of fields: social, natural, and formal sciences; and the humanities. Two commonplace presentation types were analyzed: electronic slideshows and whiteboard "chalk talks". We found design differences in slideshows using two methods - coding and comparing manually-selected features, like charts and diagrams, and an image-based analysis using PCA called eigenslides. In whiteboard talks with controlled topics, we observed design behaviors, including using representations and formalisms from a participant's own discipline, that suggest authors might benefit from novel assistive tools for designing presentations. Based on these findings, we discuss opportunities for visualization ethnography and human-centered authoring tools for visual information.

  9. Risk Factors for Visual Impairment in an Uninsured Population and the Impact of the Affordable Care Act.

    PubMed

    Guo, Weixia; Woodward, Maria A; Heisler, Michele; Blachley, Taylor; Corneail, Leah; Cederna, Jean; Kaplan, Ariane D; Newman Casey, Paula Anne

    2016-01-01

    To assess risk factors for visual impairment in a high-risk population: people without medical insurance. Secondarily, we assessed risk factors for remaining uninsured after implementation of the Affordable Care Act (ACA) and evaluated whether the ACA changed demand for local safety net ophthalmology clinic services one year after its implementation. In a retrospective cohort study of patients who attended a community-academic partnership free ophthalmology clinic in southeastern Michigan between September 2012 and March 2015, we assessed the prevalence of presenting with visual impairment and the most common causes of presenting with visual impairment, and used logistic regression to assess socio-demographic risk factors for visual impairment. We assessed the initial impact of the ACA on clinic utilization. We also analyzed risk factors for remaining uninsured one year after implementation of the ACA private insurance marketplace and Medicaid expansion in the state of Michigan. Among 335 patients, one-fifth (22%) presented with visual impairment; refractive error was the leading cause of presenting with visual impairment. Unemployment was the single significant risk factor for presenting with visual impairment after adjusting for multiple confounding factors (OR = 3.05, 95% CI 1.19-7.87, p=0.01). There was no difference in the proportion of visual impairment or type of vision-threatening disease between the insured and uninsured (p=0.26). Seventy-six percent of patients remained uninsured one year after ACA implementation. Patients who were white, spoke English as a first language, and were US citizens were more likely to gain insurance coverage through the ACA in our population (p ≤ 0.01). There was a non-significant decline in the mean number of patients treated per clinic (52 to 43) before and after ACA implementation (p=0.69). Refractive error was a leading cause of presenting with visual impairment in this vulnerable population, and being unemployed significantly increased the risk of presenting with visual impairment. The ACA did not significantly reduce the need for our free ophthalmology services. It is critically important to continue to support safety net specialty care initiatives and policy change to provide care for those in need.
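
    An odds ratio and confidence interval like the one reported above (OR = 3.05, 95% CI 1.19-7.87) come from exponentiating a logistic-regression coefficient and its standard-error bounds. A small sketch of that conversion; the coefficient and standard error below are hypothetical values chosen only to land near the reported numbers, not taken from the study's fitted model:

```python
import math

# Hypothetical logistic-regression output for the "unemployed"
# predictor: coefficient (log-odds scale) and its standard error.
# Illustrative values only, not the study's model.
beta, se = 1.115, 0.482

odds_ratio = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)   # lower 95% confidence bound
ci_high = math.exp(beta + 1.96 * se)  # upper 95% confidence bound

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```

    In practice the coefficient and standard error would be read off the fitted regression output; the exponentiation step is the same regardless of the software used.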

  10. Risk Factors for Visual Impairment in an Uninsured Population and the Impact of the Affordable Care Act

    PubMed Central

    Guo, Weixia; Woodward, Maria A; Heisler, Michele; Blachley, Taylor; Corneail, Leah; Cederna, Jean; Kaplan, Ariane D; Newman Casey, Paula Anne

    2017-01-01

    Purpose To assess risk factors for visual impairment in a high-risk population: people without medical insurance. Secondarily, we assessed risk factors for remaining uninsured after implementation of the Affordable Care Act (ACA) and evaluated whether the ACA changed demand for local safety net ophthalmology clinic services one year after its implementation. Methods In a retrospective cohort study of patients who attended a community-academic partnership free ophthalmology clinic in southeastern Michigan between September 2012 and March 2015, we assessed the prevalence of presenting with visual impairment and the most common causes of presenting with visual impairment, and used logistic regression to assess socio-demographic risk factors for visual impairment. We assessed the initial impact of the ACA on clinic utilization. We also analyzed risk factors for remaining uninsured one year after implementation of the ACA private insurance marketplace and Medicaid expansion in the state of Michigan. Results Among 335 patients, one-fifth (22%) presented with visual impairment; refractive error was the leading cause of presenting with visual impairment. Unemployment was the single significant risk factor for presenting with visual impairment after adjusting for multiple confounding factors (OR = 3.05, 95% CI 1.19–7.87, p=0.01). There was no difference in the proportion of visual impairment or type of vision-threatening disease between the insured and uninsured (p=0.26). Seventy-six percent of patients remained uninsured one year after ACA implementation. Patients who were white, spoke English as a first language, and were US citizens were more likely to gain insurance coverage through the ACA in our population (p ≤ 0.01). There was a non-significant decline in the mean number of patients treated per clinic (52 to 43) before and after ACA implementation (p=0.69).
Conclusion Refractive error was a leading cause of presenting with visual impairment in this vulnerable population, and being unemployed significantly increased the risk of presenting with visual impairment. The ACA did not significantly reduce the need for our free ophthalmology services. It is critically important to continue to support safety net specialty care initiatives and policy change to provide care for those in need. PMID:28593201

  11. Visualization of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e. real-time, interactive, or batch) and the source data have their effect on each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components including specific visualization techniques and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.

  12. Social media interruption affects the acquisition of visually, not aurally, acquired information during a pathophysiology lecture.

    PubMed

    Marone, Jane R; Thakkar, Shivam C; Suliman, Neveen; O'Neill, Shannon I; Doubleday, Alison F

    2018-06-01

    Poor academic performance from extensive social media usage appears to be due to students' inability to multitask between distractions and academic work. However, the degree to which visually distracted students can acquire lecture information presented aurally is unknown. This study examined the ability of students visually distracted by social media to acquire information presented during a voice-over PowerPoint lecture, and to compare performance on examination questions derived from information presented aurally vs. that presented visually. Students (n = 20) listened to a 42-min cardiovascular pathophysiology lecture containing embedded cartoons while taking notes. The experimental group (n = 10) was visually, but not aurally, distracted by social media during times when cartoon information was presented, ~40% of total lecture time. Overall performance among distracted students on a follow-up, open-note quiz was 30% poorer than that for controls (P < 0.001). When the modality of presentation (visual vs. aural) was compared, performance decreased on examination questions from information presented visually. However, performance on questions from information presented aurally was similar to that of controls. Our findings suggest the ability to acquire information during lecture may vary, depending on the degree of competition between the modalities of the distraction and the lecture presentation. Within the context of current literature, our findings also suggest that timing of the distraction relative to delivery of material examined affects performance more than total distraction time. Therefore, when delivering lectures, instructors should incorporate organizational cues and active learning strategies that assist students in maintaining focus and acquiring relevant information.

  13. EventSlider User Manual

    DTIC Science & Technology

    2016-09-01

    EventSlider is a Windows Presentation Foundation (WPF) control developed using the .NET framework in Microsoft Visual Studio. As a WPF control, it can be used in any WPF application as a graphical visual element. The purpose of the control is to visually display time-related events as vertical lines on a...available on the control. SUBJECT TERMS: Windows Presentation Foundation, WPF, control, C#, .NET framework, Microsoft Visual Studio

  14. Stereoscopic display of 3D models for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2006-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  15. Long-term visual outcomes of craniopharyngioma in children.

    PubMed

    Wan, Michael J; Zapotocky, Michal; Bouffet, Eric; Bartels, Ute; Kulkarni, Abhaya V; Drake, James M

    2018-05-01

    Visual function is a critical factor in the diagnosis, monitoring, and prognosis of craniopharyngiomas in children. The aim of this study was to report the long-term visual outcomes in a cohort of pediatric patients with craniopharyngioma. The study design is a retrospective chart review of craniopharyngioma patients from a single tertiary-care pediatric hospital. Fifty-nine patients were included in the study. Mean age at presentation was 9.4 years old (range 0.7-18.0 years old). The most common presenting features were headache (76%), nausea/vomiting (32%), and vision loss (31%). Median follow-up was 5.2 years (range 1.0-17.2 years). During follow-up, visual decline occurred in 17 patients (29%). On Kaplan-Meier survival analysis, 47% of the cases of visual decline occurred within 4 months of diagnosis, with the remaining cases occurring sporadically during follow-up (up to 8 years after diagnosis). In terms of risk factors, younger age at diagnosis, optic nerve edema at presentation, and tumor recurrence were found to have statistically significant associations with visual decline. At final follow-up, 58% of the patients had visual impairment in at least one eye but only 10% were legally blind in both eyes (visual acuity 20/200 or worse or < 20° of visual field). Vision loss is a common presenting symptom of craniopharyngiomas in children. After diagnosis, monitoring vision is important as about 30% of patients will experience significant visual decline. Long-term vision loss occurs in the majority of patients, but severe binocular visual impairment is uncommon.
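
    The Kaplan-Meier analysis above estimates, at each observed event time, the probability that a patient's vision has not yet declined, while correctly handling patients censored at their last follow-up. A minimal product-limit sketch; the follow-up times and outcomes below are invented for illustration, not data from the study:

```python
from itertools import groupby

def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times:  follow-up time for each patient (any consistent unit)
    events: 1 if the event (here, visual decline) occurred, 0 if censored
    Returns (time, estimated probability of remaining decline-free)
    at each time where an event occurred.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    for t, group in groupby(data, key=lambda pair: pair[0]):
        group = list(group)
        n_events = sum(e for _, e in group)
        if n_events:
            survival *= 1 - n_events / at_risk
            curve.append((t, survival))
        at_risk -= len(group)  # events and censored both leave the risk set
    return curve

# Hypothetical cohort of 8 patients, follow-up in months.
times = [3, 4, 4, 12, 24, 48, 60, 96]
events = [1, 1, 0, 1, 0, 1, 0, 0]
print(kaplan_meier(times, events))
```

    Libraries such as lifelines provide the same estimator with confidence intervals and plotting, but the stepwise product above is the whole of the core computation.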

  16. Effects of using visualization and animation in presentations to communities about forest succession and fire behavior potential

    Treesearch

    Jane Kapler Smith; Donald E. Zimmerman; Carol Akerelrea; Garrett O'Keefe

    2008-01-01

    Natural resource managers use a variety of computer-mediated presentation methods to communicate management practices to the public. We explored the effects of using the Stand Visualization System to visualize and animate predictions from the Forest Vegetation Simulator-Fire and Fuels Extension in presentations explaining forest succession (forest growth and change...

  17. Visual communication in presentation on physics

    NASA Astrophysics Data System (ADS)

    Grebenyuk, Konstantin A.

    2005-06-01

    It is essential that our audience be attentive during a lecture, report, or other presentation on physics. We therefore have to attend to both speech and visual communication with the audience. This paper singles out three important aspects of the successful use of visual aids. The main idea is that physicists could appreciably increase the efficiency of their presentations by applying these simple principles of presentation art. The recommendations offered result from a review of the specialist literature, the author's observations, and experience of communication with skilled masters of presentation.

  18. Clinical Profile and Visual Outcome of Ocular Bartonellosis in Malaysia

    PubMed Central

    Tan, Chai Lee; Fhun, Lai Chan; Abdul Gani, Nor Hasnida; Muhammed, Julieana; Tuan Jaafar, Tengku Norina

    2017-01-01

    Background. Ocular bartonellosis can present in various ways, with variable visual outcome. There is limited data on ocular bartonellosis in Malaysia. Objective. We aim to describe the clinical presentation and visual outcome of ocular bartonellosis in Malaysia. Materials and Methods. This was a retrospective review of patients treated for ocular bartonellosis in two ophthalmology centers in Malaysia between January 2013 and December 2015. The diagnosis was based on clinical features, supported by a positive Bartonella spp. serology. Results. Of the 19 patients in our series, females were predominant (63.2%). The mean age was 29.3 years. The majority (63.2%) had unilateral involvement. Five patients (26.3%) had a history of contact with cats. Neuroretinitis was the most common presentation (62.5%). Azithromycin was the antibiotic of choice (42.1%). Concurrent systemic corticosteroids were used in approximately 60% of cases. The presenting visual acuity was worse than 6/18 in approximately 60% of eyes; on final review, 76.9% of eyes had a visual acuity better than 6/18. Conclusion. Ocular bartonellosis tends to present with neuroretinitis. Azithromycin is a viable option for treatment. Systemic corticosteroids may be considered in those with poor visual acuity on presentation. PMID:28265290

  19. Explaining the Colavita visual dominance effect.

    PubMed

    Spence, Charles

    2009-01-01

    The last couple of years have seen a resurgence of interest in the Colavita visual dominance effect. In the basic experimental paradigm, a random series of auditory, visual, and audiovisual stimuli are presented to participants who are instructed to make one response whenever they see a visual target and another response whenever they hear an auditory target. Many studies have now shown that participants sometimes fail to respond to auditory targets when they are presented at the same time as visual targets (i.e., on the bimodal trials), despite the fact that they have no problems in responding to the auditory and visual stimuli when they are presented individually. The existence of the Colavita visual dominance effect provides an intriguing contrast with the results of the many other recent studies showing the superiority of multisensory (over unisensory) information processing in humans. Various accounts have been put forward over the years to try to explain the effect, including the suggestion that it reflects nothing more than an underlying bias to attend to the visual modality. Here, the empirical literature on the Colavita visual dominance effect is reviewed and some of the key factors modulating the effect are highlighted. The available research has now provided evidence against all previous accounts of the Colavita effect. A novel explanation of the Colavita effect is therefore put forward here, one that is based on the latest findings highlighting the asymmetrical effect that auditory and visual stimuli exert on people's responses to stimuli presented in the other modality.

  20. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing

    PubMed Central

    Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias

    2018-01-01

    Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention, with a facilitation observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., a unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants' accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries, with search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants' performance in the congruent condition was modulated by their tone localisation accuracy. The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637

  1. Independent sources of anisotropy in visual orientation representation: a visual and a cognitive oblique effect.

    PubMed

    Balikou, Panagiota; Gourtzelidis, Pavlos; Mantas, Asimakis; Moutoussis, Konstantinos; Evdokimidis, Ioannis; Smyrnis, Nikolaos

    2015-11-01

    The representation of visual orientation is more accurate for cardinal orientations compared to oblique, and this anisotropy has been hypothesized to reflect a low-level visual process (visual, "class 1" oblique effect). The reproduction of directional and orientation information also leads to a mean error away from cardinal orientations or directions. This anisotropy has been hypothesized to reflect a high-level cognitive process of space categorization (cognitive, "class 2" oblique effect). This space categorization process would be more prominent when the visual representation of orientation degrades, such as in the case of working memory with increasing cognitive load, leading to increasing magnitude of the "class 2" oblique effect, while the "class 1" oblique effect would remain unchanged. Two experiments were performed in which an array of orientation stimuli (1-4 items) was presented and then subjects had to realign a probe stimulus within the previously presented array. In the first experiment, the delay between stimulus presentation and probe varied, while in the second experiment, the stimulus presentation time varied. The variable error was larger for oblique compared to cardinal orientations in both experiments, reproducing the visual "class 1" oblique effect. The mean error also reproduced the tendency away from cardinal and toward the oblique orientations in both experiments (cognitive "class 2" oblique effect). The accuracy of the reproduced orientation degraded (increasing variable error) and the cognitive "class 2" oblique effect increased with increasing memory load (number of items) in both experiments and with presentation time in the second experiment. In contrast, the visual "class 1" oblique effect was not significantly modulated by any one of these experimental factors. These results confirmed the theoretical predictions for the two anisotropies in visual orientation reproduction and provided support for models proposing the categorization of orientation in visual working memory.

  2. Ultra-Rapid serial visual presentation reveals dynamics of feedforward and feedback processes in the ventral visual pathway.

    PubMed

    Mohsenzadeh, Yalda; Qin, Sheng; Cichy, Radoslaw M; Pantazis, Dimitrios

    2018-06-21

    Human visual recognition activates a dense network of overlapping feedforward and recurrent neuronal processes, making it hard to disentangle processing in the feedforward from the feedback direction. Here, we used ultra-rapid serial visual presentation to suppress sustained activity that blurs the boundaries of processing steps, enabling us to resolve two distinct stages of processing with MEG multivariate pattern classification. The first processing stage was the rapid activation cascade of the bottom-up sweep, which terminated early as visual stimuli were presented at progressively faster rates. The second stage was the emergence of categorical information with peak latency that shifted later in time with progressively faster stimulus presentations, indexing time-consuming recurrent processing. Using MEG-fMRI fusion with representational similarity, we localized recurrent signals in early visual cortex. Together, our findings segregated an initial bottom-up sweep from subsequent feedback processing, and revealed the neural signature of increased recurrent processing demands for challenging viewing conditions. © 2018, Mohsenzadeh et al.

  3. Clinical presentation and visual status of retinitis pigmentosa patients: a multicenter study in southwestern Nigeria.

    PubMed

    Onakpoya, Oluwatoyin Helen; Adeoti, Caroline Olufunlayo; Oluleye, Tunji Sunday; Ajayi, Iyiade Adeseye; Majengbasan, Timothy; Olorundare, Olayemi Kolawole

    2016-01-01

    To review the visual status and clinical presentation of patients with retinitis pigmentosa (RP). A multicenter, retrospective, analytical review was conducted of the visual status and clinical characteristics of patients with RP at first presentation from January 2007 to December 2011. The main outcome measure was the World Health Organization's visual status classification in relation to sex and age at presentation. Data were analyzed with SPSS (version 15), and statistical significance was assumed at P<0.05. One hundred and ninety-two eyes of 96 patients with a mean age of 39.08±18.5 years and a mode of 25 years constituted the study population; 55 (57.3%) were males and 41 (42.7%) females. Loss of vision 67 (69.8%) and night blindness 56 (58.3%) were the leading symptoms. Twenty-one (21.9%) patients had a positive family history, with RP present in their siblings 15 (71.4%), grandparents 11 (52.3%), and parents 4 (19.4%). Forty (41.7%) were blind at presentation and 23 (24%) were visually impaired. Blindness in six (15%) patients was secondary to glaucoma. Retinal vascular narrowing and retinal pigmentary changes of varying severity were present in all patients. Thirty-five (36.5%) had maculopathy, 36 (37.5%) refractive error, 19 (20%) lenticular opacities, and eleven (11.5%) glaucoma. RP was typical in 85 patients (88.5%). Older patients had higher rates of blindness at presentation (P=0.005); blindness and visual impairment rates at presentation were higher in males than in females (P=0.029). Clinical presentation with advanced disease, the higher blindness rate in older patients, the sex-related difference in blindness/visual impairment rates, and the high rate of glaucoma blindness in RP patients require urgent attention in southwestern Nigeria.

  4. Visualization rhetoric: framing effects in narrative visualization.

    PubMed

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

    Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels (the data, visual representation, textual annotations, and interactivity) and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation. © 2011 IEEE

  5. Effects of audio-visual presentation of target words in word translation training

    NASA Astrophysics Data System (ADS)

    Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko

    2004-05-01

    Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language, and the appropriate meaning in another language has to be chosen between two choices. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasted in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV). Identification accuracy of those words produced by two talkers was also assessed. During the pretest, accuracy for A stimuli was lowest, implying that insufficient translation ability and listening ability interact with each other when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translate the words depending on visual information only. The effect of translation training using AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]

  6. Bringing color to emotion: The influence of color on attentional bias to briefly presented emotional images.

    PubMed

    Bekhtereva, Valeria; Müller, Matthias M

    2017-10-01

    Is color a critical feature in emotional content extraction and involuntary attentional orienting toward affective stimuli? Here we used briefly presented emotional distractors to investigate the extent to which color information can influence the time course of attentional bias in early visual cortex. While participants performed a demanding visual foreground task, complex unpleasant and neutral background images were displayed in color or grayscale format for a short period of 133 ms and were immediately masked. Such a short presentation poses a challenge for visual processing. In the visual detection task, participants attended to flickering squares that elicited the steady-state visual evoked potential (SSVEP), allowing us to analyze the temporal dynamics of the competition for processing resources in early visual cortex. Concurrently we measured the visual event-related potentials (ERPs) evoked by the unpleasant and neutral background scenes. The results showed (a) that the distraction effect was greater with color than with grayscale images and (b) that it lasted longer with colored unpleasant distractor images. Furthermore, classical and mass-univariate ERP analyses indicated that, when presented in color, emotional scenes elicited more pronounced early negativities (N1-EPN) relative to neutral scenes, than when the scenes were presented in grayscale. Consistent with neural data, unpleasant scenes were rated as being more emotionally negative and received slightly higher arousal values when they were shown in color than when they were presented in grayscale. Taken together, these findings provide evidence for the modulatory role of picture color on a cascade of coordinated perceptual processes: by facilitating the higher-level extraction of emotional content, color influences the duration of the attentional bias to briefly presented affective scenes in lower-tier visual areas.

  7. Top-down preparation modulates visual categorization but not subjective awareness of objects presented in natural backgrounds.

    PubMed

    Koivisto, Mika; Kahila, Ella

    2017-04-01

    Top-down processes are widely assumed to be essential in visual awareness, the subjective experience of seeing. However, previous studies have not tried to separate directly the roles of different types of top-down influences in visual awareness. We studied the effects of top-down preparation and object substitution masking (OSM) on visual awareness during categorization of objects presented in natural scene backgrounds. The results showed that preparation facilitated categorization but did not influence visual awareness. OSM reduced visual awareness and impaired categorization. The dissociations between the effects of preparation and OSM on visual awareness and on categorization imply that they act at different stages of cognitive processing. We propose that preparation acts at the top of the visual hierarchy, whereas OSM interferes with processes occurring at lower levels of the hierarchy. These lower-level processes play an essential role in visual awareness. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.

    PubMed

    Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B

    2003-04-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
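
    The visual-enhancement measure above is conventionally computed as the gain from adding visual information, normalized by the maximum possible gain over the auditory-only score. A minimal sketch under that assumption (the relative-gain formula R_a = (AV - A) / (1 - A) over proportion-correct scores is the standard formulation and is not spelled out in this abstract; the function name is hypothetical):

```python
def visual_enhancement(av: float, a: float) -> float:
    """Relative gain R_a of the audiovisual score (av) over the
    auditory-only score (a), normalized by the maximum possible
    improvement (1 - a). Both scores are proportions correct in [0, 1]."""
    if not (0.0 <= a < 1.0 and 0.0 <= av <= 1.0):
        raise ValueError("scores must be proportions correct, with a < 1")
    return (av - a) / (1.0 - a)

# Auditory-only 60% correct, audiovisual 80% correct:
# R_a = (0.80 - 0.60) / (1 - 0.60) = 0.5
gain = visual_enhancement(0.80, 0.60)
```

    On this reading, the lipread information recovered half of the performance that was unavailable under auditory-only presentation.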

  9. Talker and Lexical Effects on Audiovisual Word Recognition by Adults With Cochlear Implants

    PubMed Central

    Kaiser, Adam R.; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B.

    2012-01-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Ra, was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech. PMID:14700380

  10. Auditory Emotional Cues Enhance Visual Perception

    ERIC Educational Resources Information Center

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  11. The Effects of Verbal Elaboration and Visual Elaboration on Student Learning.

    ERIC Educational Resources Information Center

    Chanlin, Lih-Juan

    1997-01-01

    This study examined: (1) the effectiveness of integrating verbal elaboration (metaphors) and different visual presentation strategies (still and animated graphics) in learning biotechnology concepts; (2) whether the use of verbal elaboration with different visual presentation strategies facilitates cognitive processes; and (3) how students employ…

  12. Childhood visual impairment: normal and abnormal visual function in the context of developmental disability.

    PubMed

    Nyong'o, Omondi L; Del Monte, Monte A

    2008-12-01

    Abnormal or failed development of vision in children may give rise to varying degrees of visual impairment and disability. Disease- and organ-specific mechanisms by which visual impairments arise are presented, along with an explanation of established pathologic processes and correlative, up-to-date clinical and social research in the fields of pediatrics, ophthalmology, and rehabilitation medicine. The goal of this article is to enhance the practitioner's recognition and care for children with developmental disability associated with visual impairment.

  13. Imagery and Visual Literacy: Selected Readings from the Annual Conference of the International Visual Literacy Association (26th, Tempe, Arizona, October 12-16, 1994).

    ERIC Educational Resources Information Center

    Beauchamp, Darrell G.; And Others

    This document contains selected conference papers all relating to visual literacy. The topics include: process issues in visual literacy; interpreting visual statements; what teachers need to know; multimedia presentations; distance education materials for correctional use; visual culture; audio-visual interaction in desktop multimedia; the…

  14. The effect of linguistic and visual salience in visual world studies.

    PubMed

    Cavicchio, Federica; Melcher, David; Poesio, Massimo

    2014-01-01

    Research using the visual world paradigm has demonstrated that visual input has a rapid effect on language interpretation tasks such as reference resolution and, conversely, that linguistic material, including verbs, prepositions, and adjectives, can influence fixations to potential referents. More recent research has started to explore how this effect of linguistic input on fixations is mediated by properties of the visual stimulus, in particular by visual salience. In the present study we further explored the role of salience in the visual world paradigm by manipulating language-driven salience and visual salience. Specifically, we tested how linguistic salience (i.e., the greater accessibility of linguistically introduced entities) and visual salience (bottom-up, attention-grabbing visual aspects) interact. We recorded participants' eye-movements during a MapTask, asking them to look from landmark to landmark displayed upon a map while hearing direction-giving instructions. The landmarks were of comparable size and color, except in the Visual Salience condition, in which one landmark had been made more visually salient. In the Linguistic Salience conditions, the instructions included references to an object not on the map. Response times and fixations were recorded. Visual Salience influenced the time course of fixations at both the beginning and the end of the trial but did not show a significant effect on response times. Linguistic Salience reduced response times and increased fixations to landmarks when they were associated with a Linguistically Salient entity that was not itself present on the map. When the target landmark was both visually and linguistically salient, it was fixated longer, but fixations were quicker when the target item was linguistically salient only. Our results suggest that the two types of salience work in parallel and that linguistic salience affects fixations even when the entity is not visually present.

  15. Electrophysiological evidence for Audio-visuo-lingual speech integration.

    PubMed

    Treille, Avril; Vilain, Coriandre; Schwartz, Jean-Luc; Hueber, Thomas; Sato, Marc

    2018-01-31

    Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual (lipread) speech cues relies on their joint probability and prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences of audio-visual speech integration between the unusual audio-visuo-lingual and the classical audio-visuo-labial modalities. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, with lingual and facial movements previously recorded by an ultrasound imaging system and a video camera. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both audio-visuo-lingual and audio-visuo-labial conditions compared to the sum of unimodal conditions. These results argue against the view that auditory and visual speech cues solely integrate based on prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Progressive posterior cortical dysfunction

    PubMed Central

    Porto, Fábio Henrique de Gobbi; Machado, Gislaine Cristina Lopes; Morillo, Lilian Schafirovits; Brucki, Sonia Maria Dozzi

    2010-01-01

    Progressive posterior cortical dysfunction (PPCD) is an insidious syndrome characterized by prominent disorders of higher visual processing. It affects both dorsal (occipito-parietal) and ventral (occipito-temporal) pathways, disturbing visuospatial processing and visual recognition, respectively. We report a case of a 67-year-old woman presenting with progressive impairment of visual functions. Neurologic examination showed agraphia, alexia, hemispatial neglect (left side visual extinction), complete Balint’s syndrome and visual agnosia. Magnetic resonance imaging showed circumscribed atrophy involving the bilateral parieto-occipital regions, slightly more predominant to the right. Our aim was to describe a case of this syndrome, to present a video showing the main abnormalities, and to discuss this unusual presentation of dementia. We believe this article can contribute by improving the recognition of PPCD. PMID:29213665

  17. Dual Coding in Children.

    ERIC Educational Resources Information Center

    Burton, John K.; Wildman, Terry M.

    The purpose of this study was to test the applicability of the dual coding hypothesis to children's recall performance. The hypothesis predicts that visual interference will have a small effect on the recall of visually presented words or pictures, but that acoustic interference will cause a decline in recall of visually presented words and…

  18. Spatial Working Memory Effects in Early Visual Cortex

    ERIC Educational Resources Information Center

    Munneke, Jaap; Heslenfeld, Dirk J.; Theeuwes, Jan

    2010-01-01

    The present study investigated how spatial working memory recruits early visual cortex. Participants were required to maintain a location in working memory while changes in blood oxygen level dependent (BOLD) signals were measured during the retention interval in which no visual stimulation was present. We show working memory effects during the…

  19. Automating Geospatial Visualizations with Smart Default Renderers for Data Exploration Web Applications

    NASA Astrophysics Data System (ADS)

    Ekenes, K.

    2017-12-01

This presentation will outline the process of creating a web application for exploring large amounts of scientific geospatial data using modern automated cartographic techniques. Traditional cartographic methods, including data classification, may inadvertently hide geospatial and statistical patterns in the underlying data. This presentation demonstrates how to use smart web APIs that analyze the data as it loads and suggest the most appropriate visualizations based on its statistics. Because only a few visualizations suit any given dataset well, and because many users never move beyond default values, it is imperative to provide smart default color schemes tailored to the dataset rather than static defaults. The Smart APIs provide multiple functions for automating visualizations, along with UI elements that let users create more than one visualization for a dataset, since there is no single best way to visualize it. Because bivariate and multivariate visualizations are particularly difficult to create effectively, this automated approach takes the guesswork out of the process and offers several ways to generate multivariate visualizations for the same variables, allowing users to choose the visualization most appropriate for their presentation. The methods used in these APIs, and the renderers they generate, are not available elsewhere. The presentation will show how statistics can serve as the basis for automating default visualizations along continuous ramps, producing more refined visualizations while revealing the spread and outliers of the data. Interactive components that instantaneously alter visualizations let users unearth previously unknown spatial patterns among one or more variables. These applications may focus on a single, frequently updated dataset or be configurable for a variety of datasets from multiple sources.
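The statistics-driven defaults described in this abstract can be illustrated with a minimal sketch. This is not the Smart API itself (whose functions and signatures are not given here); it only shows the general idea of deriving continuous color-ramp stops from the spread of the data, so that values beyond about one standard deviation from the mean read as outliers. The function name and stop scheme are illustrative assumptions.

```python
# Illustrative sketch (not the actual Smart API): derive default
# color-ramp stops from dataset statistics instead of static defaults.
from statistics import mean, stdev

def smart_default_stops(values):
    """Suggest ramp stops at mean +/- one standard deviation,
    clamped to the data range; values outside the outer stops
    render at the ramp extremes and stand out as outliers."""
    m, s = mean(values), stdev(values)
    lo, hi = min(values), max(values)
    return {
        "min": max(lo, m - s),  # lower ramp stop
        "mid": m,               # ramp midpoint at the mean
        "max": min(hi, m + s),  # upper ramp stop
    }

stops = smart_default_stops([2, 4, 4, 4, 5, 5, 7, 9])
```

A real renderer would attach colors to these stops and recompute them whenever the dataset updates, which is what lets the defaults stay tailored to frequently updated data.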

  20. Providing QoS guarantee in 3G wireless networks

    NASA Astrophysics Data System (ADS)

    Chuah, MooiChoo; Huang, Min; Kumar, Suresh

    2001-07-01

Third-generation networks and services present opportunities to offer multimedia applications and services that meet end-to-end quality of service (QoS) requirements. In this article, we present the UMTS QoS architecture and its requirements, including the definition of QoS parameters, traffic classes, the end-to-end data delivery model, and the mapping of end-to-end services to the services provided by the network elements of the UMTS. End-to-end QoS for a user flow is achieved by combining QoS control over the UMTS domain and the IP core network. In the third-generation wireless network, the UMTS bearer service manager is responsible for managing radio and transport resources for QoS-enabled applications. The UMTS bearer service consists of the radio access bearer service between the mobile terminal and the SGSN, and the core network bearer service between the SGSN and the GGSN. The radio access bearer service is in turn realized by the radio bearer service (mostly the air interface) and the Iu bearer service. For the 3G air interface, differentiated QoS can be provided via an intelligent burst allocation scheme, adaptive spreading factor control, and weighted fair queueing scheduling algorithms. Next, we discuss the requirements for the transport technologies in the radio access network to provide differentiated QoS to multiple classes of traffic, covering both ATM-based and IP-based transport solutions. Last but not least, we discuss how QoS mechanisms are provided in the core network to ensure end-to-end quality of service. We discuss how mobile terminals that use RSVP as a QoS signaling mechanism can be supported in a 3G network that may implement only the IETF diffserv mechanism, and how UMTS QoS classes can be mapped to IETF diffserv code points. We also discuss 2G/3G handover scenarios and how the 2G/3G QoS parameters can be mapped.
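The class-to-DSCP mapping this abstract refers to can be sketched as a simple lookup. The article itself does not publish its mapping; the pairing below (conversational → EF, streaming → AF41, interactive → AF21, background → best effort) is one conventional, non-normative assignment offered only as an assumption for illustration, and actual values are negotiated per deployment.

```python
# Illustrative (non-normative) mapping of the four UMTS traffic
# classes onto DiffServ per-hop behaviors and their DSCP values.
DSCP = {"EF": 46, "AF41": 34, "AF21": 18, "BE": 0}

UMTS_TO_DIFFSERV = {
    "conversational": "EF",    # delay-sensitive voice/video telephony
    "streaming":      "AF41",  # one-way media, tolerant of jitter buffers
    "interactive":    "AF21",  # request/response traffic such as browsing
    "background":     "BE",    # bulk transfer, e.g. email synchronization
}

def dscp_for(umts_class: str) -> int:
    """Return the DSCP value assigned to a UMTS traffic class
    under this illustrative mapping."""
    return DSCP[UMTS_TO_DIFFSERV[umts_class.lower()]]
```

A GGSN sitting at the UMTS/IP-core boundary would apply such a table when marking the IP headers of user-plane packets, which is how diffserv-only core routers can honor UMTS-level QoS classes.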

  1. Visual communication of engineering and scientific data in the courtroom

    NASA Astrophysics Data System (ADS)

    Jackson, Gerald W.; Henry, Andrew C.

    1993-01-01

Presenting engineering and scientific information in the courtroom is challenging. Quite often the data is voluminous and therefore difficult for engineering experts to digest, let alone a lay judge, lawyer, or jury. This paper discusses computer visualization techniques designed to provide the court with methods of communicating data in visual formats, allowing a more accurate understanding of complicated concepts and results. Examples presented include accident reconstructions, technical concept illustration, and engineering data visualization. Also presented is the design of an electronic courtroom that facilitates the display and communication of information to the courtroom.

  2. The role of prestimulus activity in visual extinction

    PubMed Central

    Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl

    2013-01-01

    Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. PMID:23680398

  3. The role of prestimulus activity in visual extinction.

    PubMed

    Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl

    2013-07-01

    Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Macular pigment and visual performance in glare: benefits for photostress recovery, disability glare, and visual discomfort.

    PubMed

    Stringham, James M; Garcia, Paul V; Smith, Peter A; McLin, Leon N; Foutch, Brian K

    2011-09-22

    One theory of macular pigment's (MP) presence in the fovea is to improve visual performance in glare. This study sought to determine the effect of MP level on three aspects of visual performance in glare: photostress recovery, disability glare, and visual discomfort. Twenty-six subjects participated in the study. Spatial profiles of MP optical density were assessed with heterochromatic flicker photometry. Glare was delivered via high-bright-white LEDs. For the disability glare and photostress recovery portions of the experiment, the visual task consisted of correct identification of a 1° Gabor patch's orientation. Visual discomfort during the glare presentation was assessed with a visual discomfort rating scale. Pupil diameter was monitored with an infrared (IR) camera. MP level correlated significantly with all the outcome measures. Higher MP optical densities (MPODs) resulted in faster photostress recovery times (average P < 0.003), lower disability glare contrast thresholds (average P < 0.004), and lower visual discomfort (P = 0.002). Smaller pupil diameter during glare presentation significantly correlated with higher visual discomfort ratings (P = 0.037). MP correlates with three aspects of visual performance in glare. Unlike previous studies of MP and glare, the present study used free-viewing conditions, in which effects of iris pigmentation and pupil size could be accounted for. The effects described, therefore, can be extended more confidently to real-world, practical visual performance benefits. Greater iris constriction resulted (paradoxically) in greater visual discomfort. This finding may be attributable to the neurobiologic mechanism that mediates the pain elicited by light.

  5. Visual Communication: Its Process and Effects.

    ERIC Educational Resources Information Center

    Metallinos, Nikos

    The process and effects of visual communication are examined in this paper. The first section, "Visual Literacy," discusses the need for a visual literacy involving an understanding of the instruments, materials, and techniques of visual communication media; it then presents and discusses a model illustrating factors involved in the…

  6. Examining the cognitive demands of analogy instructions compared to explicit instructions.

    PubMed

    Tse, Choi Yeung Andy; Wong, Andus; Whitehill, Tara; Ma, Estella; Masters, Rich

    2016-10-01

    In many learning domains, instructions are presented explicitly despite high cognitive demands associated with their processing. This study examined cognitive demands imposed on working memory by different types of instruction to speak with maximum pitch variation: visual analogy, verbal analogy and explicit verbal instruction. Forty participants were asked to memorise a set of 16 visual and verbal stimuli while reading aloud a Cantonese paragraph with maximum pitch variation. Instructions about how to achieve maximum pitch variation were presented via visual analogy, verbal analogy, explicit rules or no instruction. Pitch variation was assessed off-line, using standard deviation of fundamental frequency. Immediately after reading, participants recalled as many stimuli as possible. Analogy instructions resulted in significantly increased pitch variation compared to explicit instructions or no instructions. Explicit instructions resulted in poorest recall of stimuli. Visual analogy instructions resulted in significantly poorer recall of visual stimuli than verbal stimuli. The findings suggest that non-propositional instructions presented via analogy may be less cognitively demanding than instructions that are presented explicitly. Processing analogy instructions that are presented as a visual representation is likely to load primarily visuospatial components of working memory rather than phonological components. The findings are discussed with reference to speech therapy and human cognition.

  7. Visual Task Demands and the Auditory Mismatch Negativity: An Empirical Study and a Meta-Analysis

    PubMed Central

    Wiens, Stefan; Szychowska, Malina; Nilsson, Mats E.

    2016-01-01

    Because the auditory system is particularly useful in monitoring the environment, previous research has examined whether task-irrelevant, auditory distracters are processed even if subjects focus their attention on visual stimuli. This research suggests that attentionally demanding visual tasks decrease the auditory mismatch negativity (MMN) to simultaneously presented auditory distractors. Because a recent behavioral study found that high visual perceptual load decreased detection sensitivity of simultaneous tones, we used a similar task (n = 28) to determine if high visual perceptual load would reduce the auditory MMN. Results suggested that perceptual load did not decrease the MMN. At face value, these nonsignificant findings may suggest that effects of perceptual load on the MMN are smaller than those of other demanding visual tasks. If so, effect sizes should differ systematically between the present and previous studies. We conducted a selective meta-analysis of published studies in which the MMN was derived from the EEG, the visual task demands were continuous and varied between high and low within the same task, and the task-irrelevant tones were presented in a typical oddball paradigm simultaneously with the visual stimuli. Because the meta-analysis suggested that the present (null) findings did not differ systematically from previous findings, the available evidence was combined. Results of this meta-analysis confirmed that demanding visual tasks reduce the MMN to auditory distracters. However, because the meta-analysis was based on small studies and because of the risk for publication biases, future studies should be preregistered with large samples (n > 150) to provide confirmatory evidence for the results of the present meta-analysis. These future studies should also use control conditions that reduce confounding effects of neural adaptation, and use load manipulations that are defined independently from their effects on the MMN. PMID:26741815

  8. A hierarchical, retinotopic proto-organization of the primate visual system at birth

    PubMed Central

    Arcaro, Michael J; Livingstone, Margaret S

    2017-01-01

    The adult primate visual system comprises a series of hierarchically organized areas. Each cortical area contains a topographic map of visual space, with different areas extracting different kinds of information from the retinal input. Here we asked to what extent the newborn visual system resembles the adult organization. We find that hierarchical, topographic organization is present at birth and therefore constitutes a proto-organization for the entire primate visual system. Even within inferior temporal cortex, this proto-organization was already present, prior to the emergence of category selectivity (e.g., faces or scenes). We propose that this topographic organization provides the scaffolding for the subsequent development of visual cortex that commences at the onset of visual experience. DOI: http://dx.doi.org/10.7554/eLife.26196.001 PMID:28671063

  9. Visual impairment and spectacle use in schoolchildren in rural and urban regions in Beijing.

    PubMed

    Guo, Yin; Liu, Li Juan; Xu, Liang; Lv, Yan Yun; Tang, Ping; Feng, Yi; Meng, Lei; Jonas, Jost B

    2014-01-01

    To determine prevalence and associations of visual impairment and frequency of spectacle use among grade 1 and grade 4 students in Beijing. This school-based, cross-sectional study included 382 grade 1 children (age 6.3 ± 0.5 years) and 299 grade 4 children (age 9.4 ± 0.7 years) who underwent a comprehensive eye examination including visual acuity, noncycloplegic refractometry, and ocular biometry. Presenting visual acuity (mean 0.04 ± 0.17 logMAR) was associated with younger age (p = 0.002), hyperopic refractive error (p<0.001), and male sex (p = 0.03). Presenting visual impairment (presenting visual acuity ≤20/40 in the better eye) was found in 44 children (prevalence 6.64 ± 1.0% [95% confidence interval (CI) 4.74, 8.54]). Mean best-corrected visual acuity (right eyes -0.02 ± 0.04 logMAR) was associated with more hyperopic refractive error (p = 0.03) and rural region of habitation (p<0.001). The prevalence of best-corrected visual impairment (best-corrected visual acuity ≤20/40 in the better eye) was 2/652 (0.30 ± 0.21% [95% CI 0.00, 0.72]). Undercorrection of refractive error was present in 53 children (7.99 ± 1.05%) and was associated with older age (p = 0.003; B 0.53; OR 1.71 [95% CI 1.20, 2.42]), myopic refractive error (p = 0.001; B -0.72; OR 0.49 [95% CI 0.35, 0.68]), and longer axial length (p = 0.002; B 0.74; OR 2.10 [95% CI 1.32, 3.32]). Spectacle use was reported for 54 children (8.14 ± 1.06%). Mean refractive error of the worse eyes of these children was -2.09 ± 2.88 D (range -7.38 to +7.25 D). Factors associated with presenting visual impairment were older age, myopic refractive error, and higher maternal education level. Despite a prevalence of myopia of 33% in young schoolchildren in Greater Beijing, prevalence of best-corrected visual impairment (0.30% ± 0.21%), presenting visual impairment (6.64% ± 1.0%), and undercorrection of refractive error (7.99% ± 1.05%) were relatively low.

  10. Listeners' expectation of room acoustical parameters based on visual cues

    NASA Astrophysics Data System (ADS)

    Valente, Daniel L.

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters---early-to-late reverberant energy ratio and reverberation time---of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that many visual cues caused different perceived events of the acoustic environment. 
This included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer. The visual makeup of the performer included: (1) an actual video of the performance, (2) a surrogate image of the performance, for example a loudspeaker's image reproducing the performance, (3) no visual image of the performance (empty room), or (4) a multi-source visual stimulus (actual video of the performance coupled with two images of loudspeakers positioned to the left and right of the performer). For this experiment, perceived auditory events of sound were measured in terms of two subjective spatial metrics: Listener Envelopment (LEV) and Apparent Source Width (ASW). These metrics were hypothesized to be dependent on the visual imagery of the presented performance. Data were also collected as participants matched direct and reverberant sound levels for the presented audio-visual scenes. In the final experiment, participants judged spatial expectations of an ensemble of musicians presented in the five physical spaces from Experiment 1. Supporting data was accumulated in two stages. First, participants were given an audio-visual matching test, in which they were instructed to align the auditory width of a performing ensemble to a varying set of audio and visual cues. In the second stage, a conjoint analysis design paradigm was explored to extrapolate the relative magnitude of explored audio-visual factors in affecting three assessed response criteria: Congruency (the perceived match-up of the auditory and visual cues in the assessed performance), ASW, and LEV. Results show that both auditory and visual factors affect the collected responses, and that the two sensory modalities coincide in distinct interactions. 
This study reveals participant resiliency in the presence of forced auditory-visual mismatch: Participants are able to adjust the acoustic component of the cross-modal environment in a statistically similar way despite randomized starting values for the monitored parameters. Subjective results of the experiments are presented along with objective measurements for verification.

  11. Five-year study of ocular injuries due to fireworks in India.

    PubMed

    Malik, Archana; Bhala, Soniya; Arya, Sudesh K; Sood, Sunandan; Narang, Subina

    2013-08-01

    To study the demographic profile, cause, type and severity of ocular injuries, their complications and final visual outcome following fireworks around the time of Deepawali in India. Case records of patients who presented with firework-related injuries during 2005-2009 at the time of Deepawali were reviewed. Data with respect to demographic profile of patients, cause and time of injury, time of presentation and types of intervention were analyzed. Visual acuity at presentation and final follow-up, anterior and posterior segment findings, and any diagnostic and surgical interventions carried out were noted. One hundred and one patients presented with firework-related ocular injuries, of which 77.5% were male. The mean age was 17.60 ± 11.9 years, with 54% being ≤14 years of age. The mean time of presentation was 8.9 h. Seventeen patients had open globe injury (OGI) and 84 had closed globe injury (CGI). Fountains were the most common cause of CGI and bullet bombs were the most common cause of OGI. Mean logMAR visual acuity at presentation was 0.64 and 1.22 and at last follow-up was 0.09 and 0.58 for CGI and OGI, respectively (p < 0.05). Patients with CGI had a better visual outcome. Three patients with OGI developed permanent blindness. Factors associated with poor visual outcome included poor initial visual acuity, OGI, intraocular foreign body (IOFB), retinal detachment and development of endophthalmitis. Firework injuries were seen mostly in males and children. Poor visual outcome was associated with poor initial visual acuity, OGI, IOFB, retinal detachment and development of endophthalmitis, while most patients with CGI regained good vision.

  12. Visualizing without Vision at the Microscale: Students with Visual Impairments Explore Cells with Touch

    ERIC Educational Resources Information Center

    Jones, M. Gail; Minogue, James; Oppewal, Tom; Cook, Michelle P.; Broadwell, Bethany

    2006-01-01

    Science instruction is typically highly dependent on visual representations of scientific concepts that are communicated through textbooks, teacher presentations, and computer-based multimedia materials. Little is known about how students with visual impairments access and interpret these types of visually-dependent instructional materials. This…

  13. Metabolic Pathways Visualization Skills Development by Undergraduate Students

    ERIC Educational Resources Information Center

    dos Santos, Vanessa J. S. V.; Galembeck, Eduardo

    2015-01-01

    We have developed a metabolic pathways visualization skill test (MPVST) to gain greater insight into our students' abilities to comprehend the visual information presented in metabolic pathways diagrams. The test is able to discriminate students' visualization ability with respect to six specific visualization skills that we identified as key to…

  14. Visualization as an Aid to Problem-Solving: Examples from History.

    ERIC Educational Resources Information Center

    Rieber, Lloyd P.

    This paper presents a historical overview of visualization as a human problem-solving tool. Visualization strategies, such as mental imagery, pervade historical accounts of scientific discovery and invention. A selected number of historical examples are presented and discussed on a wide range of topics such as physics, aviation, and the science of…

  15. Effective Engineering Presentations through Teaching Visual Literacy Skills.

    ERIC Educational Resources Information Center

    Kerns, H. Dan; And Others

    This paper describes a faculty resource team in the Bradley University (Illinois) Department of Industrial Engineering that works with student project teams in an effort to improve their visualization and oral presentation skills. Students use state of the art technology to develop and display their visuals. In addition to technology, students are…

  16. Presentation Technology in the Age of Electronic Eloquence: From Visual Aid to Visual Rhetoric

    ERIC Educational Resources Information Center

    Cyphert, Dale

    2007-01-01

    Attention to presentation technology in the public speaking classroom has grown along with its contemporary use, but instruction generally positions the topic as a subset of visual aids. As contemporary public discourse enters an age of electronic eloquence, instructional focus on verbal communication might limit students' capacity to effectively…

  17. Got Rhythm...For Better and for Worse. Cross-Modal Effects of Auditory Rhythm on Visual Word Recognition

    ERIC Educational Resources Information Center

    Brochard, Renaud; Tassin, Maxime; Zagar, Daniel

    2013-01-01

    The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…

  18. 14 CFR 382.69 - What requirements must carriers meet concerning the accessibility of videos, DVDs, and other...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...

  19. 14 CFR 382.69 - What requirements must carriers meet concerning the accessibility of videos, DVDs, and other...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on-aircraft to... meet concerning the accessibility of videos, DVDs, and other audio-visual presentations shown on... videos, DVDs, and other audio-visual displays played on aircraft for safety purposes, and all such new...

  20. Social Studies for the Visually Impaired Child. MAVIS Sourcebook 4.

    ERIC Educational Resources Information Center

    Singleton, Laurel R.

    Suggestions are made in this sourcebook for adapting teaching strategies and curriculum materials in social studies to accommodate the needs of the visually impaired (VI) student. It is presented in eight chapters. Chapter one explains why elementary grade social studies, with its emphasis on visual media, presents difficulties for VI children.…

  1. Query2Question: Translating Visualization Interaction into Natural Language.

    PubMed

    Nafari, Maryam; Weaver, Chris

    2015-06-01

    Richly interactive visualization tools are increasingly popular for data exploration and analysis in a wide variety of domains. Existing systems and techniques for recording provenance of interaction focus either on comprehensive automated recording of low-level interaction events or on idiosyncratic manual transcription of high-level analysis activities. In this paper, we present the architecture and translation design of a query-to-question (Q2Q) system that automatically records user interactions and presents them semantically using natural language (written English). Q2Q takes advantage of domain knowledge and uses natural language generation (NLG) techniques to translate and transcribe a progression of interactive visualization states into a visual log of styled text that complements and effectively extends the functionality of visualization tools. We present Q2Q as a means to support a cross-examination process in which questions rather than interactions are the focus of analytic reasoning and action. We describe the architecture and implementation of the Q2Q system, discuss key design factors and variations that affect question generation, and present several visualizations that incorporate Q2Q for analysis in a variety of knowledge domains.

  2. The Presentation: A New Genre in Business Communication.

    ERIC Educational Resources Information Center

    Carney, Thomas F.

    1992-01-01

    Discusses the value and importance of presentation graphics. Deals with using storyboards to design presentations, design principles and construction guidelines, subliminals (overtext, intertextuality, and color), choosing a medium for visuals, choosing a computer program to generate visuals, and design similarities between presentation visuals…

  3. The importance of laughing in your face: influences of visual laughter on auditory laughter perception.

    PubMed

    Jordan, Timothy R; Abedipour, Lily

    2010-01-01

    Hearing the sound of laughter is important for social communication, but processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, similar visual enhancement of the audibility of laughter remains unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.

  4. Should visual speech cues (speechreading) be considered when fitting hearing aids?

    NASA Astrophysics Data System (ADS)

    Grant, Ken

    2002-05-01

    When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.

  5. Effects of body lean and visual information on the equilibrium maintenance during stance.

    PubMed

    Duarte, Marcos; Zatsiorsky, Vladimir M

    2002-09-01

    Maintenance of equilibrium was tested in conditions when humans assume different leaning postures during upright standing. Subjects (n=11) stood in 13 different body postures specified by visual center of pressure (COP) targets within their base of support (BOS). Different types of visual information were tested: continuous presentation of visual target, no vision after target presentation, and simultaneous visual feedback of the COP. The following variables were used to describe the equilibrium maintenance: the mean of the COP position, the area of the ellipse covering the COP sway, and the resultant median frequency of the power spectral density of the COP displacement. The variability of the COP displacement, quantified by the COP area variable, increased when subjects occupied leaning postures, irrespective of the kind of visual information provided. This variability also increased when vision was removed in relation to when vision was present. Without vision, drifts in the COP data were observed which were larger for COP targets farther away from the neutral position. When COP feedback was given in addition to the visual target, the postural control system did not control stance better than in the condition with only visual information. These results indicate that the visual information is used by the postural control system at both short and long time scales.

  6. Changes in the distribution of sustained attention alter the perceived structure of visual space.

    PubMed

    Fortenbaugh, Francesca C; Robertson, Lynn C; Esterman, Michael

    2017-02-01

    Visual spatial attention is a critical process that allows for the selection and enhanced processing of relevant objects and locations. While studies have shown attentional modulations of perceived location and the representation of distance information across multiple objects, there remains disagreement regarding what influence spatial attention has on the underlying structure of visual space. The present study utilized a method of magnitude estimation in which participants must judge the location of briefly presented targets within the boundaries of their individual visual fields in the absence of any other objects or boundaries. Spatial uncertainty of target locations was used to assess perceived locations across distributed and focused attention conditions without the use of external stimuli, such as visual cues. Across two experiments we tested locations along the cardinal and 45° oblique axes. We demonstrate that focusing attention within a region of space can expand the perceived size of visual space, even in cases where doing so makes performance less accurate. Moreover, the results of the present studies show that when fixation is actively maintained, focusing attention along a visual axis leads to an asymmetrical stretching of visual space that is predominantly focused across the central half of the visual field, consistent with an expansive gradient along the focus of voluntary attention. These results demonstrate that focusing sustained attention peripherally during active fixation leads to an asymmetrical expansion of visual space within the central visual field.

  7. Data Visualization and Storytelling: Students Showcasing Innovative Work on the NASA Hyperwall

    NASA Astrophysics Data System (ADS)

    Hankin, E. R.; Hasan, M.; Williams, B. M.; Harwell, D. E.

    2017-12-01

    Visual storytelling can be used to quickly and effectively tell a story about data and scientific research, with powerful visuals driving a deeper level of engagement. In 2016, the American Geophysical Union (AGU) launched a pilot contest with a grant from NASA to fund students to travel to the AGU Fall Meeting to present innovative data visualizations with fascinating stories on the NASA Hyperwall. This presentation will discuss the purpose of the contest and provide highlights. Additionally, the presentation will feature Mejs Hasan, one of the 2016 contest grand prize winners, who will discuss her award-winning research utilizing Landsat visual data, MODIS Enhanced Vegetation Index data, and NOAA nightlight data to study the effects of both drought and war on the Middle East.

  8. CTViz: A tool for the visualization of transport in nanocomposites.

    PubMed

    Beach, Benjamin; Brown, Joshua; Tarlton, Taylor; Derosa, Pedro A

    2016-05-01

    A visualization tool (CTViz) for charge transport processes in 3-D hybrid materials (nanocomposites) was developed, inspired by the need for a graphical application to assist in code debugging and data presentation of an existing in-house code. As the simulation code grew, troubleshooting problems grew increasingly difficult without an effective way to visualize 3-D samples and charge transport in those samples. CTViz is able to produce publication and presentation quality visuals of the simulation box, as well as static and animated visuals of the paths of individual carriers through the sample. CTViz was designed to provide a high degree of flexibility in the visualization of the data. A feature that characterizes this tool is the use of shade and transparency levels to highlight important details in the morphology or in the transport paths by hiding or dimming elements of little relevance to the current view. This is fundamental for the visualization of 3-D systems with complex structures. The code presented here provides these required capabilities, but has gone beyond the original design and could be used as is or easily adapted for the visualization of other particulate transport where transport occurs on discrete paths. Copyright © 2016 Elsevier Inc. All rights reserved.
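    The shade-and-transparency idea this abstract highlights can be illustrated in a few lines: dim elements of low relevance to the current view rather than removing them outright. The relevance scores and the linear alpha ramp below are assumptions for illustration, not CTViz's actual scheme.

```python
# Illustrative version of the shading/transparency idea described for CTViz:
# fade scene elements whose relevance to the current view is low, so that
# important morphology or transport paths stand out. The relevance values
# and the linear alpha ramp are assumptions, not CTViz code.

def alpha_for(relevance: float, floor: float = 0.05) -> float:
    """Map a relevance score in [0, 1] to an opacity; nearly irrelevant
    elements fade to a faint floor instead of vanishing entirely."""
    relevance = max(0.0, min(1.0, relevance))
    return floor + (1.0 - floor) * relevance

elements = {"active_path": 1.0, "nearby_tube": 0.4, "background_matrix": 0.0}
for name, rel in elements.items():
    print(f"{name}: alpha = {alpha_for(rel):.2f}")
```

    Keeping a small opacity floor, rather than hiding elements completely, preserves spatial context in a dense 3-D sample, which matches the "hiding or dimming" behavior the abstract describes.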

  9. Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear.

    PubMed

    Willems, Roel M; Clevis, Krien; Hagoort, Peter

    2011-09-01

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an otherwise neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants' brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information.

  10. Impact of visual acuity on developing literacy at age 4-5 years: a cohort-nested cross-sectional study.

    PubMed

    Bruce, Alison; Fairley, Lesley; Chambers, Bette; Wright, John; Sheldon, Trevor A

    2016-02-16

    To estimate the prevalence of poor vision in children aged 4-5 years and determine the impact of visual acuity on literacy. Cross-sectional study linking clinical, epidemiological and education data. Schools located in the city of Bradford, UK. Prevalence was determined for 11,186 children participating in the Bradford school vision screening programme. Data linkage was undertaken for 5836 Born in Bradford (BiB) birth cohort study children participating both in the Bradford vision screening programme and the BiB Starting Schools Programme. 2025 children had complete data and were included in the multivariable analyses. Visual acuity was measured using a logMAR Crowded Test (higher scores=poorer visual acuity). Literacy was measured by the Woodcock Reading Mastery Tests-Revised (WRMT-R) subtest: letter identification (standardised). The mean (SD) presenting visual acuity was 0.14 (0.09) logMAR (range 0.0-1.0). 9% of children had a presenting visual acuity worse than 0.2 logMAR (failed vision screening), 4% worse than 0.3 logMAR (poor visual acuity) and 2% worse than 0.4 logMAR (visually impaired). Unadjusted analysis showed that the literacy score was associated with presenting visual acuity, reducing by 2.4 points for every 1-line (0.10 logMAR) reduction in vision (95% CI -3.0 to -1.9). The association of presenting visual acuity with the literacy score remained significant after adjustment for demographic and socioeconomic factors, reducing by 1.7 points (95% CI -2.2 to -1.1) for every 1-line reduction in vision. Prevalence of decreased visual acuity was high compared with other population-based studies. Decreased visual acuity at school entry is associated with reduced literacy. This may have important implications for the children's future educational, health and social outcomes.
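    The adjusted effect size reported above lends itself to a back-of-the-envelope calculation. The function below simply restates the abstract's coefficient (1.7 literacy points per 1-line, i.e. 0.10 logMAR, reduction in acuity); it is an illustration of the reported linear association, not part of the study's analysis.

```python
# Back-of-the-envelope use of the adjusted regression coefficient reported
# in the abstract: literacy score falls ~1.7 points per 1-line (0.10 logMAR)
# reduction in presenting visual acuity. Illustrative only; assumes the
# association is linear over the range considered.

POINTS_PER_LINE = 1.7    # adjusted estimate (95% CI -2.2 to -1.1)
LOGMAR_PER_LINE = 0.10   # one chart line in logMAR units

def predicted_literacy_drop(logmar_increase: float) -> float:
    """Predicted drop in standardised literacy score for a given
    worsening of presenting visual acuity (in logMAR units)."""
    lines = logmar_increase / LOGMAR_PER_LINE
    return POINTS_PER_LINE * lines

# A child whose acuity is 0.3 logMAR worse (3 lines) would be predicted
# to score roughly 5 points lower, other factors held equal.
print(predicted_literacy_drop(0.3))
```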

  11. Clinical characteristics in 53 patients with cat scratch optic neuropathy.

    PubMed

    Chi, Sulene L; Stinnett, Sandra; Eggenberger, Eric; Foroozan, Rod; Golnik, Karl; Lee, Michael S; Bhatti, M Tariq

    2012-01-01

    To describe the clinical manifestations and to identify risk factors associated with visual outcome in a large cohort of patients with cat scratch optic neuropathy (CSON). Multicenter, retrospective chart review. Fifty-three patients (62 eyes) with serologically positive CSON from 5 academic neuro-ophthalmology services evaluated over an 11-year period. Institutional review board/ethics committee approval was obtained. Data from medical record charts were collected to detail the clinical manifestations and to analyze visual outcome metrics. Generalized estimating equations and logistic regression analysis were used in the statistical analysis. Six patients (9 eyes) were excluded from visual outcome statistical analysis because of a lack of follow-up. Demographic information, symptoms at presentation, clinical characteristics, length of follow-up, treatment used, and visual acuity (at presentation and final follow-up). Mean patient age was 27.8 years (range, 8-65 years). Mean follow-up time was 170.8 days (range, 1-1482 days). Simultaneous bilateral involvement occurred in 9 (17%) of 53 patients. Visual acuity on presentation ranged from 20/20 to counting fingers (mean, 20/160). Sixty-eight percent of eyes retained a visual acuity of 20/40 or better at final follow-up (defined as favorable visual outcome). Sixty-seven percent of patients endorsed a history of cat or kitten scratch. Neuroretinitis (macular star) developed in 28 eyes (45%). Only 5 patients had significant visual complications (branch retinal artery occlusion, macular hole, and corneal decompensation). Neither patient age nor any other factor except good initial visual acuity and absence of systemic symptoms was associated with a favorable visual outcome. There was no association between visual acuity at final follow-up and systemic antibiotic or steroid use. Patients with CSON have a good overall visual prognosis. Good visual acuity at presentation was associated with a favorable visual outcome. 
The absence of a macular star does not exclude the possibility of CSON.

  12. Sensitivity to timing and order in human visual cortex

    PubMed Central

    Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.

    2014-01-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116

  13. Profiling Oman education data using data visualization technique

    NASA Astrophysics Data System (ADS)

    Alalawi, Sultan Juma Sultan; Shaharanee, Izwan Nizal Mohd; Jamil, Jastini Mohd

    2016-10-01

    This research presents an innovative data visualization technique for understanding and visualizing Oman's education data generated from the Ministry of Education Oman "Educational Portal". The Ministry of Education in the Sultanate of Oman has huge databases containing massive amounts of information. The volume of data in the database increases yearly as many students, teachers and employees enter the database. The task of discovering and analyzing these vast volumes of data becomes increasingly difficult. Information visualization and data mining offer better ways of dealing with large volumes of information. In this paper, an innovative information visualization technique is developed to visualize the complex multidimensional educational data. Microsoft Excel Dashboard, Visual Basic for Applications (VBA) and Pivot Tables are utilized to visualize the data. Findings from the summarization of the data are presented, and it is argued that information visualization can help related stakeholders become aware of hidden and interesting information in the large amounts of data in their educational portal.
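    The PivotTable-style summarization described above can be prototyped outside Excel as well. The sketch below pivots a small, invented student table by region and year using only the standard library; the field names and figures are made up for illustration and are not the ministry's data.

```python
# Minimal pivot-table-style summary of education records, analogous to the
# Excel PivotTable approach described above. All data, regions, and field
# names here are invented for illustration.
from collections import defaultdict

records = [
    {"region": "Muscat", "year": 2015, "students": 1200},
    {"region": "Muscat", "year": 2016, "students": 1350},
    {"region": "Dhofar", "year": 2015, "students": 800},
    {"region": "Dhofar", "year": 2016, "students": 950},
]

def pivot(rows, row_key, col_key, value_key):
    """Aggregate value_key into a {row: {col: total}} table."""
    table = defaultdict(lambda: defaultdict(int))
    for r in rows:
        table[r[row_key]][r[col_key]] += r[value_key]
    return {k: dict(v) for k, v in table.items()}

summary = pivot(records, "region", "year", "students")
print(summary)
```

    Swapping `row_key` and `col_key` transposes the summary, which is essentially what dragging fields between the row and column areas of a PivotTable does.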

  14. Visual impairment and traits of autism in children.

    PubMed

    Wrzesińska, Magdalena; Kapias, Joanna; Nowakowska-Domagała, Katarzyna; Kocur, Józef

    2017-04-30

    Visual impairment present from birth or from early childhood may lead to psychosocial and emotional disorders. Between 11% and 40% of children with visual impairment show traits of autism. The aim of this paper was to present selected examples of how visual impairment in children is related to the occurrence of autism and to describe the available tools for diagnosing autism in children with visual impairment. So far, the relation between visual impairment in children and autism has not been sufficiently confirmed. Psychiatric and psychological diagnosis of children with visual impairment involves difficulties in differentiating between "blindism" and traits typical of autism, owing to a lack of standardized diagnostic tools for diagnosing children with visual impairment. Another difficulty in diagnosing autism in children with visual impairment is the coexistence of other disabilities in most of these children. Additionally, apart from the difficulties in diagnosing autistic disorders in children with eye dysfunctions, there is also the question of what tools should be used in the therapy and rehabilitation of these patients.

  15. The use of visual cues in gravity judgements on parabolic motion.

    PubMed

    Jörges, Björn; Hagenfeld, Lena; López-Moliner, Joan

    2018-06-21

    Evidence suggests that humans rely on an earth-gravity prior for sensory-motor tasks like catching or reaching. Even under earth-discrepant conditions, this prior biases perception and action towards assuming a gravitational downwards acceleration of 9.81 m/s². This can be particularly detrimental in interactions with virtual environments that employ earth-discrepant gravity conditions in their visual presentation. The present study thus investigates how well humans discriminate visually presented gravities and which cues they use to extract gravity from the visual scene. To this end, we employed a two-interval forced-choice design. In Experiment 1, participants had to judge which of two presented parabolas had the higher underlying gravity. We used two initial vertical velocities, two horizontal velocities and a constant target size. Experiment 2 added a manipulation of the reliability of the target size. Experiment 1 shows that participants have generally high discrimination thresholds for visually presented gravities, with Weber fractions of 13% to beyond 30%. We identified the rate of change of the elevation angle (ẏ) and the visual angle (θ) as major cues. Experiment 2 suggests furthermore that size variability has a small influence on discrimination thresholds, while at the same time larger size variability increases reliance on ẏ and decreases reliance on θ. All in all, even though we use all available information, humans display low precision when extracting the governing gravity from a visual scene, which might further impact our capability to adapt to earth-discrepant gravity conditions with visual information alone.
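    For readers unfamiliar with the measure, a Weber fraction relates a discrimination threshold to the standard stimulus. The sketch below estimates one from simulated two-interval forced-choice data; the response proportions, the 75%-correct criterion, and the linear interpolation are common psychophysics conventions assumed here for illustration, not the paper's data or method.

```python
# Estimating a Weber fraction from two-interval forced-choice data:
# find the comparison gravity that is discriminated from the 9.81 m/s^2
# standard at 75% correct, then express the difference as a fraction of
# the standard. The proportions below are invented for illustration.

STANDARD = 9.81  # m/s^2

# proportion of correct "comparison higher" responses per comparison level
levels    = [9.81, 10.3, 10.8, 11.3, 11.8]
p_correct = [0.50, 0.62, 0.74, 0.83, 0.90]

def threshold_at(p, xs, ys):
    """Linearly interpolate the stimulus level reaching proportion p."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if y0 <= p <= y1:
            return x0 + (p - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("p outside measured range")

jnd = threshold_at(0.75, levels, p_correct) - STANDARD
weber_fraction = jnd / STANDARD
print(f"Weber fraction ~ {weber_fraction:.2%}")
```

    A fitted psychometric function (e.g. a cumulative Gaussian) would normally replace the piecewise interpolation, but the fraction itself is computed the same way.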

  16. Verifying visual properties in sentence verification facilitates picture recognition memory.

    PubMed

    Pecher, Diane; Zanolie, Kiki; Zeelenberg, René

    2007-01-01

    According to the perceptual symbols theory (Barsalou, 1999), sensorimotor simulations underlie the representation of concepts. We investigated whether recognition memory for pictures of concepts was facilitated by earlier representation of visual properties of those concepts. During study, concept names (e.g., apple) were presented in a property verification task with a visual property (e.g., shiny) or with a nonvisual property (e.g., tart). Delayed picture recognition memory was better if the concept name had been presented with a visual property than if it had been presented with a nonvisual property. These results indicate that modality-specific simulations are used for concept representation.

  17. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    PubMed

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load.

  18. Audio-visual speech intelligibility benefits with bilateral cochlear implants when talker location varies.

    PubMed

    van Hoesel, Richard J M

    2015-04-01

    One of the key benefits of using cochlear implants (CIs) in both ears rather than just one is improved localization. It is likely that in complex listening scenes, improved localization allows bilateral CI users to orient toward talkers to improve signal-to-noise ratios and gain access to visual cues, but to date, that conjecture has not been tested. To obtain an objective measure of that benefit, seven bilateral CI users were assessed for both auditory-only and audio-visual speech intelligibility in noise using a novel dynamic spatial audio-visual test paradigm. For each trial conducted in spatially distributed noise, first, an auditory-only cueing phrase that was spoken by one of four talkers was selected and presented from one of four locations. Shortly afterward, a target sentence was presented that was either audio-visual or, in another test configuration, audio-only and was spoken by the same talker and from the same location as the cueing phrase. During the target presentation, visual distractors were added at other spatial locations. Results showed that in terms of speech reception thresholds (SRTs), the average improvement for bilateral listening over the better performing ear alone was 9 dB for the audio-visual mode, and 3 dB for audition-alone. Comparison of bilateral performance for audio-visual and audition-alone showed that inclusion of visual cues led to an average SRT improvement of 5 dB. For unilateral device use, no such benefit arose, presumably due to the greatly reduced ability to localize the target talker to acquire visual information. The bilateral CI speech intelligibility advantage over the better ear in the present study is much larger than that previously reported for static talker locations and indicates greater everyday speech benefits and improved cost-benefit than estimated to date.

  19. The role of top-down spatial attention in contingent attentional capture.

    PubMed

    Huang, Wanyi; Su, Yuling; Zhen, Yanfen; Qu, Zhe

    2016-05-01

    It is well known that attentional capture by an irrelevant salient item is contingent on top-down feature selection, but whether attentional capture may be modulated by top-down spatial attention remains unclear. Here, we combined behavioral and ERP measurements to investigate the contribution of top-down spatial attention to attentional capture under modified spatial cueing paradigms. Each target stimulus was preceded by a peripheral circular cue array containing a spatially uninformative color singleton cue. We varied target sets but kept the cue array unchanged among different experimental conditions. When participants' task was to search for a colored letter in the target array that shared the same peripheral locations with the cue array, attentional capture by the peripheral color cue was reflected by both a behavioral spatial cueing effect and a cue-elicited N2pc component. When target arrays were presented more centrally, both the behavioral and N2pc effects were attenuated but still significant. The attenuated cue-elicited N2pc was found even when participants focused their attention on the fixed central location to identify a colored letter among an RSVP letter stream. By contrast, when participants were asked to identify an outlined or larger target, neither the behavioral spatial cueing effect nor the cue-elicited N2pc was observed, regardless of whether the target and cue arrays shared same locations or not. These results add to the evidence that attentional capture by salient stimuli is contingent upon feature-based task sets, and further indicate that top-down spatial attention is important but may not be necessary for contingent attentional capture. © 2016 Society for Psychophysiological Research.

  20. Auditory, Visual, and Auditory-Visual Perception of Vowels by Hearing-Impaired Children.

    ERIC Educational Resources Information Center

    Hack, Zarita Caplan; Erber, Norman P.

    1982-01-01

    Vowels were presented through auditory, visual, and auditory-visual modalities to 18 hearing impaired children (12 to 15 years old) having good, intermediate, and poor auditory word recognition skills. All the groups had difficulty with acoustic information and visual information alone. The first two groups had only moderate difficulty identifying…

  1. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    ERIC Educational Resources Information Center

    Wilson, Amanda H.; Alsius, Agnès; Parè, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  2. Helping Children with Visual and Motor Impairments Make the Most of Their Visual Abilities.

    ERIC Educational Resources Information Center

    Amerson, Marie J.

    1999-01-01

    Lists strategies for promoting functional vision use in children with visual and motor impairments, including providing postural stability, presenting visual attention tasks when energy level is the highest, using a slanted work surface, placing target items in varied locations within reach, and determining the most effective visual adaptations.…

  3. Qualitative Differences in the Representation of Abstract versus Concrete Words: Evidence from the Visual-World Paradigm

    ERIC Educational Resources Information Center

    Dunabeitia, Jon Andoni; Aviles, Alberto; Afonso, Olivia; Scheepers, Christoph; Carreiras, Manuel

    2009-01-01

    In the present visual-world experiment, participants were presented with visual displays that included a target item that was a semantic associate of an abstract or a concrete word. This manipulation allowed us to test a basic prediction derived from the qualitatively different representational framework that supports the view of different…

  4. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    ERIC Educational Resources Information Center

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  5. Ingredients to Successful Students Presentations: It's More Than Just a Sum of Raw Materials.

    ERIC Educational Resources Information Center

    Kerns, H. Dan; Johnson, Nial

    Recognizing the decline in student visual communication skills, faculty from different disciplines collaborated in the design of a visual literacy course. In the course, students develop visual literacy skills in the following ways: (1) through faculty presentation and demonstration of the various tools available; (2) with…

  6. Effect of microgravity on visual contrast threshold during STS Shuttle missions: Visual Function Tester-Model 2 (VFT-2)

    NASA Technical Reports Server (NTRS)

    Oneal, Melvin R.; Task, H. Lee; Genco, Louis V.

    1992-01-01

    Viewgraphs on the effect of microgravity on visual contrast threshold during STS Shuttle missions are presented. The purpose, methods, and results are discussed. The Visual Function Tester-Model 2 (VFT-2) is used.

  7. The prevalence and causes of visual impairment in seven-year-old children.

    PubMed

    Ghaderi, Soraya; Hashemi, Hassan; Jafarzadehpur, Ebrahim; Yekta, Abbasali; Ostadimoghaddam, Hadi; Mirzajani, Ali; Khabazkhoob, Mehdi

    2018-05-01

    To report the prevalence and causes of visual impairment in seven-year-old children in Iran and its relationship with socio-economic conditions. In a cross-sectional population-based study, first-grade students in the primary schools of eight cities in the country were randomly selected from different geographic locations using multistage cluster sampling. The examinations included visual acuity measurement, ocular motility evaluation, and cycloplegic and non-cycloplegic refraction. Using the definitions of the World Health Organization (presenting visual acuity less than or equal to 6/18 in the better eye) to estimate the prevalence of vision impairment, the present study reported presenting visual impairment in seven-year-old children. Of 4,614 selected students, 4,106 students participated in the study (response rate 89 per cent), of whom 2,127 (51.8 per cent) were male. The prevalence of visual impairment according to a visual acuity of 6/18 was 0.341 per cent (95 per cent confidence interval 0.187-0.571); 1.34 per cent (95 per cent confidence interval 1.011-1.74) of children had visual impairment according to a visual acuity of 6/18 in at least one eye. Sixty-six (1.6 per cent) and 23 (0.24 per cent) children had visual impairment according to a visual acuity of 6/12 in the worse and better eye, respectively. The most common causes of visual impairment were refractive errors (81.8 per cent) and amblyopia (14.5 per cent). Among different types of refractive errors, astigmatism was the main refractive error leading to visual impairment. According to the concentration index, the distribution of visual impairment in children from low-income families was higher. This study revealed a high prevalence of visual impairment in a representative sample of seven-year-old Iranian children. Astigmatism and amblyopia were the most common causes of visual impairment. The distribution of visual impairment was higher in children from low-income families. 
Cost-effective strategies are needed to address these easily treatable causes of visual impairment.

  8. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    PubMed

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli that combined the visual stimulus with an auditory stimulus from one of the four locations were presented simultaneously. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirm that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but that no integration occurred when auditory stimuli were presented to the right or left, suggesting that the human brain may be more sensitive to information received from behind than from either side.

  9. Effects of Auditory Stimuli in the Horizontal Plane on Audiovisual Integration: An Event-Related Potential Study

    PubMed Central

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli that combined the visual stimulus with an auditory stimulus from one of the four locations were presented simultaneously. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirm that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but that no integration occurred when auditory stimuli were presented to the right or left, suggesting that the human brain may be more sensitive to information received from behind than from either side. PMID:23799097

  10. How spatial abilities and dynamic visualizations interplay when learning functional anatomy with 3D anatomical models.

    PubMed

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material presentation formats, spatial abilities, and anatomical tasks. First, to understand the cognitive challenges a novice learner would be faced with when first exposed to 3D anatomical content, a six-step cognitive task analysis was developed. Following this, an experimental study was conducted to explore how presentation formats (dynamic vs. static visualizations) support learning of functional anatomy, and affect subsequent anatomical tasks derived from the cognitive task analysis. A second aim was to investigate the interplay between spatial abilities (spatial visualization and spatial relation) and presentation formats when the functional anatomy of a 3D scapula and the associated shoulder flexion movement are learned. Findings showed no main effect of the presentation formats on performances, but revealed the predictive influence of spatial visualization and spatial relation abilities on performance. However, an interesting interaction between presentation formats and spatial relation ability for a specific anatomical task was found. This result highlighted the influence of presentation formats when spatial abilities are involved as well as the differentiated influence of spatial abilities on anatomical tasks. © 2015 American Association of Anatomists.

  11. Achieving fast and stable failure detection in WDM Networks

    NASA Astrophysics Data System (ADS)

    Gao, Donghui; Zhou, Zhiyu; Zhang, Hanyi

    2005-02-01

    In dynamic networks, failure detection accounts for a major part of the convergence time, an important index of network performance. To detect a node or link failure, traditional protocols, such as the Hello protocol in OSPF or RSVP, exchange keep-alive messages between neighboring nodes to track link and node state. With default settings, however, the minimum detection time is on the order of tens of seconds, which cannot meet the demands of fast network convergence and failure recovery, and tuning the relevant parameters to shorten the detection time introduces notable instability. In this paper, we analyze this problem and design a new failure detection algorithm that reduces the signaling overhead of detection. Our experiments show that treating other signaling messages as implicit keep-alive acknowledgments is effective in enhancing stability. We implemented our proposal and the previous approaches on an ASON test-bed. The experimental results show that our algorithm outperforms previous schemes, with roughly an order-of-magnitude reduction in both false failure alarms and the queuing delay imposed on other messages, especially under light traffic load.
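    The core idea of the record above, treating any received signaling message as an implicit keep-alive so that explicit Hellos are needed only on otherwise-idle links, can be sketched roughly as follows. All names, timer values, and the message interface here are illustrative assumptions, not the authors' implementation:

    ```python
    import time

    class FailureDetector:
        """Hypothetical sketch: every signaling message from a neighbor
        (Hello, Path, Resv, ...) refreshes that neighbor's liveness timer,
        so explicit Hellos are only required on idle links."""

        def __init__(self, hold_time=1.0):
            self.hold_time = hold_time   # seconds without traffic => declare failure
            self.last_heard = {}         # neighbor -> timestamp of last message

        def on_message(self, neighbor, now=None):
            # Called for *every* received message: each one acts as an
            # implicit keep-alive acknowledgment.
            self.last_heard[neighbor] = time.monotonic() if now is None else now

        def needs_hello(self, neighbor, hello_interval, now):
            # Send an explicit Hello only if the link has been idle,
            # reducing signaling overhead on busy links.
            return now - self.last_heard.get(neighbor, 0.0) >= hello_interval

        def failed_neighbors(self, now):
            # A neighbor is considered failed once no message of any kind
            # has arrived within the hold time.
            return [n for n, t in self.last_heard.items()
                    if now - t > self.hold_time]

    # Usage: neighbor "B" sends a Path message at t=0.2; no Hello is needed
    # at t=0.5, but B is declared failed once the hold time expires silently.
    fd = FailureDetector(hold_time=1.0)
    fd.on_message("B", now=0.2)
    assert not fd.needs_hello("B", hello_interval=0.5, now=0.5)
    assert fd.failed_neighbors(now=1.5) == ["B"]
    ```

    The design choice mirrors the paper's claim: piggybacking liveness on existing traffic keeps detection fast without flooding the control channel with Hellos.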

  12. Preference Versus Choice in Online Dating.

    PubMed

    Whyte, Stephen; Torgler, Benno

    2017-03-01

    This study explores the factors that influence matches between online dating participants' stated preferences for particular characteristics in a potential partner and the characteristics of the people they actually contacted. The nature of online dating facilitates exploration of the differences between stated preference and actual choice, as online daters willingly provide a range of demographics on their ideal partner. Using data from the Australian dating website RSVP, we analyze 219,013 contact decisions, conducting a multivariate analysis of the number of matched variables between a participant's stated preference and the characteristics of the individuals contacted. We find that factors such as a person's age, their education level, and a more social personality all increase the number of factors they choose in a potential partner that match their original stated preference. Males (relative to females) appear to match fewer characteristics when contacting potential love interests. Conversely, age interaction effects show that males in their late 60s are increasingly more selective than females regarding whom they contact. Understanding how Internet technology is shaping human mating patterns, and the psychology of those who use it, informs the wider social science of human behavior in large-scale decision settings.

  13. Quality of service policy control in virtual private networks

    NASA Astrophysics Data System (ADS)

    Yu, Yiqing; Wang, Hongbin; Zhou, Zhi; Zhou, Dongru

    2004-04-01

    This paper studies the QoS of VPNs in an environment where the public network prices connection-oriented services based on source, destination and grade of service, and advertises these prices to its VPN customers (users). Because different QoS technologies produce different levels of QoS, they come with correspondingly different traffic classification and priority rules, and the Internet service provider (ISP) may otherwise need to build complex mechanisms separately for each node. To reduce the burden of network configuration, we design policy control technologies, considering mainly the directory server, policy server, policy manager and policy enforcers. The policy decision point (PDP) makes control decisions according to policy rules, and the policy enforcement point (PEP) applies those decisions to the network elements it controls. For IntServ and DiffServ we adopt different policy control methods: (1) in IntServ, traffic uses the Resource Reservation Protocol (RSVP) to guarantee network resources; (2) in DiffServ, the policy server controls the DiffServ code points and per-hop behaviors (PHBs), and its PDP distributes this information to each network node. The policy server provides the following functions: information searching, decision making, decision delivery and auto-configuration. To demonstrate the effectiveness of QoS policy control, we present corresponding simulations.
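    The DiffServ branch of the policy flow described in the record above, a PDP distributing class-to-DSCP rules that PEPs then apply, might be sketched like this. The DSCP values are the standard EF and AF11 code points; the class names and helper functions are illustrative assumptions, not the paper's implementation:

    ```python
    # Policy rules held by the PDP: traffic class -> (DSCP code point, PHB).
    # EF (Expedited Forwarding) and AF11 (Assured Forwarding) use their
    # standard DSCP values; the class names are hypothetical.
    POLICY_RULES = {
        "voice":       {"dscp": 0b101110, "phb": "EF"},      # DSCP 46
        "business":    {"dscp": 0b001010, "phb": "AF11"},    # DSCP 10
        "best_effort": {"dscp": 0b000000, "phb": "Default"},
    }

    def pdp_distribute(rules, nodes):
        """PDP: deliver the decision (class -> DSCP/PHB table) to each node."""
        return {node: dict(rules) for node in nodes}

    def pep_mark(packet, node_rules):
        """PEP: classify the packet and set its DSCP field accordingly,
        falling back to best effort for unknown classes."""
        rule = node_rules.get(packet["class"], node_rules["best_effort"])
        packet["dscp"] = rule["dscp"]
        return packet

    # Usage: the PDP pushes the table to two edge nodes; a PEP at edge1
    # then marks a voice packet for the EF per-hop behavior.
    configs = pdp_distribute(POLICY_RULES, ["edge1", "edge2"])
    pkt = pep_mark({"class": "voice", "payload": b"..."}, configs["edge1"])
    assert pkt["dscp"] == 0b101110
    ```

    Centralizing the table at the PDP is what removes the per-node configuration burden the abstract mentions: nodes receive decisions rather than being configured individually.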

  14. Radical “Visual Capture” Observed in a Patient with Severe Visual Agnosia

    PubMed Central

    Takaiwa, Akiko; Yoshimura, Hirokazu; Abe, Hirofumi; Terai, Satoshi

    2003-01-01

    We report the case of a 79-year-old female with visual agnosia due to brain infarction in the left posterior cerebral artery. She could recognize objects used in daily life rather well by touch (the number of objects correctly identified was 16 out of 20 presented objects), but she could not recognize them as well by vision (6 out of 20). In this case, it was expected that she would recognize them well when permitted to use touch and vision simultaneously. Our patient, however, performed poorly, producing 5 correct answers out of 20 in the Vision-and-Touch condition. It would be natural to think that visual capture functions when vision and touch provide contradictory information on concrete positions and shapes. However, in the present case, it functioned in spite of the visual deficit in recognizing objects. This should be called radical visual capture. By presenting detailed descriptions of her symptoms and neuropsychological and neuroradiological data, we clarify the characteristics of this type of capture. PMID:12719638

  15. Empirical Analysis of the Subjective Impressions and Objective Measures of Domain Scientists’ Visual Analytic Judgments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasgupta, Aritra; Burrows, Susannah M.; Han, Kyungsik

    2017-05-08

    Scientists often use specific data analysis and presentation methods familiar within their domain. But does high familiarity drive better analytical judgment? This question is especially relevant when familiar methods themselves can have shortcomings: many visualizations used conventionally for scientific data analysis and presentation do not follow established best practices. This necessitates new methods that might be unfamiliar yet prove to be more effective. But there is little empirical understanding of the relationships between scientists’ subjective impressions about familiar and unfamiliar visualizations and objective measures of their visual analytic judgments. To address this gap and to study these factors, we focus on visualizations used for comparison of climate model performance. We report on a comprehensive survey-based user study with 47 climate scientists and present an analysis of: i) relationships among scientists’ familiarity, their perceived levels of comfort, confidence, accuracy, and objective measures of accuracy, and ii) relationships among domain experience, visualization familiarity, and post-study preference.

  16. Improvement of visual acuity by refraction in a low-vision population.

    PubMed

    Sunness, Janet S; El Annan, Jaafar

    2010-07-01

    Refraction often may be overlooked in low-vision patients, because the main cause of vision decrease is not refractive, but rather is the result of underlying ocular disease. This retrospective study was carried out to determine how frequently and to what extent visual acuity is improved by refraction in a low-vision population. Cross-sectional study. Seven hundred thirty-nine low-vision patients seen for the first time. A database with all new low-vision patients seen from November 2005 through June 2008 recorded presenting visual acuity using an Early Treatment Diabetic Retinopathy Study chart; it also recorded the best-corrected visual acuity (BCVA) if it was 2 lines or more better than the presenting visual acuity. Retinoscopy was carried out on all patients, followed by manifest refraction. Improvement in visual acuity. Median presenting acuity was 20/80(-2) (interquartile range, 20/50-20/200). There was an improvement of 2 lines or more of visual acuity in 81 patients (11% of all patients), with 22 patients (3% of all patients) improving by 4 lines or more. There was no significant difference in age or in presenting visual acuity between the group that did not improve by refraction and the group that did improve. When stratified by diagnosis, the only 2 diagnoses with a significantly higher rate of improvement than the age-related macular degeneration group were myopic degeneration and progressive myopia (odds ratio, 4.8; 95% confidence interval [CI], 3.0-6.7) and status post-retinal detachment (odds ratio, 7.1; 95% CI, 5.2-9.0). For 5 patients (6% of those with improvement), the eye that was 1 line or more worse than the fellow eye at presentation became the eye that was 1 line or more better than the fellow eye after refraction. A significant improvement in visual acuity was attained by refraction in 11% of the new low-vision patients. Improvement was seen across diagnoses and the range of presenting visual acuity. 
The worse-seeing eye at presentation may become the better-seeing eye after refraction, so that the eye behind a balance lens should be refracted as well. Proprietary or commercial disclosure may be found after the references. Copyright 2010 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
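    As a reminder of how odds ratios like those reported in the record above are computed, here is the textbook calculation for a 2x2 table with a Wald 95% confidence interval. The counts below are hypothetical, not the study's data, and the study's exact method may differ:

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio for a 2x2 table with a Wald 95% confidence interval.

            improved   not improved
        diagnosis X      a          b
        reference        c          d
        """
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
        lo = math.exp(math.log(or_) - z * se_log)
        hi = math.exp(math.log(or_) + z * se_log)
        return or_, lo, hi

    # Usage with purely illustrative counts: 12/50 improved in one diagnosis
    # group versus 20/320 in the reference group.
    or_, lo, hi = odds_ratio_ci(12, 38, 20, 300)
    assert lo < or_ < hi   # the point estimate lies inside its interval
    ```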

  17. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect.

    PubMed

    Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi

    2015-11-01

    Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels, and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the one presented auditorily. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated, and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except for right hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, followed by increased activity around 100 msec that decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and increased only in response to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec), subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted and, if successful, McGurk perception occurs and cortical activity in the left hemisphere further increases between 170 and 260 msec.

  18. Effect of Microgravity on Several Visual Functions During STS Shuttle Missions: Visual Function Tester-model 1 (VFT-1)

    NASA Technical Reports Server (NTRS)

    Oneal, Melvin R.; Task, H. Lee; Genco, Louis V.

    1992-01-01

    Viewgraphs on the effect of microgravity on several visual functions during STS shuttle missions are presented. The purpose, methods, results, and discussion are discussed. The visual function tester model 1 is used.

  19. The Effects of Varying Contextual Demands on Age-related Positive Gaze Preferences

    PubMed Central

    Noh, Soo Rim; Isaacowitz, Derek M.

    2015-01-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether one’s full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy–neutral and fearful–neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise, but was present where there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults’ positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. PMID:26030774

  20. The effects of varying contextual demands on age-related positive gaze preferences.

    PubMed

    Noh, Soo Rim; Isaacowitz, Derek M

    2015-06-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy-neutral and fearful-neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise but was present when there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults' positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. (c) 2015 APA, all rights reserved.

  1. DVA as a Diagnostic Test for Vestibulo-Ocular Reflex Function

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Appelbaum, Meghan

    2010-01-01

    The vestibulo-ocular reflex (VOR) stabilizes vision on earth-fixed targets by eliciting eye movements in response to changes in head position. How well the eyes perform this task can be functionally measured by the dynamic visual acuity (DVA) test. We designed a passive, horizontal DVA test to study acuity and reaction time when looking at different target locations. Visual acuity was compared among 12 subjects using a standard Landolt C wall chart, a computerized static (no rotation) acuity test, and a dynamic acuity test during oscillation at 0.8 Hz (+/-60 deg/s). In addition, five trials with yaw oscillation randomly presented a visual target in one of nine different locations, with the size and presentation duration of the visual target varying across trials. The results showed a significant difference between the static and dynamic threshold acuities, as well as a significant difference, in both accuracy and reaction time, between visual targets presented in the horizontal plane and those in the vertical plane. Visual acuity improved in proportion to the size of the visual target and with presentation durations between 150 and 300 msec. We conclude that dynamic visual acuity varies with target location, with acuity optimized for targets in the plane of rotation. This DVA test could be used as a functional diagnostic test for visual-vestibular and neuro-cognitive impairments by assessing both the accuracy and the reaction time with which visual targets are acquired.

  2. Age-equivalent top-down modulation during cross-modal selective attention.

    PubMed

    Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam

    2014-12-01

    Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.

  3. Frequency-band signatures of visual responses to naturalistic input in ferret primary visual cortex during free viewing.

    PubMed

    Sellers, Kristin K; Bennett, Davis V; Fröhlich, Flavio

    2015-02-19

    Neuronal firing responses in visual cortex reflect the statistics of visual input and emerge from the interaction with endogenous network dynamics. Artificial visual stimuli presented to animals in which the network dynamics were constrained by anesthetic agents or trained behavioral tasks have provided fundamental understanding of how individual neurons in primary visual cortex respond to input. In contrast, very little is known about the mesoscale network dynamics and their relationship to microscopic spiking activity in the awake animal during free viewing of naturalistic visual input. To address this gap in knowledge, we recorded local field potential (LFP) and multiunit activity (MUA) simultaneously in all layers of primary visual cortex (V1) of awake, freely viewing ferrets presented with naturalistic visual input (nature movie clips). We found that naturalistic visual stimuli modulated the entire oscillation spectrum: low frequency oscillations were mostly suppressed, whereas higher frequency oscillations were enhanced. On average across all cortical layers, stimulus-induced change in delta and alpha power negatively correlated with the MUA responses, whereas sensory-evoked increases in gamma power positively correlated with MUA responses. The time-course of the band-limited power in these frequency bands provided evidence for a model in which naturalistic visual input switched V1 between two distinct, endogenously present activity states defined by the power of low (delta, alpha) and high (gamma) frequency oscillatory activity. Therefore, the two mesoscale activity states delineated in this study may define the degree of engagement of the circuit with the processing of sensory input. Copyright © 2014 Elsevier B.V. All rights reserved.
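    Band-limited power of the kind analyzed in the record above can be estimated from an LFP trace with a simple FFT-based sketch. The band edges below are common conventions and the signal is synthetic; neither is taken from the study, which may have used different definitions and spectral estimators:

    ```python
    import numpy as np

    # Common (assumed) band definitions in Hz.
    BANDS = {"delta": (1, 4), "alpha": (8, 12), "gamma": (30, 80)}

    def band_power(signal, fs, band):
        """Mean power of `signal` (sampled at `fs` Hz) within `band` (Hz),
        estimated from the magnitude-squared FFT (a crude periodogram)."""
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
        lo, hi = band
        mask = (freqs >= lo) & (freqs < hi)
        return psd[mask].mean()

    # Usage: a synthetic "LFP" dominated by a 10 Hz (alpha) component shows
    # more alpha-band than gamma-band power.
    fs = 1000
    t = np.arange(0, 2, 1 / fs)
    lfp = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
    assert band_power(lfp, fs, BANDS["alpha"]) > band_power(lfp, fs, BANDS["gamma"])
    ```

    Tracking these band powers over time, as the study does, would amount to applying the same estimate in a sliding window.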

  4. [Ventriloquism and audio-visual integration of voice and face].

    PubMed

    Yokosawa, Kazuhiko; Kanaya, Shoko

    2012-07-01

    Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. This auditory localization bias, called the ventriloquism effect, has been used to reveal factors affecting the perceptual integration of audio-visual stimuli. However, many studies on audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. Such results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study we report the contribution of a cognitive factor, the audio-visual congruency of speech, which has often been underestimated in previous ventriloquism research. We investigated the contribution of speech congruency to the ventriloquism effect using a spoken utterance and two videos of a talking face, also manipulating the salience of facial movements. When bilateral visual stimuli were presented in synchrony with a single voice, cross-modal speech congruency had a significant impact on the ventriloquism effect, and more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference with auditory localization. This suggests that greater flexibility in responding to multi-sensory environments exists than has previously been considered.

  5. Auditory emotional cues enhance visual perception.

    PubMed

    Zeelenberg, René; Bocanegra, Bruno R

    2010-04-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually we replicated the emotion-induced impairment found in other studies. Our results suggest emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.

  6. Supporting Visual Literacy in the School Library Media Center: Developmental, Socio-Cultural, and Experiential Considerations and Scenarios

    ERIC Educational Resources Information Center

    Cooper, Linda Z.

    2008-01-01

    Children are natural visual learners--they have been absorbing information visually since birth. They welcome opportunities to learn via images as well as to generate visual information themselves, and these opportunities present themselves every day. The importance of visual literacy can be conveyed through conversations and the teachable moment,…

  7. Brain processing of visual information during fast eye movements maintains motor performance.

    PubMed

    Panouillères, Muriel; Gaveau, Valérie; Socasau, Camille; Urquizar, Christian; Pélisson, Denis

    2013-01-01

    Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution--by shifting a visual target at saccade onset and blanking it at saccade offset--induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation.

  8. Brief Report: Vision in Children with Autism Spectrum Disorder: What Should Clinicians Expect?

    ERIC Educational Resources Information Center

    Anketell, Pamela M.; Saunders, Kathryn J.; Gallagher, Stephen M.; Bailey, Clare; Little, Julie-Anne

    2015-01-01

    Anomalous visual processing has been described in individuals with autism spectrum disorder (ASD) but relatively few studies have profiled visual acuity (VA) in this population. The present study describes presenting VA in children with ASD (n = 113) compared to typically developing controls (n = 206) and best corrected visual acuity (BCVA) in a…

  9. Effect of Visual Field Presentation on Action Planning (Estimating Reach) in Children

    ERIC Educational Resources Information Center

    Gabbard, Carl; Cordova, Alberto

    2012-01-01

    In this article, the authors examined the effects of target information presented in different visual fields (lower, upper, central) on estimates of reach via use of motor imagery in children (5-11 years old) and young adults. Results indicated an advantage for estimating reach movements for targets placed in lower visual field (LoVF), with all…

  10. Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis

    ERIC Educational Resources Information Center

    Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.

    2010-01-01

    We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…

  11. Evaluation of a visual risk communication tool: effects on knowledge and perception of blood transfusion risk.

    PubMed

    Lee, D H; Mehta, M D

    2003-06-01

    Effective risk communication in transfusion medicine is important for health-care consumers, but understanding the numerical magnitude of risks can be difficult. The objective of this study was to determine the effect of a visual risk communication tool on the knowledge and perception of transfusion risk. Laypeople were randomly assigned to receive transfusion risk information with either a written or a visual presentation format for communicating and comparing the probabilities of transfusion risks relative to other hazards. Knowledge of transfusion risk was ascertained with a multiple-choice quiz and risk perception was ascertained by psychometric scaling and principal components analysis. Two-hundred subjects were recruited and randomly assigned. Risk communication with both written and visual presentation formats increased knowledge of transfusion risk and decreased the perceived dread and severity of transfusion risk. Neither format changed the perceived knowledge and control of transfusion risk, nor the perceived benefit of transfusion. No differences in knowledge or risk perception outcomes were detected between the groups randomly assigned to written or visual presentation formats. Risk communication that incorporates risk comparisons in either written or visual presentation formats can improve knowledge and reduce the perception of transfusion risk in laypeople.

  12. Effects of visual span on reading speed and parafoveal processing in eye movements during sentence reading.

    PubMed

    Risse, Sarah

    2014-07-15

    The visual span (or "uncrowded window"), which limits the sensory information on each fixation, has been shown to determine reading speed in tasks involving rapid serial visual presentation of single words. The present study investigated whether this is also true for fixation durations during sentence reading when all words are presented at the same time and parafoveal preview of words prior to fixation typically reduces later word-recognition times. If so, a larger visual span may allow more efficient parafoveal processing and thus faster reading. In order to test this hypothesis, visual span profiles (VSPs) were collected from 60 participants and related to data from an eye-tracking reading experiment. The results confirmed a positive relationship between the readers' VSPs and fixation-based reading speed. However, this relationship was not determined by parafoveal processing. There was no evidence that individual differences in VSPs predicted differences in parafoveal preview benefit. Nevertheless, preview benefit correlated with reading speed, suggesting an independent effect on oculomotor control during reading. In summary, the present results indicate a more complex relationship between the visual span, parafoveal processing, and reading speed than initially assumed. © 2014 ARVO.

  13. Light Video Game Play is Associated with Enhanced Visual Processing of Rapid Serial Visual Presentation Targets.

    PubMed

    Howard, Christina J; Wilding, Robert; Guest, Duncan

    2017-02-01

    There is mixed evidence that video game players (VGPs) may demonstrate better performance in perceptual and attentional tasks than non-VGPs (NVGPs). The rapid serial visual presentation task is one such case, where observers respond to two successive targets embedded within a stream of serially presented items. We tested light VGPs (LVGPs) and NVGPs on this task. LVGPs were better at correct identification of second targets whether or not they were also attempting to respond to the first target. This performance benefit seen for LVGPs suggests enhanced visual processing for briefly presented stimuli even with only very moderate game play. Observers were less accurate at discriminating the orientation of a second target within the stream if it occurred shortly after presentation of the first target, that is to say, they were subject to the attentional blink (AB). We find no evidence for any reduction in AB in LVGPs compared with NVGPs.

  14. Interactive Visualization of Complex Seismic Data and Models Using Bokeh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Chengping; Ammon, Charles J.; Maceira, Monica

    Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using an open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach has a minimum requirement on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide us an effective and efficient approach to explore large data sets and models.
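
    The browser-based workflow this abstract describes can be sketched with a minimal, hypothetical Bokeh example. The dispersion values and the title below are invented placeholders, not data from the paper; only the public Bokeh calls (`figure`, `line`, `file_html`) are assumed.

    ```python
    # Minimal sketch of an interactive plot with the open-source Bokeh package,
    # loosely following the surface-wave dispersion example mentioned above.
    # All data values here are hypothetical placeholders.
    from bokeh.plotting import figure
    from bokeh.embed import file_html
    from bokeh.resources import CDN

    periods = [10, 20, 30, 40, 50]            # hypothetical periods (s)
    velocities = [3.1, 3.4, 3.6, 3.7, 3.8]    # hypothetical group velocities (km/s)

    p = figure(title="Surface-wave dispersion (hypothetical data)",
               x_axis_label="Period (s)",
               y_axis_label="Group velocity (km/s)",
               tools="pan,wheel_zoom,box_zoom,reset,hover")
    p.line(periods, velocities, line_width=2)

    # Render the plot as a standalone HTML page viewable in any modern browser.
    html = file_html(p, CDN, "Dispersion")
    ```

    Saving `html` to disk and opening it in a browser gives the pan/zoom interactivity the abstract describes; no server is required for a standalone page.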

  15. View-Dependent Streamline Deformation and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tong, Xin; Edwards, John; Chen, Chun-Ming

    Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few streamlines risks losing important features. We propose a new streamline exploration approach by visually manipulating the cluttered streamlines by pulling visible layers apart and revealing the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual cluttering for visualizing 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By using a lens metaphor of different shapes to select the transition zone of the targeted area interactively, the users can move their focus and examine the vector or tensor field freely.

  16. Impulse processing: A dynamical systems model of incremental eye movements in the visual world paradigm

    PubMed Central

    Kukona, Anuenue; Tabor, Whitney

    2011-01-01

    The visual world paradigm presents listeners with a challenging problem: they must integrate two disparate signals, the spoken language and the visual context, in support of action (e.g., complex movements of the eyes across a scene). We present Impulse Processing, a dynamical systems approach to incremental eye movements in the visual world that suggests a framework for integrating language, vision, and action generally. Our approach assumes that impulses driven by the language and the visual context impinge minutely on a dynamical landscape of attractors corresponding to the potential eye-movement behaviors of the system. We test three unique predictions of our approach in an empirical study in the visual world paradigm, and describe an implementation in an artificial neural network. We discuss the Impulse Processing framework in relation to other models of the visual world paradigm. PMID:21609355

  17. Evaluation of Visualization Software

    NASA Technical Reports Server (NTRS)

    Globus, Al; Uselton, Sam

    1995-01-01

    Visualization software is widely used in scientific and engineering research. But computed visualizations can be very misleading, and the errors are easy to miss. We feel that the software producing the visualizations must be thoroughly evaluated and the evaluation process as well as the results must be made available. Testing and evaluation of visualization software is not a trivial problem. Several methods used in testing other software are helpful, but these methods are (apparently) often not used. When they are used, the description and results are generally not available to the end user. Additional evaluation methods specific to visualization must also be developed. We present several useful approaches to evaluation, ranging from numerical analysis of mathematical portions of algorithms to measurement of human performance while using visualization systems. Along with this brief survey, we present arguments for the importance of evaluations and discussions of appropriate use of some methods.

  18. Interactive Visualization of Complex Seismic Data and Models Using Bokeh

    DOE PAGES

    Chai, Chengping; Ammon, Charles J.; Maceira, Monica; ...

    2018-02-14

    Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using an open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach has a minimum requirement on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide us an effective and efficient approach to explore large data sets and models.

  19. Helmet-mounted display systems for flight simulation

    NASA Technical Reports Server (NTRS)

    Haworth, Loren A.; Bucher, Nancy M.

    1989-01-01

    Simulation scientists are continually improving simulation technology with the goal of more closely replicating the physical environment of the real world. The presentation or display of visual information is one area in which recent technical improvements have been made that are fundamental to conducting simulated operations close to the terrain. Detailed and appropriate visual information is especially critical for nap-of-the-earth helicopter flight simulation where the pilot maintains an 'eyes-out' orientation to avoid obstructions and terrain. This paper describes visually coupled wide field of view helmet-mounted display (WFOVHMD) system technology as a viable visual presentation system for helicopter simulation. Tradeoffs associated with this mode of presentation as well as research and training applications are discussed.

  20. The ophthalmic natural history of paediatric craniopharyngioma: a long-term review.

    PubMed

    Drimtzias, Evangelos; Falzon, Kevin; Picton, Susan; Jeeva, Irfan; Guy, Danielle; Nelson, Olwyn; Simmons, Ian

    2014-12-01

    We present our experience over the long term of monitoring visual function in children with craniopharyngioma. Our study involves an analysis of all paediatric patients with craniopharyngioma younger than 16 at the time of diagnosis and represents a series of predominantly sub-totally resected tumours. Visual data of multiple modalities were collected from the paediatric patients. Twenty patients were surveyed. Poor prognostic indicators of the visual outcome and rate of recurrence were assessed. Severe visual loss and papilledema at the time of diagnosis were more common in children under the age of 6. In our study visual signs, tumour calcification and optic disc atrophy at presentation are predictors of poor visual outcome, with the first two applying only in children younger than 6. In contrast with previous reports, preoperative visual field (VF) defects and type of surgery were not documented as prognostic indicators of poor postoperative visual acuity (VA) and VF. Contrary to previous reports, calcification at diagnosis, type of surgery and preoperative VF defects were not found to be associated with tumour recurrence. Local recurrence is common. Younger age at presentation is associated with a tendency to recur. Magnetic resonance imaging (MRI) remains the recommended means of follow-up in patients with craniopharyngioma.

  1. Is sensorimotor BCI performance influenced differently by mono, stereo, or 3-D auditory feedback?

    PubMed

    McCreadie, Karl A; Coyle, Damien H; Prasad, Girijesh

    2014-05-01

    Imagination of movement can be used as a control method for a brain-computer interface (BCI) allowing communication for the physically impaired. Visual feedback within such a closed loop system excludes those with visual problems and hence there is a need for alternative sensory feedback pathways. In the context of substituting the visual channel for the auditory channel, this study aims to add to the limited evidence that it is possible to substitute visual feedback for its auditory equivalent and assess the impact this has on BCI performance. Secondly, the study aims to determine for the first time if the type of auditory feedback method influences motor imagery performance significantly. Auditory feedback is presented using a stepped approach of single (mono), double (stereo), and multiple (vector base amplitude panning as an audio game) loudspeaker arrangements. Visual feedback involves a ball-basket paradigm and a spaceship game. Each session consists of either auditory or visual feedback only with runs of each type of feedback presentation method applied in each session. Results from seven subjects across five sessions of each feedback type (visual, auditory) (10 sessions in total) show that auditory feedback is a suitable substitute for the visual equivalent and that there are no statistical differences in the type of auditory feedback presented across five sessions.

  2. Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy

    PubMed Central

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life. PMID:24586651

  3. Visual speech perception in foveal and extrafoveal vision: further implications for divisions in hemispheric projections.

    PubMed

    Jordan, Timothy R; Sheen, Mercedes; Abedipour, Lily; Paterson, Kevin B

    2014-01-01

    When observing a talking face, it has often been argued that visual speech to the left and right of fixation may produce differences in performance due to divided projections to the two cerebral hemispheres. However, while it seems likely that such a division in hemispheric projections exists for areas away from fixation, the nature and existence of a functional division in visual speech perception at the foveal midline remains to be determined. We investigated this issue by presenting visual speech in matched hemiface displays to the left and right of a central fixation point, either exactly abutting the foveal midline or else located away from the midline in extrafoveal vision. The location of displays relative to the foveal midline was controlled precisely using an automated, gaze-contingent eye-tracking procedure. Visual speech perception showed a clear right hemifield advantage when presented in extrafoveal locations but no hemifield advantage (left or right) when presented abutting the foveal midline. Thus, while visual speech observed in extrafoveal vision appears to benefit from unilateral projections to left-hemisphere processes, no evidence was obtained to indicate that a functional division exists when visual speech is observed around the point of fixation. Implications of these findings for understanding visual speech perception and the nature of functional divisions in hemispheric projection are discussed.

  4. Serial and semantic encoding of lists of words in schizophrenia patients with visual hallucinations.

    PubMed

    Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S

    2011-03-30

    Previous research has suggested that visual hallucinations in schizophrenia are associated with abnormal salience of visual mental images. Since visual imagery is used as a mnemonic strategy to learn lists of words, increased visual imagery might impede the other commonly used strategies of serial and semantic encoding. We had previously published data on the serial and semantic strategies implemented by patients when learning lists of concrete words with different levels of semantic organisation (Brébion et al., 2004). In this paper we present a re-analysis of these data, aiming at investigating the associations between learning strategies and visual hallucinations. Results show that the patients with visual hallucinations presented less serial clustering in the non-organisable list than the other patients. In the semantically organisable list with typical instances, they presented both less serial and less semantic clustering than the other patients. Thus, patients with visual hallucinations demonstrate reduced use of serial and semantic encoding in the lists made up of fairly familiar concrete words, which enable the formation of mental images. Although these results are preliminary, we propose that this different processing of the lists stems from the abnormal salience of the mental images such patients experience from the word stimuli. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  5. Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear

    PubMed Central

    Clevis, Krien; Hagoort, Peter

    2011-01-01

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information. PMID:20530540

  6. Invisible Mars: New Visuals for Communicating MAVEN's Story

    NASA Astrophysics Data System (ADS)

    Shupla, C. B.; Ali, N. A.; Jones, A. P.; Mason, T.; Schneider, N. M.; Brain, D. A.; Blackwell, J.

    2016-12-01

    Invisible Mars tells the story of Mars' evolving atmosphere, through a script and a series of visuals as a live presentation. Created for Science-On-A-Sphere, the presentation has also been made available to planetariums, and is being expanded to other platforms. The script has been updated to include results from the Mars Atmosphere and Volatile Evolution Mission (MAVEN), and additional visuals have been produced. This poster will share the current Invisible Mars resources available and the plans to further disseminate this presentation.

  7. Salient sounds activate human visual cortex automatically

    PubMed Central

    McDonald, John J.; Störmer, Viola S.; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A.

    2013-01-01

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, the present study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2, 3, and 4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of co-localized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task. PMID:23699530

  8. Visualization of regulations to support design and quality control--a long-term study.

    PubMed

    Blomé, Mikael

    2012-01-01

    The aim of the study was to visualize design regulations of furniture by means of interactive technology based on earlier studies and practical examples. The usage of the visualized regulations was evaluated on two occasions: at the start, when the first set of regulations was presented, and after six years of usage of all regulations. The visualized regulations were the result of a design process involving experts and potential users in collaboration with IKEA of Sweden AB. The evaluations by the different users showed a very positive response to using visualized regulations. The participative approach, combining expertise in specific regulations with visualization of guidelines, resulted in clear presentations of important regulations and positive attitudes among the users. These kinds of visualizations have proved to be applicable in a variety of product areas at IKEA, with a potential for further dissemination. It is likely that the approaches to design and visualized regulations in this case study could function in other branches.

  9. Fragile visual short-term memory is an object-based and location-specific store.

    PubMed

    Pinto, Yaïr; Sligte, Ilja G; Shapiro, Kimron L; Lamme, Victor A F

    2013-08-01

    Fragile visual short-term memory (FM) is a recently discovered form of visual short-term memory. Evidence suggests that it provides rich and high-capacity storage, like iconic memory, yet it exists, without interference, almost as long as visual working memory. In the present study, we sought to unveil the functional underpinnings of this memory storage. We found that FM is only completely erased when the new visual scene appears at the same location and consists of the same objects as the to-be-recalled information. This result has two important implications: First, it shows that FM is an object- and location-specific store, and second, it suggests that FM might be used in everyday life when the presentation of visual information is appropriately designed.

  10. The schemes and methods for producing of the visual security features used in the color hologram stereography

    NASA Astrophysics Data System (ADS)

    Lushnikov, D. S.; Zherdev, A. Y.; Odinokov, S. B.; Markin, V. V.; Smirnov, A. V.

    2017-05-01

    This article describes the visual security elements used in color holographic stereograms - three-dimensional color security holograms - and methods for their production. These visual security elements include color micro text, color-hidden images, and horizontal and vertical flip-flop effects produced by changes in color and image. The article also presents variants of optical systems that allow the visual security elements to be recorded as part of the holographic stereograms. Methods for solving the optical problems that arise in recording the visual security elements are presented. Features of the perception of visual security elements, relevant to verifying security holograms by means of these elements, are also noted. The work was partially funded under the Agreement with the RF Ministry of Education and Science № 14.577.21.0197, grant RFMEFI57715X0197.

  11. Effects of visual attention on chromatic and achromatic detection sensitivities.

    PubMed

    Uchikawa, Keiji; Sato, Masayuki; Kuwamura, Keiko

    2014-05-01

    Visual attention has a significant effect on various visual functions, such as response time, detection and discrimination sensitivity, and color appearance. It has been suggested that visual attention may affect visual functions in the early visual pathways. In this study we examined selective effects of visual attention on sensitivities of the chromatic and achromatic pathways to clarify whether visual attention modifies responses in the early visual system. We used a dual task paradigm in which the observer detected a peripheral test stimulus presented at 4 deg eccentricity while concurrently carrying out an attention task in the central visual field. In experiment 1, it was confirmed that peripheral spectral sensitivities were reduced more for short and long wavelengths than for middle wavelengths with the central attention task, so that visual attention changed the shape of the spectral sensitivity function. This indicated that visual attention affected the chromatic response more strongly than the achromatic response. In experiment 2 we found that detection thresholds increased to a greater degree in the red-green and yellow-blue chromatic directions than in the white-black achromatic direction in the dual task condition. In experiment 3 we showed that the peripheral threshold elevations depended on the combination of color directions of the central and peripheral stimuli. Since the chromatic and achromatic responses were separately processed in the early visual pathways, the present results provided additional evidence that visual attention affects responses in the early visual pathways.

  12. Default Mode Network (DMN) Deactivation during Odor-Visual Association

    PubMed Central

    Karunanayaka, Prasanna R.; Wilson, Donald A.; Tobia, Michael J.; Martinez, Brittany; Meadowcroft, Mark; Eslinger, Paul J.; Yang, Qing X.

    2017-01-01

    Default mode network (DMN) deactivation has been shown to be functionally relevant for goal-directed cognition. In this study, we investigated the DMN’s role during olfactory processing using two complementary functional magnetic resonance imaging (fMRI) paradigms with identical timing, visual-cue stimulation and response monitoring protocols. Twenty-nine healthy, non-smoking, right-handed adults (mean age = 26±4 yrs., 16 females) completed an odor-visual association fMRI paradigm that had two alternating odor+visual and visual-only trial conditions. During odor+visual trials, a visual cue was presented simultaneously with an odor, while during visual-only trial conditions the same visual cue was presented alone. Eighteen of the 29 participants (mean age = 27.0 ± 6.0 yrs., 11 females) also took part in a control no-odor fMRI paradigm that consisted of visual-only trial conditions which were identical to the visual-only trials in the odor-visual association paradigm. We used Independent Component Analysis (ICA), extended unified structural equation modeling (euSEM), and psychophysiological interaction (PPI) to investigate the interplay between the DMN and olfactory network. In the odor-visual association paradigm, DMN deactivation was evoked by both the odor+visual and visual-only trial conditions. In contrast, the visual-only trials in the no-odor paradigm did not evoke consistent DMN deactivation. In the odor-visual association paradigm, the euSEM and PPI analyses identified a directed connectivity between the DMN and olfactory network which was significantly different between odor+visual and visual-only trial conditions. The results support a strong interaction between the DMN and olfactory network and highlight the DMN’s role in task-evoked brain activity and behavioral responses during olfactory processing. PMID:27785847

  13. Contingent capture of involuntary visual spatial attention does not differ between normally hearing children and proficient cochlear implant users.

    PubMed

    Kamke, Marc R; Van Luyn, Jeanette; Constantinescu, Gabriella; Harris, Jill

    2014-01-01

    Evidence suggests that deafness-induced changes in visual perception, cognition and attention may compensate for a hearing loss. Such alterations, however, may also negatively influence adaptation to a cochlear implant. This study investigated whether involuntary attentional capture by salient visual stimuli is altered in children who use a cochlear implant. Thirteen experienced implant users (aged 8-16 years) and age-matched normally hearing children were presented with a rapid sequence of simultaneous visual and auditory events. Participants were tasked with detecting numbers presented in a specified color and identifying a change in the tonal frequency whilst ignoring irrelevant visual distractors. Compared to visual distractors that did not possess the target-defining characteristic, target-colored distractors were associated with a decrement in visual performance (response time and accuracy), demonstrating a contingent capture of involuntary attention. Visual distractors did not, however, impair auditory task performance. Importantly, detection performance for the visual and auditory targets did not differ between the groups. These results suggest that proficient cochlear implant users demonstrate normal capture of visuospatial attention by stimuli that match top-down control settings.

  14. Sensitivity to timing and order in human visual cortex.

    PubMed

    Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2015-03-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.

  15. Does Seeing Ice Really Feel Cold? Visual-Thermal Interaction under an Illusory Body-Ownership

    PubMed Central

    Kanaya, Shoko; Matsushima, Yuka; Yokosawa, Kazuhiko

    2012-01-01

    Although visual information seems to affect thermal perception (e.g. red color is associated with heat), previous studies have failed to demonstrate the interaction between visual and thermal senses. However, it has been reported that humans feel an illusory thermal sensation in conjunction with an apparently-thermal visual stimulus placed on a prosthetic hand in the rubber hand illusion (RHI), wherein an individual feels that a prosthetic (rubber) hand belongs to him/her. This study tests the possibility that ownership of the body surface on which a visual stimulus is placed enhances the likelihood of a visual-thermal interaction. We orthogonally manipulated three variables: induced hand-ownership, visually-presented thermal information, and tactually-presented physical thermal information. Results indicated that the sight of an apparently-thermal object on a rubber hand that is illusorily perceived as one's own hand affects thermal judgments about the object physically touching this hand. This effect was not observed without the RHI. The importance of ownership of the touched body part for the visual-thermal interaction is discussed. PMID:23144814

  16. Does seeing ice really feel cold? Visual-thermal interaction under an illusory body-ownership.

    PubMed

    Kanaya, Shoko; Matsushima, Yuka; Yokosawa, Kazuhiko

    2012-01-01

    Although visual information seems to affect thermal perception (e.g. red color is associated with heat), previous studies have failed to demonstrate the interaction between visual and thermal senses. However, it has been reported that humans feel an illusory thermal sensation in conjunction with an apparently-thermal visual stimulus placed on a prosthetic hand in the rubber hand illusion (RHI), wherein an individual feels that a prosthetic (rubber) hand belongs to him/her. This study tests the possibility that ownership of the body surface on which a visual stimulus is placed enhances the likelihood of a visual-thermal interaction. We orthogonally manipulated three variables: induced hand-ownership, visually-presented thermal information, and tactually-presented physical thermal information. Results indicated that the sight of an apparently-thermal object on a rubber hand that is illusorily perceived as one's own hand affects thermal judgments about the object physically touching this hand. This effect was not observed without the RHI. The importance of ownership of the touched body part for the visual-thermal interaction is discussed.

  17. Mobile device geo-localization and object visualization in sensor networks

    NASA Astrophysics Data System (ADS)

    Lemaire, Simon; Bodensteiner, Christoph; Arens, Michael

    2014-10-01

    In this paper we present a method to visualize geo-referenced objects on modern smartphones using a multi-functional application design. The application applies different localization and visualization methods including the smartphone camera image. The presented application copes well with different scenarios. A generic application work flow and augmented reality visualization techniques are described. The feasibility of the approach is experimentally validated using an online desktop selection application in a network with a modern off-the-shelf smartphone. Applications are widespread and include, for instance, crisis and disaster management and military applications.

  18. Integration of visual and motion cues for simulator requirements and ride quality investigation. [computerized simulation of aircraft landing, visual perception of aircraft pilots

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1975-01-01

    Preliminary tests and evaluation are presented of pilot performance during landing (flight paths) using computer generated images (video tapes). Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human response to gravity variations without visual cues, acceleration stimuli effects on the semicircular canals, neurons affecting eye movements, and vestibular tests.

  19. Purtscher's retinopathy associated with acute pancreatitis.

    PubMed

    Hamp, Ania M; Chu, Edward; Slagle, William S; Hamp, Robert C; Joy, Jeffrey T; Morris, Robert W

    2014-02-01

    Purtscher's retinopathy is a rare condition that is associated with complement-activating systemic diseases such as acute pancreatitis. After pancreatic injury or inflammation, proteases such as trypsin activate the complement system and can potentially cause coagulation and leukoembolization of retinal precapillary arterioles. Specifically, intermediate-sized emboli are small enough to pass through larger arteries yet large enough to remain lodged in precapillary arterioles and cause the clinical appearance of Purtscher's retinopathy. This pathology may present with optic nerve edema, impaired visual acuity, visual field loss, as well as retinal findings such as cotton-wool spots, retinal hemorrhage, artery attenuation, venous dilation, and Purtscher flecken. A 57-year-old white man presented with an acute onset of visual field scotomas and decreased visual acuity 1 week after being hospitalized for acute pancreatitis. The retinal examination revealed multiple regions of discrete retinal whitening surrounding the disk, extending through the macula bilaterally, as well as bilateral optic nerve hemorrhages. The patient identified paracentral bilateral visual field defects on Amsler Grid testing, which was confirmed with subsequent Humphrey visual field analysis. Although the patient presented with an atypical underlying etiology, he exhibited classic retinal findings for Purtscher's retinopathy. After 2 months, best corrected visual acuity improved and the retinal whitening was nearly resolved; however, bilateral paracentral visual field defects remained. Purtscher's retinopathy has a distinctive clinical presentation and is typically associated with thoracic trauma but may be a sequela of nontraumatic systemic disease such as acute pancreatitis. Patients diagnosed with acute pancreatitis should have an eye examination to rule out Purtscher's retinopathy. Although visual improvement is possible, patients should be educated that there may be permanent ocular sequelae.

  20. Visual Working Memory Is Independent of the Cortical Spacing Between Memoranda.

    PubMed

    Harrison, William J; Bays, Paul M

    2018-03-21

    The sensory recruitment hypothesis states that visual short-term memory is maintained in the same visual cortical areas that initially encode a stimulus' features. Although it is well established that the distance between features in visual cortex determines their visibility, a limitation known as crowding, it is unknown whether short-term memory is similarly constrained by the cortical spacing of memory items. Here, we investigated whether the cortical spacing between sequentially presented memoranda affects the fidelity of memory in humans (of both sexes). In a first experiment, we varied cortical spacing by taking advantage of the log-scaling of visual cortex with eccentricity, presenting memoranda in peripheral vision sequentially along either the radial or tangential visual axis with respect to the fovea. In a second experiment, we presented memoranda sequentially either within or beyond the critical spacing of visual crowding, a distance within which visual features cannot be perceptually distinguished due to their nearby cortical representations. In both experiments and across multiple measures, we found strong evidence that the ability to maintain visual features in memory is unaffected by cortical spacing. These results indicate that the neural architecture underpinning working memory has properties inconsistent with the known behavior of sensory neurons in visual cortex. Instead, the dissociation between perceptual and memory representations supports a role of higher cortical areas such as posterior parietal or prefrontal regions or may involve an as yet unspecified mechanism in visual cortex in which stimulus features are bound to their temporal order. SIGNIFICANCE STATEMENT Although much is known about the resolution with which we can remember visual objects, the cortical representation of items held in short-term memory remains contentious. A popular hypothesis suggests that memory of visual features is maintained via the recruitment of the same neural architecture in sensory cortex that encodes stimuli. We investigated this claim by manipulating the spacing in visual cortex between sequentially presented memoranda such that some items shared cortical representations more than others while preventing perceptual interference between stimuli. We found clear evidence that short-term memory is independent of the intracortical spacing of memoranda, revealing a dissociation between perceptual and memory representations. Our data indicate that working memory relies on different neural mechanisms from sensory perception. Copyright © 2018 Harrison and Bays.
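    The "log-scaling of visual cortex with eccentricity" that the first experiment exploits can be made concrete with a standard cortical magnification estimate. The sketch below uses the Horton-Hoyt approximation for human V1, M(E) = a/(E + e2) mm/deg with textbook parameter values (a = 17.3 mm, e2 = 0.75 deg); the formula and numbers are general-purpose estimates, not values taken from this study.

    ```python
    import math

    def v1_distance_mm(ecc_deg, a=17.3, e2=0.75):
        """Approximate cortical distance (mm) from the foveal representation
        in human V1, obtained by integrating the Horton-Hoyt magnification
        M(E) = a / (E + e2) mm/deg from 0 to ecc_deg."""
        return a * math.log(1.0 + ecc_deg / e2)

    # A 4-deg step in the periphery (4 deg -> 8 deg along the radial axis)
    # spans far less cortex than the same 4-deg step starting at the fovea.
    radial_sep = v1_distance_mm(8) - v1_distance_mm(4)
    foveal_sep = v1_distance_mm(4) - v1_distance_mm(0)

    print(f"4->8 deg separation: {radial_sep:.1f} mm")
    print(f"0->4 deg separation: {foveal_sep:.1f} mm")
    ```

    This compression with eccentricity is why radially arranged peripheral memoranda sit closer together in cortex than the same angular separation placed nearer the fovea, which is the manipulation the experiment relies on.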

  1. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    PubMed

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent research topic in computer vision and a next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by introducing an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
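    The "adaptive execution module" described in this abstract switches between full visual-inertial odometry and a cheaper optical-flow front end. The paper's actual selection policy is not reproduced here; the sketch below is only a generic illustration of such a switch, and the motion metric, field names, and thresholds are all hypothetical.

    ```python
    from dataclasses import dataclass

    @dataclass
    class FrameStats:
        mean_flow_px: float   # average optical-flow magnitude (illustrative metric)
        tracked_ratio: float  # fraction of features still tracked from last frame

    def choose_tracker(stats, flow_thresh=12.0, track_thresh=0.6):
        """Pick the cheap optical-flow tracker when motion is gentle and
        feature tracking is healthy; fall back to full visual-inertial
        odometry otherwise. Thresholds are illustrative, not from the paper."""
        if stats.mean_flow_px < flow_thresh and stats.tracked_ratio > track_thresh:
            return "optical_flow_vo"           # fast pose update
        return "visual_inertial_odometry"      # robust but more expensive

    print(choose_tracker(FrameStats(mean_flow_px=5.0, tracked_ratio=0.9)))
    print(choose_tracker(FrameStats(mean_flow_px=30.0, tracked_ratio=0.4)))
    ```

    The design intuition is that the expensive estimator is only paid for when fast motion or tracking loss makes the cheap one unreliable, which is what yields the reported tracking-time reductions.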

  2. A sLORETA study for gaze-independent BCI speller.

    PubMed

    Xingwei An; Jinwen Wei; Shuang Liu; Dong Ming

    2017-07-01

    EEG-based BCI (brain-computer-interface) spellers, especially gaze-independent BCI spellers, have become a hot topic in recent years. They provide a direct spelling device, via a non-muscular channel, for people with severe motor impairments and limited gaze movement. The brain must sustain both stimulus-driven and stimulus-related attention in the rapidly presented paradigms used by such BCI speller applications. Few researchers have studied the mechanism of the brain's response to such rapidly presented BCI applications. In this study, we compared the distribution of brain activation in visual, auditory, and audio-visual combined stimulus paradigms using sLORETA (standardized low-resolution brain electromagnetic tomography). Between-group comparisons showed the importance of visual and auditory stimuli in the audio-visual combined paradigm. Both contribute to the activation of brain regions, with visual stimuli being the predominant ones. Visual-stimulus-related brain regions were mainly located in the parietal and occipital lobes, whereas responses in frontal-temporal lobes might be caused by auditory stimuli. These regions played an important role in audio-visual bimodal paradigms. These new findings are important for future studies of ERP spellers as well as of the mechanism of rapidly presented stimuli.

  3. Beyond Ball-and-Stick: Students' Processing of Novel STEM Visualizations

    ERIC Educational Resources Information Center

    Hinze, Scott R.; Rapp, David N.; Williamson, Vickie M.; Shultz, Mary Jane; Deslongchamps, Ghislain; Williamson, Kenneth C.

    2013-01-01

    Students are frequently presented with novel visualizations introducing scientific concepts and processes normally unobservable to the naked eye. Despite being unfamiliar, students are expected to understand and employ the visualizations to solve problems. Domain experts exhibit more competency than novices when using complex visualizations, but…

  4. The effect of visual and verbal modes of presentation on children's retention of images and words

    NASA Astrophysics Data System (ADS)

    Vasu, Ellen Storey; Howe, Ann C.

    This study tested the hypothesis that the use of two modes of presenting information to children has an additive memory effect for the retention of both images and words. Subjects were 22 first-grade and 22 fourth-grade children randomly assigned to visual and visual-verbal treatment groups. The visual-verbal group heard a description while observing an object; the visual group observed the same object but did not hear a description. Children were tested individually immediately after presentation of stimuli and two weeks later. They were asked to represent the information recalled through a drawing and an oral verbal description. In general, results supported the hypothesis and indicated, in addition, that children represent more information in iconic (pictorial) form than in symbolic (verbal) form. Strategies for using these results to enhance science learning at the elementary school level are discussed.

  5. Computationally Efficient Clustering of Audio-Visual Meeting Data

    NASA Astrophysics Data System (ADS)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this chapter can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.
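    To make the "who spoke when" problem concrete: when each participant has a dedicated microphone, a crude baseline assigns each time window to the loudest channel above a noise floor. This is a deliberately naive sketch for illustration only, not the chapter's diarization algorithm, and the energy values and threshold are invented.

    ```python
    import numpy as np

    def naive_diarize(energies, noise_floor=0.1):
        """energies: (n_windows, n_speakers) per-channel frame energy.
        Returns a speaker index per window, or -1 for silence.
        A crude loudest-channel baseline, not the chapter's method."""
        energies = np.asarray(energies)
        winner = energies.argmax(axis=1)          # loudest channel per window
        silent = energies.max(axis=1) < noise_floor
        winner[silent] = -1                       # no one speaking
        return winner

    # Three windows, two mics: speaker 1 talks, then speaker 0, then silence
    labels = naive_diarize([[0.2, 0.9], [0.8, 0.3], [0.02, 0.05]])
    print(labels.tolist())  # -> [1, 0, -1]
    ```

    Real diarization must additionally handle crosstalk between microphones, overlapping speech, and channel gain differences, which is where the chapter's efficient algorithms come in.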

  6. Right hemispheric dominance and interhemispheric cooperation in gaze-triggered reflexive shift of attention.

    PubMed

    Okada, Takashi; Sato, Wataru; Kubota, Yasutaka; Toichi, Motomi; Murai, Toshiya

    2012-03-01

    The neural substrate for the processing of gaze remains unknown. The aim of the present study was to clarify which hemisphere dominantly processes gaze, and whether the two hemispheres cooperate with each other, in gaze-triggered reflexive shifts of attention. Twenty-eight normal subjects were tested. Non-predictive gaze cues were presented either in unilateral or bilateral visual fields, and subjects localized the target as quickly as possible. Reaction times (RT) were shorter when gaze cues were directed toward rather than away from targets, whichever visual field they were presented in. RT were shorter for left than right visual field presentations. RT in mono-directional bilateral presentations were shorter than in both left-only and right-only presentations. When bi-directional bilateral cues were presented, RT were faster when valid cues appeared in the left than in the right visual field. The right hemisphere appears to be dominant, and there is interhemispheric cooperation, in gaze-triggered reflexive shifts of attention. © 2012 The Authors. Psychiatry and Clinical Neurosciences © 2012 Japanese Society of Psychiatry and Neurology.

  7. Infants' Visual Localization of Visual and Auditory Targets.

    ERIC Educational Resources Information Center

    Bechtold, A. Gordon; And Others

    This study is an investigation of 2-month-old infants' abilities to visually localize visual and auditory peripheral stimuli. Each subject (N=40) was presented with 50 trials; 25 of these visual and 25 auditory. The infant was placed in a semi-upright infant seat positioned 122 cm from the center speaker of an arc formed by five loudspeakers. At…

  8. Does Differential Visual Exploration Contribute to Visual Memory Impairments in 22Q11.2 Microdeletion Syndrome?

    ERIC Educational Resources Information Center

    Bostelmann, M.; Glaser, B.; Zaharia, A.; Eliez, S.; Schneider, M.

    2017-01-01

    Background: Chromosome 22q11.2 microdeletion syndrome (22q11.2DS) is a genetic syndrome characterised by a unique cognitive profile. Individuals with the syndrome present several non-verbal deficits, including visual memory impairments and atypical exploration of visual information. In this study, we seek to understand how visual attention may…

  9. Design of smart home sensor visualizations for older adults.

    PubMed

    Le, Thai; Reeder, Blaine; Chung, Jane; Thompson, Hilaire; Demiris, George

    2014-01-01

    Smart home sensor systems provide a valuable opportunity to continuously and unobtrusively monitor older adult wellness. However, the density of sensor data can be challenging to visualize, especially for an older adult consumer with distinct user needs. We describe the design of sensor visualizations informed by interviews with older adults. The goal of the visualizations is to present sensor activity data to an older adult consumer audience in a way that supports both longitudinal detection of trends and on-demand display of activity details for any chosen day. The design process is grounded through participatory design with older adult interviews during a six-month pilot sensor study. Through a secondary analysis of interviews, we identified the visualization needs of older adults. We incorporated these needs with cognitive perceptual visualization guidelines and the emotional design principles of Norman to develop sensor visualizations. We present a design of sensor visualizations that integrates both temporal and spatial components of information. The visualization supports longitudinal detection of trends while allowing the viewer to inspect activity on a specific date. Appropriately designed visualizations for older adults not only provide insight into health and wellness, but also are a valuable resource to promote engagement within care.

  10. Design of smart home sensor visualizations for older adults.

    PubMed

    Le, Thai; Reeder, Blaine; Chung, Jane; Thompson, Hilaire; Demiris, George

    2014-07-24

    Smart home sensor systems provide a valuable opportunity to continuously and unobtrusively monitor older adult wellness. However, the density of sensor data can be challenging to visualize, especially for an older adult consumer with distinct user needs. We describe the design of sensor visualizations informed by interviews with older adults. The goal of the visualizations is to present sensor activity data to an older adult consumer audience in a way that supports both longitudinal detection of trends and on-demand display of activity details for any chosen day. The design process is grounded through participatory design with older adult interviews during a six-month pilot sensor study. Through a secondary analysis of interviews, we identified the visualization needs of older adults. We incorporated these needs with cognitive perceptual visualization guidelines and the emotional design principles of Norman to develop sensor visualizations. We present a design of sensor visualizations that integrates both temporal and spatial components of information. The visualization supports longitudinal detection of trends while allowing the viewer to inspect activity on a specific date. CONCLUSIONS: Appropriately designed visualizations for older adults not only provide insight into health and wellness, but also are a valuable resource to promote engagement within care.

  11. Visual body size norms and the under‐detection of overweight and obesity

    PubMed Central

    Robinson, E.

    2017-01-01

    Summary Objectives The weight status of men with overweight and obesity tends to be visually underestimated, but visual recognition of female overweight and obesity has not been formally examined. The aims of the present studies were to test whether people can accurately recognize both male and female overweight and obesity and to examine a visual norm‐based explanation for why weight status is underestimated. Methods The present studies examine whether both male and female overweight and obesity are visually underestimated (Study 1), whether body size norms predict when underestimation of weight status occurs (Study 2) and whether visual exposure to heavier body weights adjusts visual body size norms and results in underestimation of weight status (Study 3). Results The weight status of men and women with overweight and obesity was consistently visually underestimated (Study 1). Body size norms predicted underestimation of weight status (Study 2) and in part explained why visual exposure to heavier body weights caused underestimation of overweight (Study 3). Conclusions The under‐detection of overweight and obesity may have been in part caused by exposure to larger body sizes resulting in an upwards shift in the range of body sizes that are perceived as being visually ‘normal’. PMID:29479462

  12. Creating a Visualization Powerwall

    NASA Technical Reports Server (NTRS)

    Miller, B. H.; Lambert, J.; Zamora, K.

    1996-01-01

    From Introduction: This paper presents the issues involved in constructing a Visualization Powerwall. For each hardware component, the requirements, options, and our solution are presented. This is followed by a short description of each pilot project. In the summary, current obstacles and options discovered along the way are presented.

  13. The RSVP Project: Factors Related to Disengagement From Human Immunodeficiency Virus Care Among Persons in San Francisco.

    PubMed

    Scheer, Susan; Chen, Miao-Jung; Parisi, Maree Kay; Yoshida-Cervantes, Maya; Antunez, Erin; Delgado, Viva; Moss, Nicholas J; Buchacz, Kate

    2017-05-04

    In the United States, an estimated two-thirds of persons with human immunodeficiency virus (HIV) infection do not achieve viral suppression, including those who have never engaged in HIV care and others who do not stay engaged in care. Persons with an unsuppressed HIV viral load might experience poor clinical outcomes and transmit HIV. The goal of the Re-engaging Surveillance-identified Viremic Persons (RSVP) project in San Francisco, CA, was to use routine HIV surveillance databases to identify, contact, interview, and reengage in HIV care persons who appeared to be out of care because their last HIV viral load was unsuppressed. We aimed to interview participants about their HIV care and barriers to reengagement. Using routinely collected HIV surveillance data, we identified persons with HIV who were out of care (no HIV viral load and CD4 laboratory reports during the previous 9-15 months) and with their last plasma HIV RNA viral load >200 copies/mL. We interviewed the located persons, at baseline and 3 months later, about whether and why they disengaged from HIV care and the barriers they faced to care reengagement. We offered them assistance with reengaging in HIV care from the San Francisco Department of Public Health linkage and navigation program (LINCS). Of 282 persons selected, we interviewed 75 (26.6%). Of these, 67 (89%) reported current health insurance coverage, 59 (79%) had ever been prescribed and 45 (60%) were currently taking HIV medications, 59 (79%) had seen an HIV provider in the past year, and 34 (45%) had missed an HIV appointment in the past year. Reasons for not seeing a provider included feeling healthy, using alcohol or drugs, not having enough money or health insurance, and not wanting to take HIV medicines. Services needed to get to an HIV medical care appointment included transportation assistance, stable living situation or housing, sound mental health, and organizational help and reminders about appointments. A total of 52 (69%) accepted a referral to LINCS. Additionally, 64 (85%) of the persons interviewed completed a follow-up interview 3 months later and, of these, 62 (97%) had health insurance coverage and 47 (73%) reported having had an HIV-related care appointment since the baseline interview. Rather than being truly out of care, most participants reported intermittent HIV care, including recent HIV provider visits and health insurance coverage. Participants also frequently reported barriers to care and unmet needs. Health department assistance with HIV care reengagement was generally acceptable. Understanding why people previously in HIV care disengage from care and what might help them reengage is essential for optimizing HIV clinical and public health outcomes. ©Susan Scheer, Miao-Jung Chen, Maree Kay Parisi, Maya Yoshida-Cervantes, Erin Antunez, Viva Delgado, Nicholas J Moss, Kate Buchacz. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 04.05.2017.

  14. Alpha-band rhythm modulation under the condition of subliminal face presentation: MEG study.

    PubMed

    Sakuraba, Satoshi; Kobayashi, Hana; Sakai, Shinya; Yokosawa, Koichi

    2013-01-01

    The human brain has two streams to process visual information: a dorsal stream and a ventral stream. Negative potential N170 or its magnetic counterpart M170 is known as the face-specific signal originating from the ventral stream. It is possible to present a visual image unconsciously by using continuous flash suppression (CFS), a visual masking technique based on binocular rivalry. In this work, magnetoencephalograms were recorded during presentation of three invisible images: face images, which are processed by the ventral stream; tool images, which could be processed by the dorsal stream; and a blank image. Alpha-band activities detected by sensors that are sensitive to M170 were compared. The alpha-band rhythm was suppressed more during presentation of face images than during presentation of the blank image (p=.028). The suppression remained for about 1 s after the presentations ended. However, no significant difference was observed between tool images and the other images. These results suggest that the alpha-band rhythm can also be modulated by unconscious visual images.

  15. Brain plasticity in the adult: modulation of function in amblyopia with rTMS.

    PubMed

    Thompson, Benjamin; Mansouri, Behzad; Koski, Lisa; Hess, Robert F

    2008-07-22

    Amblyopia is a cortically based visual disorder caused by disruption of vision during a critical early developmental period. It is often thought to be a largely intractable problem in adult patients because of a lack of neuronal plasticity after this critical period [1]; however, recent advances have suggested that plasticity is still present in the adult amblyopic visual cortex [2-6]. Here, we present data showing that repetitive transcranial magnetic stimulation (rTMS) of the visual cortex can temporarily improve contrast sensitivity in the amblyopic visual cortex. The results indicate continued plasticity of the amblyopic visual system in adulthood and open the way for a potential new therapeutic approach to the treatment of amblyopia.

  16. A web-based solution for 3D medical image visualization

    NASA Astrophysics Data System (ADS)

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    We present a web-based 3D medical image visualization solution that enables interactive processing and visualization of large medical image data over the web platform. To improve the efficiency of our solution, we adopt GPU accelerated techniques to process images on the server side while rapidly transferring images to the HTML5 supported web browser on the client side. Compared to a traditional local visualization solution, our solution doesn't require users to install extra software or download the whole volume dataset from the PACS server. By designing this web-based solution, it is feasible for users to access the 3D medical image visualization service wherever the internet is available.

  17. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus

    PubMed Central

    2017-01-01

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices. PMID:28179553

  18. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    PubMed

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. 
Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices. Copyright © 2017 the authors 0270-6474/17/372697-12$15.00/0.

  19. Astronomy for the Blind and Visually Impaired

    NASA Astrophysics Data System (ADS)

    Kraus, S.

    2016-12-01

    This article presents a number of ways of communicating astronomy topics, ranging from classical astronomy to modern astrophysics, to the blind and visually impaired. A major aim of these projects is to provide access which goes beyond the use of the tactile sense to improve knowledge transfer for blind and visually impaired students. The models presented here are especially suitable for young people of secondary school age.

  20. A STUDY OF THE EFFECTS OF PRESENTING INFORMATIVE SPEECHES WITH AND WITHOUT THE USE OF VISUAL AIDS TO VOLUNTARY ADULT AUDIENCES.

    ERIC Educational Resources Information Center

    BODENHAMER, SCHELL H.

    To determine the comparative amount of learning that occurred and the audience reaction to meeting effectiveness, a 20-minute informative speech, "The Weather," was presented with visual aids to 23 and without visual aids to 23 informal, voluntary, adult audiences. The audiences were randomly divided, and controls were used to assure identical…

  1. Implications of differences of echoic and iconic memory for the design of multimodal displays

    NASA Astrophysics Data System (ADS)

    Glaser, Daniel Shields

    It has been well documented that dual-task performance is more accurate when each task is based on a different sensory modality. It is also well documented that the memory for each sense has an unequal duration, particularly visual (iconic) and auditory (echoic) sensory memory. In this dissertation I address whether differences in sensory memory duration (e.g., iconic vs. echoic) have implications for the design of a multimodal display. Since echoic memory persists for seconds, in contrast to iconic memory, which persists only for milliseconds, one of my hypotheses was that in a visual-auditory dual-task condition, performance would be better if the visual task were completed before the auditory task than vice versa. In Experiment 1 I investigated whether the ability to recall multimodal stimuli is affected by recall order, with each mode being responded to separately. In Experiment 2 I investigated the effects of stimulus order and recall order on the ability to recall information from a multimodal presentation. In Experiment 3 I investigated the effect of presentation order using a more realistic task. In Experiment 4 I investigated whether manipulating the presentation order of stimuli of different modalities improves humans' ability to combine the information from the two modalities in order to make decisions based on pre-learned rules. As hypothesized, accuracy was greater when visual stimuli were responded to first and auditory stimuli second. Also as hypothesized, performance was improved by not presenting both sequences at the same time, limiting the perceptual load. Contrary to my expectations, overall performance was better when a visual sequence was presented before the audio sequence. Though presenting a visual sequence prior to an auditory sequence lengthens the visual retention interval, it also provides time for visual information to be recoded to a more robust form without disruption.
Experiment 4 demonstrated that decision making requiring the integration of visual and auditory information is enhanced by reducing workload and promoting a strategic use of echoic memory. A framework for predicting Experiment 1-4 results is proposed and evaluated.

  2. Interference, aging, and visuospatial working memory: the role of similarity.

    PubMed

    Rowe, Gillian; Hasher, Lynn; Turcotte, Josée

    2010-11-01

    Older adults' performance on working memory (WM) span tasks is known to be negatively affected by the buildup of proactive interference (PI) across trials. PI has been reduced in verbal tasks and performance increased by presenting distinctive items across trials. In addition, reversing the order of trial presentation (i.e., starting with the longest sets first) has been shown to reduce PI in both verbal and visuospatial WM span tasks. We considered whether making each trial visually distinct would improve older adults' visuospatial WM performance, and whether combining the 2 PI-reducing manipulations, distinct trials and reversed order of presentation, would prove additive, thus providing even greater benefit. Forty-eight healthy older adults (age range = 60-77 years) completed 1 of 3 versions of a computerized Corsi block test. For 2 versions of the task, trials were either all visually similar or all visually distinct, and were presented in the standard ascending format (shortest set size first). In the third version, visually distinct trials were presented in a reverse order of presentation (longest set size first). Span scores were reliably higher in the ascending version for visually distinct compared with visually similar trials, F(1, 30) = 4.96, p = .03, η² = .14. However, combining distinct trials and a descending format proved no more beneficial than administering the descending format alone. Our findings suggest that a more accurate measurement of the visuospatial WM span scores of older adults (and possibly neuropsychological patients) might be obtained by reducing within-test interference.

  3. Designing Instructional Visuals; Theory, Composition, Implementation.

    ERIC Educational Resources Information Center

    Linker, Jerry Mac

    The use of visual media in the classroom contributes to the improvement of teaching and learning. The purpose of this handbook is to present a practical discussion of the principles involved in designing visuals that teach. The author first describes the essentials of communication applied to instructional visuals. He then analyzes the physical…

  4. Auditory and Visual Capture during Focused Visual Attention

    ERIC Educational Resources Information Center

    Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan

    2009-01-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets…

  5. Optimizing Visually-Assisted Listening Comprehension

    ERIC Educational Resources Information Center

    Kashani, Ahmad Sabouri; Sajjadi, Samad; Sohrabi, Mohammad Reza; Younespour, Shima

    2011-01-01

    The fact that visual aids such as pictures or graphs can lead to greater comprehension by language learners has been well established. Nonetheless, the order of presenting visuals to listeners is left unattended. This study examined listening comprehension from a strategy of introducing visual information, either prior to or during an audio…

  6. Visual Scripting.

    ERIC Educational Resources Information Center

    Halas, John

    Visual scripting is the coordination of words with pictures in sequence. This book presents the methods and viewpoints on visual scripting of fourteen film makers, from nine countries, who are involved in animated cinema; it contains concise examples of how a storyboard and preproduction script can be prepared in visual terms; and it includes a…

  7. Newsmagazine Visuals and the 1988 Presidential Election.

    ERIC Educational Resources Information Center

    Moriarty, Sandra; Popovich, Mark

    A study examined newsmagazines' visual coverage of the 1988 election to determine if patterns of difference in the visual presentation of candidates existed. A content analysis examined all the visuals (photographs and illustrations) of the presidential and vice-presidential candidates printed in three national weekly newsmagazines--"U.S.…

  8. The Ecological Approach to Text Visualization.

    ERIC Educational Resources Information Center

    Wise, James A.

    1999-01-01

    Presents both theoretical and technical bases on which to build a "science of text visualization." The Spatial Paradigm for Information Retrieval and Exploration (SPIRE) text-visualization system, which images information from free-text documents as natural terrains, serves as an example of the "ecological approach" in its visual metaphor, its…

  9. Teaching the Visual Learner: The Use of Visual Summaries in Marketing Education

    ERIC Educational Resources Information Center

    Clarke, Irvine, III.; Flaherty, Theresa B.; Yankey, Michael

    2006-01-01

    Approximately 40% of college students are visual learners, preferring to be taught through pictures, diagrams, flow charts, timelines, films, and demonstrations. Yet marketing instruction remains heavily reliant on presenting content primarily through verbal cues such as written or spoken words. Without visual instruction, some students may be…

  10. A Bilateral Advantage for Storage in Visual Working Memory

    ERIC Educational Resources Information Center

    Umemoto, Akina; Drew, Trafton; Ester, Edward F.; Awh, Edward

    2010-01-01

    Various studies have demonstrated enhanced visual processing when information is presented across both visual hemifields rather than in a single hemifield (the "bilateral advantage"). For example, Alvarez and Cavanagh (2005) reported that observers were able to track twice as many moving visual stimuli when the tracked items were presented…

  11. Using Visual Organizers to Enhance EFL Instruction

    ERIC Educational Resources Information Center

    Kang, Shumin

    2004-01-01

    Visual organizers are visual frameworks such as figures, diagrams, charts, etc. used to present structural knowledge spatially in a given area with the intention of enhancing comprehension and learning. Visual organizers are effective in terms of helping to elicit, explain, and communicate information because they can clarify complex concepts into…

  12. How Temporal and Spatial Aspects of Presenting Visualizations Affect Learning about Locomotion Patterns

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter

    2012-01-01

    Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…

  13. Suggested Activities to Use With Children Who Present Symptoms of Visual Perception Problems, Elementary Level.

    ERIC Educational Resources Information Center

    Washington County Public Schools, Washington, PA.

    Symptoms displayed by primary age children with learning disabilities are listed; perceptual handicaps are explained. Activities are suggested for developing visual perception and perception involving motor activities. Also suggested are activities to develop body concept, visual discrimination and attentiveness, visual memory, and figure ground…

  14. Physical Models that Provide Guidance in Visualization Deconstruction in an Inorganic Context

    ERIC Educational Resources Information Center

    Schiltz, Holly K.; Oliver-Hoyo, Maria T.

    2012-01-01

    Three physical model systems have been developed to help students deconstruct the visualization needed when learning symmetry and group theory. The systems provide students with physical and visual frames of reference to facilitate the complex visualization involved in symmetry concepts. The permanent reflection plane demonstration presents an…

  15. Visual Stress in Adults with and without Dyslexia

    ERIC Educational Resources Information Center

    Singleton, Chris; Trotter, Susannah

    2005-01-01

    The relationship between dyslexia and visual stress (sometimes known as Meares-Irlen syndrome) is uncertain. While some theorists have hypothesised an aetiological link between the two conditions, mediated by the magnocellular visual system, at the present time the predominant theories of dyslexia and visual stress see them as distinct, unrelated…

  16. Evidence for perceptual deficits in associative visual (prosop)agnosia: a single-case study.

    PubMed

    Delvenne, Jean François; Seron, Xavier; Coyette, Françoise; Rossion, Bruno

    2004-01-01

    Associative visual agnosia is classically defined as normal visual perception stripped of its meaning [Archiv für Psychiatrie und Nervenkrankheiten 21 (1890) 22/English translation: Cognitive Neuropsychol. 5 (1988) 155]: these patients cannot access their stored visual memories to categorize the objects they nonetheless perceive correctly. However, according to an influential theory of visual agnosia [Farah, Visual Agnosia: Disorders of Object Recognition and What They Tell Us about Normal Vision, MIT Press, Cambridge, MA, 1990], visual associative agnosics necessarily present perceptual deficits that are the cause of their impairment at object recognition. Here we report a detailed investigation of a patient with bilateral occipito-temporal lesions who is strongly impaired at object and face recognition. NS presents normal drawing copy and normal performance on object and face matching tasks as used in classical neuropsychological tests. However, when tested with several computer tasks using carefully controlled visual stimuli and taking both his accuracy rate and response times into account, NS was found to perform abnormally at high-level visual processing of objects and faces. Albeit presenting a different pattern of deficits than previously described in integrative agnosic patients such as HJA and LH, his deficits were characterized by an inability to integrate individual parts into a whole percept, as suggested by his failure at processing structurally impossible three-dimensional (3D) objects, an absence of face inversion effects, and an advantage at detecting and matching single parts. Taken together, these observations question the idea of separate visual representations for object/face perception and object/face knowledge derived from investigations of visual associative (prosop)agnosia, and they raise some methodological issues in the analysis of single-case studies of (prosop)agnosic patients.

  17. Numerosity underestimation with item similarity in dynamic visual display.

    PubMed

    Au, Ricky K C; Watanabe, Katsumi

    2013-01-01

    The estimation of numerosity of a large number of objects in a static visual display is possible even at short durations. Such coarse approximations of numerosity are distinct from subitizing, in which the number of objects can be reported with high precision when a small number of objects are presented simultaneously. The present study examined numerosity estimation of visual objects in dynamic displays and the effect of object similarity on numerosity estimation. In the basic paradigm (Experiment 1), two streams of dots were presented and observers were asked to indicate which of the two streams contained more dots. Streams consisting of dots that were identical in color were judged as containing fewer dots than streams where the dots were different colors. This underestimation effect for identical visual items disappeared when the presentation rate was slower (Experiment 1) or the visual display was static (Experiment 2). In Experiments 3 and 4, in addition to the numerosity judgment task, observers performed an attention-demanding task at fixation. Task difficulty influenced observers' precision in the numerosity judgment task, but the underestimation effect remained evident irrespective of task difficulty. These results suggest that identical or similar visual objects presented in succession might induce substitution among themselves, leading to an illusion that there are fewer items overall, and that occupying attentional resources does not eliminate the underestimation effect.

  18. Bimodal emotion congruency is critical to preverbal infants' abstract rule learning.

    PubMed

    Tsui, Angeline Sin Mei; Ma, Yuen Ki; Ho, Anna; Chow, Hiu Mei; Tseng, Chia-huei

    2016-05-01

    Extracting general rules from specific examples is important, as we often face the same challenge presented in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g., ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle-triangle-circle) or auditory presentation of the syllables (la-ba-la) alone. However, the mechanisms and constraints of this bimodal learning facilitation are still unknown. In this study, we used audio-visual relation congruency between bimodal stimulation to disentangle possible facilitation sources. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in the audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning depends not only on better statistical probability and redundant sensory information, but also on the relational congruency of the audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ. © 2015 John Wiley & Sons Ltd.

  19. Early, but not late visual distractors affect movement synchronization to a temporal-spatial visual cue.

    PubMed

    Booth, Ashley J; Elliott, Mark T

    2015-01-01

    The ease of synchronizing movements to a rhythmic cue is dependent on the modality of the cue presentation: timing accuracy is much higher when synchronizing with discrete auditory rhythms than an equivalent visual stimulus presented through flashes. However, timing accuracy is improved if the visual cue presents spatial as well as temporal information (e.g., a dot following an oscillatory trajectory). Similarly, when synchronizing with an auditory target metronome in the presence of a second visual distracting metronome, the distraction is stronger when the visual cue contains spatial-temporal information rather than temporal only. The present study investigates individuals' ability to synchronize movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli impacted on maintaining synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centered on a large projection screen. The target dot was surrounded by 2, 8, or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100, or 200 ms. We found participants' timing performance was only affected in the phase-lead conditions and when there were large numbers of distractors present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronize movements. Subsequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.

  20. Teaching Effectively with Visual Effect in an Image-Processing Class.

    ERIC Educational Resources Information Center

    Ng, G. S.

    1997-01-01

    Describes a course teaching the use of computers in emulating human visual capability and image processing and proposes an interactive presentation using multimedia technology to capture and sustain student attention. Describes the three phase presentation: introduction of image processing equipment, presentation of lecture material, and…

  1. From Visual Exploration to Storytelling and Back Again.

    PubMed

    Gratzl, S; Lex, A; Gehlenborg, N; Cosgrove, N; Streit, M

    2016-06-01

    The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author "Vistories", visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract).
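The capture-and-author idea behind CLUE can be illustrated with a toy provenance log: every exploration action records a restorable state, and a "Vistory" is an annotated, ordered selection of those states. All class and field names below are hypothetical; this is a sketch of the concept, not CLUE's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ProvNode:
    state: dict          # full visualization state, enough to restore it
    label: str
    annotation: str = ""

class ExplorationLog:
    def __init__(self):
        self.nodes: list[ProvNode] = []

    def capture(self, state: dict, label: str) -> int:
        """Record a state during exploration; return its index."""
        self.nodes.append(ProvNode(dict(state), label))
        return len(self.nodes) - 1

    def author_vistory(self, picks: list[int],
                       notes: dict[int, str]) -> list[ProvNode]:
        """Extract key steps and attach annotations to tell the story."""
        return [ProvNode(self.nodes[i].state, self.nodes[i].label,
                         notes.get(i, "")) for i in picks]

log = ExplorationLog()
log.capture({"chart": "scatter", "year": 1990}, "initial view")
log.capture({"chart": "scatter", "year": 2010}, "jump to 2010")
log.capture({"chart": "line", "year": 2010}, "switch to trend")
vistory = log.author_vistory([0, 2], {2: "the trend reveals the insight"})
```

Because every Vistory step keeps the full state, a viewer could restore any step and branch off into new analysis, which is the "back again" half of the paper's title.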

  2. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    PubMed Central

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

    In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  3. From Visual Exploration to Storytelling and Back Again

    PubMed Central

    Gratzl, S.; Lex, A.; Gehlenborg, N.; Cosgrove, N.; Streit, M.

    2016-01-01

    The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author “Vistories”, visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract) PMID:27942091

  4. Lightness computation by the human visual system

    NASA Astrophysics Data System (ADS)

    Rudd, Michael E.

    2017-05-01

    A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance-matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and spatial integration by attentional windowing. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2), while spatial integration windowing occurs in cortical area V4 or beyond.
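The classical Land–McCann Retinex computation that this model builds on can be sketched in one dimension: lightness is reconstructed by integrating log-luminance ratios along a path, discarding small ratios as gradual illumination rather than reflectance edges. This illustrates only the classical starting point, not Rudd's extended equations (gain control, edge classification, attentional windowing); the threshold value is an arbitrary choice for the example.

```python
import math

def retinex_1d(luminance: list[float], threshold: float = 0.05) -> list[float]:
    """Relative log-lightness along a 1-D path of luminance samples.
    Log-ratio steps smaller than `threshold` are treated as slow
    illumination gradients and zeroed out before integration."""
    lightness = [0.0]
    for prev, cur in zip(luminance, luminance[1:]):
        step = math.log(cur / prev)
        if abs(step) < threshold:
            step = 0.0
        lightness.append(lightness[-1] + step)
    return lightness

# A slow illumination ramp plus one real (x2) reflectance edge: the small
# steps are discounted, so only the true edge changes computed lightness.
ramp = [1.00, 1.01, 1.02, 2.04, 2.06]
print([round(v, 2) for v in retinex_1d(ramp)])  # [0.0, 0.0, 0.0, 0.69, 0.69]
```

The discounting of shallow gradients is what lets the computation recover reflectance-like lightness under non-uniform illumination, which is the behavior the appearance-matching data constrain.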

  5. Auditory and visual capture during focused visual attention.

    PubMed

    Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan

    2009-10-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, the authors modulated the degree of attentional focus by presenting endogenous cues with varying reliability and by displaying placeholders indicating the precise areas where the target stimuli could occur. By using not only valid and invalid exogenous cues but also neutral cues that provide temporal but no spatial information, they found performance benefits as well as costs when attention is not strongly focused. The benefits disappear when the attentional focus is increased. These results indicate that there is bottom-up capture of visual attention by irrelevant auditory and visual stimuli that cannot be suppressed by top-down attentional control. PsycINFO Database Record (c) 2009 APA, all rights reserved.

  6. Decoding visual object categories in early somatosensory cortex.

    PubMed

    Smith, Fraser W; Goodale, Melvyn A

    2015-04-01

    Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. © The Author 2013. Published by Oxford University Press.

  7. Decoding Visual Object Categories in Early Somatosensory Cortex

    PubMed Central

    Smith, Fraser W.; Goodale, Melvyn A.

    2015-01-01

    Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. PMID:24122136

  8. Sequence Diversity Diagram for comparative analysis of multiple sequence alignments.

    PubMed

    Sakai, Ryo; Aerts, Jan

    2014-01-01

    The sequence logo is a graphical representation of a set of aligned sequences, commonly used to depict conservation of amino acid or nucleotide sequences. Although it effectively communicates the amount of information present at every position, this visual representation falls short when the domain task is to compare between two or more sets of aligned sequences. We present a new visual presentation called a Sequence Diversity Diagram and validate our design choices with a case study. Our software was developed using the open-source program called Processing. It loads multiple sequence alignment FASTA files and a configuration file, which can be modified as needed to change the visualization. The redesigned figure improves on the visual comparison of two or more sets, and it additionally encodes information on sequential position conservation. In our case study of the adenylate kinase lid domain, the Sequence Diversity Diagram reveals unexpected patterns and new insights, for example the identification of subgroups within the protein subfamily. Our future work will integrate this visual encoding into interactive visualization tools to support higher level data exploration tasks.
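
    The Sequence Diversity Diagram itself is implemented in Processing, but the quantity it visualizes, per-column diversity of a set of aligned sequences, can be sketched in a few lines. The following Python sketch (ours, not the authors' code) parses aligned FASTA text and scores each alignment column by Shannon entropy, where 0 bits means a fully conserved position.

```python
from collections import Counter
from math import log2

def parse_fasta(text):
    """Parse aligned FASTA text into a list of equal-length sequences."""
    seqs, current = [], []
    for line in text.strip().splitlines():
        if line.startswith(">"):
            if current:
                seqs.append("".join(current))
            current = []
        else:
            current.append(line.strip())
    if current:
        seqs.append("".join(current))
    return seqs

def column_entropy(seqs):
    """Shannon entropy (bits) of each alignment column; 0 = fully conserved."""
    entropies = []
    for col in zip(*seqs):
        counts = Counter(col)
        total = len(col)
        h = -sum((n / total) * log2(n / total) for n in counts.values())
        entropies.append(h)
    return entropies

fasta = """>seq1
MKVL
>seq2
MKIL
>seq3
MRIL
"""
print(column_entropy(parse_fasta(fasta)))
```

    Computing these profiles separately for two or more sets of aligned sequences gives the per-set, per-position conservation information that the diagram juxtaposes for comparison.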

  9. Cerebral Visual Impairment in Children: A Longitudinal Case Study of Functional Outcomes beyond the Visual Acuities

    ERIC Educational Resources Information Center

    Lam, Fook Chang; Lovett, Fiona; Dutton, Gordon N.

    2010-01-01

    Damage to the areas of the brain that are responsible for higher visual processing can lead to severe cerebral visual impairment (CVI). The prognosis for higher cognitive visual functions in children with CVI is not well described. We therefore present our six-year follow-up of a boy with CVI and highlight intervention approaches that have proved…

  10. Holography: Use in Training and Testing Drivers on the Road in Accident Avoidance.

    ERIC Educational Resources Information Center

    Frey, Allan H.; Frey, Donnalyn

    1979-01-01

    Defines holography, identifies visual factors in driving and the techniques used in on-road visual presentations, and presents the design and testing of a holographic system for driver training. (RAO)

  11. Forever young: Visual representations of gender and age in online dating sites for older adults.

    PubMed

    Gewirtz-Meydan, Ateret; Ayalon, Liat

    2017-06-13

    Online dating has become increasingly popular among older adults following broader social media adoption patterns. The current study examined the visual representations of people on 39 dating sites intended for the older population, with a particular focus on the visualization of the intersection between age and gender. All 39 dating sites for older adults were located through the Google search engine. Visual thematic analysis was performed with reference to general, non-age-related signs (e.g., facial expression, skin color), signs of aging (e.g., perceived age, wrinkles), relational features (e.g., proximity between individuals), and additional features such as number of people presented. The visual analysis in the present study revealed a clear intersection between ageism and sexism in the presentation of older adults. The majority of men and women were smiling and had a fair complexion, with light eye color and perceived age of younger than 60. Older women were presented as younger and wore more cosmetics as compared with older men. The present study stresses the social regulation of sexuality, as only heterosexual couples were presented. The narrow representation of older adults and the anti-aging messages portrayed in the pictures convey that love, intimacy, and sexual activity are for older adults who are "forever young."

  12. Stimulus modality and working memory performance in Greek children with reading disabilities: additional evidence for the pictorial superiority hypothesis.

    PubMed

    Constantinidou, Fofi; Evripidou, Christiana

    2012-01-01

    This study investigated the effects of stimulus presentation modality on working memory performance in children with reading disabilities (RD) and in typically developing children (TDC), all native speakers of Greek. It was hypothesized that the visual presentation of common objects would result in improved learning and recall performance as compared to the auditory presentation of stimuli. Twenty children, ages 10-12, diagnosed with RD were matched to 20 TDC age peers. The experimental tasks implemented a multitrial verbal learning paradigm incorporating three modalities: auditory, visual, and auditory plus visual. Significant group differences were noted on language, verbal and nonverbal memory, and measures of executive abilities. A mixed-model MANOVA indicated that children with RD had a slower learning curve and recalled fewer words than TDC across experimental modalities. Both groups of participants benefited from the visual presentation of objects; however, children with RD showed the greatest gains during this condition. In conclusion, working memory for common verbal items is impaired in children with RD; however, performance can be facilitated, and learning efficiency maximized, when information is presented visually. The results provide further evidence for the pictorial superiority hypothesis and the theory that pictorial presentation of verbal stimuli is adequate for dual coding.

  13. Tracking Learners' Visual Attention during a Multimedia Presentation in a Real Classroom

    ERIC Educational Resources Information Center

    Yang, Fang-Ying; Chang, Chun-Yen; Chien, Wan-Ru; Chien, Yu-Ta; Tseng, Yuen-Hsien

    2013-01-01

    The purpose of the study was to investigate university learners' visual attention during a PowerPoint (PPT) presentation on the topic of "Dinosaurs" in a real classroom. The presentation, which lasted for about 12-15 min, consisted of 12 slides with various text and graphic formats. An instructor gave the presentation to 21 students…

  14. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    PubMed

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

    A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence and examined the effects on the perceptual echo, finding that echo amplitude increased linearly with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. The perceptual echo is a long-lasting reverberation between a rapidly changing visual input and evoked neural activity, apparent in cross-correlations between occipital EEG and stimulus sequences, peaking in the alpha (∼10 Hz) range. We indeed found that the perceptual echo is enhanced by repeatedly presenting the same visual sequence, indicating that the human visual system can rapidly and automatically learn regularities embedded within fast-changing dynamic sequences. These results point to a previously undiscovered regularity learning mechanism, operating at a rate defined by the alpha frequency. Copyright © 2017 the authors.
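
    The cross-correlation at the heart of the perceptual echo measurement can be illustrated with simulated data: if the recorded signal is approximately the luminance sequence convolved with an echo kernel, then cross-correlating stimulus and signal recovers that kernel. The sketch below is ours; the sampling rate, kernel shape, and noise level are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 160                       # sampling rate in Hz (illustrative)
n = fs * 6                     # 6 s of random luminance stimulation
lum = rng.standard_normal(n)   # nonperiodic luminance sequence

# Simulated occipital signal: luminance convolved with a decaying
# ~10 Hz "echo" kernel, plus background noise.
t = np.arange(fs) / fs         # 1 s kernel support
kernel = np.exp(-t / 0.4) * np.cos(2 * np.pi * 10 * t)
eeg = np.convolve(lum, kernel)[:n] + 0.5 * rng.standard_normal(n)

# Cross-correlate stimulus and signal at lags 0..1 s; for white-noise
# input this estimates the echo kernel itself.
lags = np.arange(fs)
xcorr = np.array([np.dot(lum[: n - k], eeg[k:]) / (n - k) for k in lags])

similarity = float(np.corrcoef(xcorr, kernel)[0, 1])
print(round(similarity, 2))    # high: the reverberating kernel is recovered
```

    In the actual experiments the "kernel" is not imposed but emerges from the visual system's response, and the reported finding is that its amplitude grows with repeated presentations of the same sequence.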

  15. The interaction between the cognitive style of field dependence and visual presentations in color, monochrome, and line drawings

    NASA Astrophysics Data System (ADS)

    Myers, Robert Gardner

    1997-12-01

    The purpose of this study was to determine whether there is a correlation between the cognitive style of field dependence and the type of visual presentation format used in a computer-based tutorial (color, black and white, or line drawings) when subjects are asked to identify human tissue samples. Two hundred four college students enrolled in human anatomy and physiology classes at Westmoreland County Community College participated. They were first administered the Group Embedded Figures Test (GEFT) and then were divided into three groups: field-independent (score 15-18), field-neutral (score 11-14), and field-dependent (score 0-10). Subjects were randomly assigned to one of the three treatment groups. Instruction was delivered by means of a computer-aided tutorial consisting of text and visuals of human tissue samples. The pretest and posttest consisted of 15 tissue samples, five from each treatment, that were imported into the HyperCard™ stack and played using QuickTime™ movie extensions. A two-way analysis of covariance (ANCOVA) using pretest and posttest scores was used to investigate whether there is a relationship between field dependence and each of the three visual presentation formats. No significant interaction was found between individual subjects' relative degree of field dependence and any of the visual presentation formats used in the computer-aided tutorial module, F(4,194) = 1.78, p = .1335. There was a significant difference between the students' levels of field dependence in terms of their ability to identify human tissue samples, F(2,194) = 5.83, p = .0035. Field-independent subjects scored significantly higher (M = 10.59) on the posttest than subjects who were field-dependent (M = 9.04). There was also a significant difference among the visual presentation formats, F(2,194) = 3.78, p = .0245. Subjects assigned to the group that received the color visual presentation format scored significantly higher (M = 10.38) on the posttest measure than did those assigned to the group that received the line drawing format (M = 8.99).
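
    The three-way grouping by GEFT score described above can be written out directly; a minimal sketch (the function name is ours, the cutoffs are those reported in the abstract):

```python
def classify_geft(score):
    """Assign a field-dependence group from a GEFT score (0-18),
    using the cutoffs reported in the study."""
    if not 0 <= score <= 18:
        raise ValueError("GEFT scores range from 0 to 18")
    if score >= 15:
        return "field-independent"
    if score >= 11:
        return "field-neutral"
    return "field-dependent"

for s in (17, 12, 8):
    print(s, classify_geft(s))
```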

  16. Hemispheric specialization in quantification processes.

    PubMed

    Pasini, M; Tessari, A

    2001-01-01

    Three experiments were carried out to study hemispheric specialization for subitizing (the rapid enumeration of small patterns) and counting (the serial quantification process based on some formal principles). The experiments consisted of numerosity identification of dot patterns presented in one visual field (using a tachistoscopic technique, with eye movements monitored through glasses) and comparison between centrally presented dot patterns and lateralized, tachistoscopically presented digits. Our experiments show a left visual field advantage in the identification and comparison tasks in the subitizing range, whereas a right visual field advantage was found in the comparison task for the counting range.

  17. The effect of visual representation style in problem-solving: a perspective from cognitive processes.

    PubMed

    Nyamsuren, Enkhbold; Taatgen, Niels A

    2013-01-01

    Using results from a controlled experiment and simulations based on cognitive models, we show that visual presentation style can have a significant impact on performance in a complex problem-solving task. We compared subject performances in two isomorphic, but visually different, tasks based on a card game of SET. Although subjects used the same strategy in both tasks, the difference in presentation style resulted in radically different reaction times and significant deviations in scanpath patterns in the two tasks. Results from our study indicate that low-level subconscious visual processes, such as differential acuity in peripheral vision and low-level iconic memory, can have indirect, but significant effects on decision making during a problem-solving task. We have developed two ACT-R models that employ the same basic strategy but deal with different presentation styles. Our ACT-R models confirm that changes in low-level visual processes triggered by changes in presentation style can propagate to higher-level cognitive processes. Such a domino effect can significantly affect reaction times and eye movements, without affecting the overall strategy of problem solving.

  18. The Effect of Visual Representation Style in Problem-Solving: A Perspective from Cognitive Processes

    PubMed Central

    Nyamsuren, Enkhbold; Taatgen, Niels A.

    2013-01-01

    Using results from a controlled experiment and simulations based on cognitive models, we show that visual presentation style can have a significant impact on performance in a complex problem-solving task. We compared subject performances in two isomorphic, but visually different, tasks based on a card game of SET. Although subjects used the same strategy in both tasks, the difference in presentation style resulted in radically different reaction times and significant deviations in scanpath patterns in the two tasks. Results from our study indicate that low-level subconscious visual processes, such as differential acuity in peripheral vision and low-level iconic memory, can have indirect, but significant effects on decision making during a problem-solving task. We have developed two ACT-R models that employ the same basic strategy but deal with different presentation styles. Our ACT-R models confirm that changes in low-level visual processes triggered by changes in presentation style can propagate to higher-level cognitive processes. Such a domino effect can significantly affect reaction times and eye movements, without affecting the overall strategy of problem solving. PMID:24260415

  19. Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension.

    PubMed

    Drijvers, Linda; Özyürek, Asli

    2017-01-01

    This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture). Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions. When perceiving degraded speech in a visual context, listeners benefit more from having both visual articulators present compared with 1. This benefit was larger at 6-band than 2-band noise-vocoding, where listeners can benefit from both phonological cues from visible speech and semantic cues from iconic gestures to disambiguate speech.

  20. Anisotropies in the perceived spatial displacement of motion-defined contours: opposite biases in the upper-left and lower-right visual quadrants.

    PubMed

    Fan, Zhao; Harris, John

    2010-10-12

    In a recent study (Fan, Z., & Harris, J. (2008). Perceived spatial displacement of motion-defined contours in peripheral vision. Vision Research, 48(28), 2793-2804), we demonstrated that virtual contours defined by two regions of dots moving in opposite directions were displaced perceptually in the direction of motion of the dots in the more eccentric region when the contours were viewed in the right visual field. Here, we show that the magnitude and/or direction of these displacements varies in different quadrants of the visual field. When contours were presented in the lower visual field, the direction of perceived contour displacement was consistent with that when both contours were presented in the right visual field. However, this illusory motion-induced spatial displacement disappeared when both contours were presented in the upper visual field. Also, perceived contour displacement in the direction of the more eccentric dots was larger in the right than in the left visual field, perhaps because of a hemispheric asymmetry in attentional allocation. Quadrant-based analyses suggest that the pattern of results arises from opposite directions of perceived contour displacement in the upper-left and lower-right visual quadrants, which depend on the relative strengths of two effects: a greater sensitivity to centripetal motion, and an asymmetry in the allocation of spatial attention. Copyright © 2010 Elsevier Ltd. All rights reserved.

Top