Reimer, Christina B; Strobach, Tilo; Schubert, Torsten
2017-12-01
Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features, resulting in a serial search process across the items in the search display (i.e., set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP) that reflects lateralized visual attention processes. If the response selection processes in Task 1 influence the visual attention processes in Task 2, N2pc latency and amplitude would be delayed and attenuated at short SOA compared to long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was deployed concurrently with response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.
ERIC Educational Resources Information Center
Soemer, Alexander; Schwan, Stephan
2016-01-01
In a series of experiments, we tested a recently proposed hypothesis stating that the degree of alignment between the form of a mental representation resulting from learning with a particular visualization format and the specific requirements of a learning task determines learning performance (task-appropriateness). Groups of participants were…
A task-dependent causal role for low-level visual processes in spoken word comprehension.
Ostarek, Markus; Huettig, Falk
2017-08-01
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation.
ERIC Educational Resources Information Center
Alvarez, George A.; Horowitz, Todd S.; Arsenio, Helga C.; DiMase, Jennifer S.; Wolfe, Jeremy M.
2005-01-01
Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers…
Petruno, Sarah K; Clark, Robert E; Reinagel, Pamela
2013-01-01
The pigmented Long-Evans rat has proven to be an excellent subject for studying visually guided behavior including quantitative visual psychophysics. This observation, together with its experimental accessibility and its close homology to the mouse, has made it an attractive model system in which to dissect the thalamic and cortical circuits underlying visual perception. Given that visually guided behavior in the absence of primary visual cortex has been described in the literature, however, it is an empirical question whether specific visual behaviors will depend on primary visual cortex in the rat. Here we tested the effects of cortical lesions on performance of two-alternative forced-choice visual discriminations by Long-Evans rats. We present data from one highly informative subject that learned several visual tasks and then received a bilateral lesion ablating >90% of primary visual cortex. After the lesion, this subject had a profound and persistent deficit in complex image discrimination, orientation discrimination, and full-field optic flow motion discrimination, compared with both pre-lesion performance and sham-lesion controls. Performance was intact, however, on another visual two-alternative forced-choice task that required approaching a salient visual target. A second highly informative subject learned several visual tasks prior to receiving a lesion ablating >90% of medial extrastriate cortex. This subject showed no impairment on any of the four task categories. Taken together, our data provide evidence that these image, orientation, and motion discrimination tasks require primary visual cortex in the Long-Evans rat, whereas approaching a salient visual target does not.
Beurskens, Rainer; Bock, Otmar
2013-12-01
Previous literature suggests that age-related deficits of dual-task walking are particularly pronounced with second tasks that require continuous visual processing. Here we evaluate whether the difficulty of the walking task matters as well. To this end, participants were asked to walk along a straight pathway of 20 m length in four different walking conditions: (a) wide path and preferred pace; (b) narrow path and preferred pace; (c) wide path and fast pace; (d) obstacled wide path and preferred pace. Each condition was performed concurrently with a task requiring visual processing or fine motor control, and all tasks were also performed alone, which allowed us to calculate the dual-task costs (DTC). Results showed that the age-related increase of DTC is substantially larger with the visually demanding than with the motor-demanding task, more so when walking on a narrow or obstacled path. We attribute these observations to the fact that visual scanning of the environment becomes more crucial when walking in difficult terrain: the higher visual demand of those conditions accentuates age-related deficits in coordinating them with a visual non-walking task.
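The dual-task costs (DTC) referred to above are conventionally computed as the proportional performance decline under dual-task conditions relative to the single-task baseline. A minimal sketch of that convention (the abstract does not give the authors' exact formula, so this is an assumption):

```python
def dual_task_cost(single, dual):
    """Proportional dual-task cost in percent, for measures where higher
    is better (e.g., walking speed or accuracy). Positive values indicate
    a performance decline under dual-task conditions relative to the
    single-task baseline. This is a common convention, not necessarily
    the exact formula used in the study summarized above."""
    return (single - dual) / single * 100.0

# Example: walking speed drops from 1.2 m/s alone to 0.9 m/s while dual-tasking
cost = dual_task_cost(1.2, 0.9)  # 25.0 (% cost)
```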
The role of early visual cortex in visual short-term memory and visual attention.
Offen, Shani; Schluppeck, Denis; Heeger, David J
2009-06-01
We measured cortical activity with functional magnetic resonance imaging to probe the involvement of early visual cortex in visual short-term memory and visual attention. In four experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. Early visual cortex exhibited sustained responses throughout the delay when subjects performed attention-demanding tasks, but delay-period activity was not distinguishable from zero when subjects performed a task that required short-term memory. This dissociation reveals different computational mechanisms underlying the two processes.
Samaha, Jason; Postle, Bradley R
2017-11-29
Adaptive behaviour depends on the ability to introspect accurately about one's own performance. Whether this metacognitive ability is supported by the same mechanisms across different tasks is unclear. We investigated the relationship between metacognition of visual perception and metacognition of visual short-term memory (VSTM). Experiments 1 and 2 required subjects to estimate the perceived or remembered orientation of a grating stimulus and rate their confidence. We observed strong positive correlations between individual differences in metacognitive accuracy between the two tasks. This relationship was not accounted for by individual differences in task performance or average confidence, and was present across two different metrics of metacognition and in both experiments. A model-based analysis of data from a third experiment showed that a cross-domain correlation only emerged when both tasks shared the same task-relevant stimulus feature. That is, metacognition for perception and VSTM were correlated when both tasks required orientation judgements, but not when the perceptual task was switched to require contrast judgements. In contrast with previous results comparing perception and long-term memory, which have largely provided evidence for domain-specific metacognitive processes, the current findings suggest that metacognition of visual perception and VSTM is supported by a domain-general metacognitive architecture, but only when both domains share the same task-relevant stimulus feature.
A localized model of spatial cognition in chemistry
NASA Astrophysics Data System (ADS)
Stieff, Mike
This dissertation challenges the assumption that spatial cognition, particularly visualization, is the key component to problem solving in chemistry. In contrast to this assumption, I posit a localized, or task-specific, model of spatial cognition in chemistry problem solving to locate the exact tasks in a traditional organic chemistry curriculum that require students to use visualization strategies to problem solve. Instead of assuming that visualization is required for most chemistry tasks simply because chemistry concerns invisible three-dimensional entities, I instead use the framework of the localized model to identify how students do and do not make use of visualization strategies on a wide variety of assessment tasks regardless of each task's explicit demand for spatial cognition. I establish the dimensions of the localized model with five studies. First, I designed two novel psychometrics to reveal how students selectively use visualization strategies to interpret and analyze molecular structures. The third study comprised a document analysis of the organic chemistry assessments that empirically determined only 12% of these tasks explicitly require visualization. The fourth study concerned a series of correlation analyses between measures of visuo-spatial ability and chemistry performance to clarify the impact of individual differences. Finally, I performed a series of micro-genetic analyses of student problem solving that confirmed the earlier findings and revealed students prefer to visualize molecules from alternative perspectives without using mental rotation. The results of each study reveal that occurrences of sophisticated spatial cognition are relatively infrequent in chemistry, despite instructors' ostensible emphasis on the visualization of three-dimensional structures. To the contrary, students eschew visualization strategies and instead rely on the use of molecular diagrams to scaffold spatial cognition. 
Visualization does play a key role, however, in problem solving on a select group of chemistry tasks that require students to translate molecular representations or fundamentally alter the morphology of a molecule. Ultimately, this dissertation calls into question the assumption that individual differences in visuo-spatial ability play a critical role in determining who succeeds in chemistry. The results of this work establish a foundation for defining the precise manner in which visualization tools can best support problem solving.
Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B
2012-07-16
Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another are less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another.
Allon, Ayala S.; Balaban, Halely; Luria, Roy
2014-01-01
In three experiments we manipulated the resolution of novel complex objects in visual working memory (WM) by changing task demands. Previous studies that investigated the trade-off between quantity and resolution in visual WM yielded mixed results for simple familiar stimuli. We used the contralateral delay activity as an electrophysiological marker to directly track the deployment of visual WM resources while participants performed a change-detection task. Across three experiments we presented the same novel complex items but changed the task demands. In Experiment 1 we induced a medium resolution task by using change trials in which a random polygon changed to a different type of polygon and replicated previous findings showing that novel complex objects are represented with higher resolution relative to simple familiar objects. In Experiment 2 we induced a low resolution task that required distinguishing between polygons and other types of stimulus categories, but we failed to find a corresponding decrease in the resolution of the represented item. Finally, in Experiment 3 we induced a high resolution task that required discriminating between highly similar polygons with somewhat different contours. This time, we observed an increase in the item’s resolution. Our findings indicate that the resolution for novel complex objects can be increased but not decreased according to task demands, suggesting that minimal resolution is required in order to maintain these items in visual WM. These findings support studies claiming that capacity and resolution in visual WM reflect different mechanisms.
Huang, Liqiang
2015-05-01
Basic visual features (e.g., color, orientation) are assumed to be processed in the same general way across different visual tasks. Here, a significant deviation from this assumption was predicted on the basis of the analysis of stimulus spatial structure, as characterized by the Boolean-map notion. If a task requires memorizing the orientations of a set of bars, then the map consisting of those bars can be readily used to hold the overall structure in memory and will thus be especially useful. If the task requires visual search for a target, then the map, which contains only an overall structure, will be of little use. Supporting these predictions, the present study demonstrated that in comparison to stimulus colors, bar orientations were processed more efficiently in change-detection tasks but less efficiently in visual search tasks (Cohen's d = 4.24). In addition to offering support for the role of the Boolean map in conscious access, the present work also casts doubt on the generality of visual feature processing.
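The effect size reported above is Cohen's d, the standardized mean difference between two conditions. A minimal sketch of the standard pooled-variance form (illustrative only; the abstract does not specify which variant of d was reported):

```python
from statistics import mean, variance

def cohens_d(group1, group2):
    """Cohen's d for two independent samples: the difference in means
    divided by the pooled standard deviation. Uses the unbiased sample
    variance (n - 1 denominator) from the statistics module."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * variance(group1)
                  + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / pooled_var ** 0.5

# Example: two small samples whose means differ by half a pooled SD
d = cohens_d([2, 4, 6], [1, 3, 5])  # 0.5
```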
Aging and feature search: the effect of search area.
Burton-Danner, K; Owsley, C; Jackson, G R
2001-01-01
The preattentive system involves the rapid parallel processing of visual information in the visual scene so that attention can be directed to meaningful objects and locations in the environment. This study used the feature search methodology to examine whether there are aging-related deficits in parallel-processing capabilities when older adults are required to visually search a large area of the visual field. Like young subjects, older subjects displayed flat, near-zero slopes for the Reaction Time x Set Size function when searching over a broad area (30 degrees radius) of the visual field, implying parallel processing of the visual display. These same older subjects exhibited impairment in another task, also dependent on parallel processing, performed over the same broad field area; this task, called the useful field of view test, has more complex task demands. Results imply that aging-related breakdowns of parallel processing over a large visual field area are not likely to emerge when required responses are simple, there is only one task to perform, and there is no limitation on visual inspection time.
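The flat Reaction Time x Set Size functions described above are the standard diagnostic in visual search: a slope near 0 ms/item implies parallel processing, whereas steeper slopes imply serial, attention-demanding search. A minimal sketch of the slope computation via ordinary least squares (the study's exact fitting procedure is an assumption):

```python
def search_slope(set_sizes, mean_rts):
    """Ordinary least-squares slope of mean reaction time (ms) against
    set size, in ms per item. Near-zero slopes are conventionally taken
    as evidence of parallel processing in feature search."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Example (hypothetical data): RTs barely change with set size,
# giving a near-zero slope consistent with parallel search
slope = search_slope([4, 8, 16], [510, 512, 514])
```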
Task relevance induces momentary changes in the functional visual field during reading.
Kaakinen, Johanna K; Hyönä, Jukka
2014-02-01
In the research reported here, we examined whether task demands can induce momentary tunnel vision during reading. More specifically, we examined whether the size of the functional visual field depends on task relevance. Forty participants read an expository text with a specific task in mind while their eye movements were recorded. A display-change paradigm with random-letter strings as preview masks was used to study the size of the functional visual field within sentences that contained task-relevant and task-irrelevant information. The results showed that orthographic parafoveal-on-foveal effects and preview benefits were observed for words within task-irrelevant but not task-relevant sentences. The results indicate that the size of the functional visual field is flexible and depends on the momentary processing demands of a reading task. The higher cognitive processing requirements experienced when reading task-relevant text rather than task-irrelevant text induce momentary tunnel vision, which narrows the functional visual field.
Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.
Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina
2018-05-14
The cerebral cortex modulates early sensory processing via feed-back connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual-speech recognition is used to enhance, or even enable, understanding of what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages has an important role for communication: Together with similar findings in the auditory modality, the findings imply that task-dependent modulation of the sensory thalami is a general mechanism to optimize speech recognition.
Visual perspective taking impairment in children with autistic spectrum disorder.
Hamilton, Antonia F de C; Brindley, Rachel; Frith, Uta
2009-10-01
Evidence from typical development and neuroimaging studies suggests that level 2 visual perspective taking - the knowledge that different people may see the same thing differently at the same time - is a mentalising task. Thus, we would expect children with autism, who fail typical mentalising tasks like false belief, to perform poorly on level 2 visual perspective taking as well. However, prior data on this issue are inconclusive. We re-examined this question, testing a group of 23 young autistic children aged around 8 years with a verbal mental age of around 4 years, and three groups of typical children (n=60) ranging in age from 4 to 8 years, on a level 2 visual perspective task and a closely matched mental rotation task. The results demonstrate that, relative to typical children, autistic children have selective difficulty with the visual perspective taking task compared with the mental rotation task. Furthermore, performance on the level 2 visual perspective taking task correlated with theory of mind performance. These findings resolve discrepancies in previous studies of visual perspective taking in autism, and demonstrate that level 2 visual perspective taking is a mentalising task.
Selective maintenance in visual working memory does not require sustained visual attention.
Hollingworth, Andrew; Maxcey-Richard, Ashleigh M
2013-08-01
In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change-detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention.
Mirel, Barbara; Eichinger, Felix; Keller, Benjamin J; Kretzler, Matthias
2011-03-21
Bioinformatics visualization tools are often not robust enough to support biomedical specialists’ complex exploratory analyses. Tools need to accommodate the workflows that scientists actually perform for specific translational research questions. To understand and model one of these workflows, we conducted a case-based, cognitive task analysis of a biomedical specialist’s exploratory workflow for the question: What functional interactions among gene products of high throughput expression data suggest previously unknown mechanisms of a disease? From our cognitive task analysis four complementary representations of the targeted workflow were developed. They include: usage scenarios, flow diagrams, a cognitive task taxonomy, and a mapping between cognitive tasks and user-centered visualization requirements. The representations capture the flows of cognitive tasks that led a biomedical specialist to inferences critical to hypothesizing. We created representations at levels of detail that could strategically guide visualization development, and we confirmed this by making a trial prototype based on user requirements for a small portion of the workflow. Our results imply that visualizations should make available to scientific users “bundles of features” consonant with the compositional cognitive tasks purposefully enacted at specific points in the workflow. We also highlight certain aspects of visualizations that: (a) need more built-in flexibility; (b) are critical for negotiating meaning; and (c) are necessary for essential metacognitive support.
Validating a visual version of the metronome response task.
Laflamme, Patrick; Seli, Paul; Smilek, Daniel
2018-02-12
The metronome response task (MRT)-a sustained-attention task that requires participants to produce a response in synchrony with an audible metronome-was recently developed to index response variability in the context of studies on mind wandering. In the present studies, we report on the development and validation of a visual version of the MRT (the visual metronome response task; vMRT), which uses the rhythmic presentation of visual, rather than auditory, stimuli. Participants completed the vMRT (Studies 1 and 2) and the original (auditory-based) MRT (Study 2) while also responding to intermittent thought probes asking them to report the depth of their mind wandering. The results showed that (1) individual differences in response variability during the vMRT are highly reliable; (2) prior to thought probes, response variability increases with increasing depth of mind wandering; (3) response variability is highly consistent between the vMRT and the original MRT; and (4) both response variability and depth of mind wandering increase with increasing time on task. Our results indicate that the original MRT findings are consistent across the visual and auditory modalities, and that the response variability measured in both tasks indexes a non-modality-specific tendency toward behavioral variability. The vMRT will be useful in the place of the MRT in experimental contexts in which researchers' designs require a visual-based primary task.
Task set induces dynamic reallocation of resources in visual short-term memory.
Sheremata, Summer L; Shomstein, Sarah
2017-08-01
Successful interaction with the environment requires the ability to flexibly allocate resources to different locations in the visual field. Recent evidence suggests that visual short-term memory (VSTM) resources are distributed asymmetrically across the visual field based upon task demands. Here, we propose that context, rather than the stimulus itself, determines asymmetrical distribution of VSTM resources. To test whether context modulates the reallocation of resources to the right visual field, task set, defined by memory load, was manipulated to influence visual short-term memory performance. Performance was measured for single-feature objects embedded within predominantly single- or two-feature memory blocks. Therefore, context was varied to determine whether task set directly predicts changes in visual field biases. In accord with the dynamic reallocation of resources hypothesis, task set, rather than aspects of the physical stimulus, drove improvements in performance in the right visual field. Our results show, for the first time, that preparation for upcoming memory demands directly determines how resources are allocated across the visual field.
Performance drifts in two-finger cyclical force production tasks performed by one and two actors.
Hasanbarani, Fariba; Reschechtko, Sasha; Latash, Mark L
2018-03-01
We explored changes in a cyclical two-finger force production task caused by turning visual feedback off; the task was performed either by the index and middle fingers of the dominant hand or by the index fingers of two persons. Based on an earlier study, we expected drifts in finger force amplitude and midpoint without a drift in relative phase. The subjects performed two rhythmical tasks at 1 Hz while paced by an auditory metronome. One of the tasks required cyclical changes in total force magnitude without changes in the sharing of the force between the two fingers. The other task required cyclical changes in the force sharing without changing total force magnitude. Subjects were provided with visual feedback, which showed total force magnitude and force sharing via cursor motion along the vertical and horizontal axes, respectively. Visual feedback was then turned off, first on the variable that was not required to change and then on both variables. Turning visual feedback off led to a mean force drift toward lower magnitudes while force amplitude increased. There was a consistent drift in the relative phase in the one-hand task, with the index finger leading the middle finger. No consistent relative phase drift was seen in the two-person tasks. The shape of the force cycle also changed without visual feedback, as reflected in lower similarity to a perfect cosine shape and in more time spent at lower force magnitudes. The data confirm findings of earlier studies regarding force amplitude and midpoint changes, but falsify predictions of an earlier proposed model with respect to relative phase changes. We discuss factors that could contribute to the observed relative phase drift in the one-hand tasks, including the leader-follower pattern generalized to two-effector tasks performed by one person.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-31
... interfere with the effective use of outside visual references for required pilot tasks. 2. To avoid... the requirements of the original approval. 3. The safety and performance of the pilot tasks associated....773 does not permit visual distortions and reflections that could interfere with the pilot's normal...
Multitasking During Degraded Speech Recognition in School-Age Children.
Grieco-Calub, Tina M; Ward, Kristina M; Brehm, Laurel
2017-01-01
Multitasking requires individuals to allocate their cognitive resources across different tasks. The purpose of the current study was to assess school-age children's multitasking abilities during degraded speech recognition. Children (8 to 12 years old) completed a dual-task paradigm including a sentence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded with 8, 6, or 4 spectral channels and a visual monitoring (secondary) task. Children's accuracy and reaction time on the visual monitoring task was quantified during the dual-task paradigm in each condition of the primary task and compared with single-task performance. Children experienced dual-task costs in the 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the visual monitoring task relative to baseline performance. In all conditions, children's dual-task performance on the visual monitoring task was strongly predicted by their single-task (baseline) performance on the task. Results suggest that children's proficiency with the secondary task contributes to the magnitude of dual-task costs while multitasking during degraded speech recognition.
Task-Driven Evaluation of Aggregation in Time Series Visualization
Albers, Danielle; Correll, Michael; Gleicher, Michael
2014-01-01
Many visualization tasks require the viewer to make judgments about aggregate properties of data. Recent work has shown that viewers can perform such tasks effectively, for example to efficiently compare the maximums or means over ranges of data. However, this work also shows that such effectiveness depends on the designs of the displays. In this paper, we explore this relationship between aggregation task and visualization design to provide guidance on matching tasks with designs. We combine prior results from perceptual science and graphical perception to suggest a set of design variables that influence performance on various aggregate comparison tasks. We describe how choices in these variables can lead to designs that are matched to particular tasks. We use these variables to assess a set of eight different designs, predicting how they will support a set of six aggregate time series comparison tasks. A crowd-sourced evaluation confirms these predictions. These results not only provide evidence for how the specific visualizations support various tasks, but also suggest using the identified design variables as a tool for designing visualizations well suited for various types of tasks. PMID:25343147
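The aggregate judgments described above (e.g., deciding which of two ranges of a series has the larger maximum or mean) can be stated compactly. This is an illustrative sketch, not code from the paper; the function name and data are invented:

```python
from statistics import mean

# Illustrative only: the comparison a viewer makes when judging which of
# two ranges of a time series has the larger aggregate value.

def aggregate_compare(series, range_a, range_b, stat=max):
    """Return 'A', 'B', or 'tie' for the range with the larger aggregate."""
    a = stat(series[range_a[0]:range_a[1]])
    b = stat(series[range_b[0]:range_b[1]])
    if a > b:
        return "A"
    if b > a:
        return "B"
    return "tie"

series = [3, 7, 4, 9, 2, 5, 6, 1]
by_max = aggregate_compare(series, (0, 4), (4, 8), stat=max)
by_mean = aggregate_compare(series, (0, 4), (4, 8), stat=mean)
```

Passing a different `stat` (max, mean, min, etc.) changes the task while leaving the comparison structure fixed, which is what lets the same displays be evaluated against several aggregate tasks.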
Selective Maintenance in Visual Working Memory Does Not Require Sustained Visual Attention
Hollingworth, Andrew; Maxcey-Richard, Ashleigh M.
2012-01-01
In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in VWM. Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention. PMID:23067118
The Forest, the Trees, and the Leaves: Differences of Processing across Development
ERIC Educational Resources Information Center
Krakowski, Claire-Sara; Poirel, Nicolas; Vidal, Julie; Roëll, Margot; Pineau, Arlette; Borst, Grégoire; Houdé, Olivier
2016-01-01
To act and think, children and adults are continually required to ignore irrelevant visual information to focus on task-relevant items. As real-world visual information is organized into structures, we designed a feature visual search task containing 3-level hierarchical stimuli (i.e., local shapes that constituted intermediate shapes that formed…
ERIC Educational Resources Information Center
Keehner, Madeleine; Hegarty, Mary; Cohen, Cheryl; Khooshabeh, Peter; Montello, Daniel R.
2008-01-01
Three experiments examined the effects of interactive visualizations and spatial abilities on a task requiring participants to infer and draw cross sections of a three-dimensional (3D) object. The experiments manipulated whether participants could interactively control a virtual 3D visualization of the object while performing the task, and…
To speak or not to speak - A multiple resource perspective
NASA Technical Reports Server (NTRS)
Tsang, P. S.; Hartzell, E. J.; Rothschild, R. A.
1985-01-01
The desirability of employing speech response in a dynamic dual task situation was discussed from a multiple resource perspective. A secondary task technique was employed to examine the time-sharing performance of five dual tasks with various degrees of resource overlap according to the structure-specific resource model of Wickens (1980). The primary task was a visual/manual tracking task which required spatial processing. The secondary task was either another tracking task or a spatial transformation task with one of four input (visual or auditory) and output (manual or speech) configurations. The results show that the dual task performance was best when the primary tracking task was paired with the visual/speech transformation task. This finding was explained by an interaction of the stimulus-central processing-response compatibility of the transformation task and the degree of resource competition between the time-shared tasks. Implications on the utility of speech response were discussed.
The Crosstalk Hypothesis: Why Language Interferes with Driving
ERIC Educational Resources Information Center
Bergen, Benjamin; Medeiros-Ward, Nathan; Wheeler, Kathryn; Drews, Frank; Strayer, David
2013-01-01
Performing two cognitive tasks at the same time can degrade performance for either domain-general reasons (e.g., both tasks require attention) or domain-specific reasons (e.g., both tasks require visual working memory). We tested predictions of these two accounts of interference on the task of driving while using language, a naturally occurring…
Cognitive programs: software for attention's executive
Tsotsos, John K.; Kruijne, Wouter
2014-01-01
What are the computational tasks that an executive controller for visual attention must solve? This question is posed in the context of the Selective Tuning model of attention. The range of required computations go beyond top-down bias signals or region-of-interest determinations, and must deal with overt and covert fixations, process timing and synchronization, information routing, memory, matching control to task, spatial localization, priming, and coordination of bottom-up with top-down information. During task execution, results must be monitored to ensure the expected results. This description includes the kinds of elements that are common in the control of any kind of complex machine or system. We seek a mechanistic integration of the above, in other words, algorithms that accomplish control. Such algorithms operate on representations, transforming a representation of one kind into another, which then forms the input to yet another algorithm. Cognitive Programs (CPs) are hypothesized to capture exactly such representational transformations via stepwise sequences of operations. CPs, an updated and modernized offspring of Ullman's Visual Routines, impose an algorithmic structure to the set of attentional functions and play a role in the overall shaping of attentional modulation of the visual system so that it provides its best performance. This requires that we consider the visual system as a dynamic, yet general-purpose processor tuned to the task and input of the moment. This differs dramatically from the almost universal cognitive and computational views, which regard vision as a passively observing module to which simple questions about percepts can be posed, regardless of task. Differing from Visual Routines, CPs explicitly involve the critical elements of Visual Task Executive (vTE), Visual Attention Executive (vAE), and Visual Working Memory (vWM). Cognitive Programs provide the software that directs the actions of the Selective Tuning model of visual attention. PMID:25505430
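As a toy illustration of the stepwise representational transformations described above, where each stage's output representation feeds the next stage; all stage names and data are invented and this is in no way the Selective Tuning implementation:

```python
from functools import reduce

# Toy sketch: a "cognitive program" as a stepwise sequence of
# representation-to-representation transformations.

def cognitive_program(*stages):
    """Compose stages so each output representation feeds the next stage."""
    return lambda rep: reduce(lambda r, stage: stage(r), stages, rep)

# Invented stages for illustration only.
select_region = lambda image: {"region": image["salient"]}
bind_features = lambda rep: {**rep, "features": ("red", "vertical")}
match_to_task = lambda rep: rep["features"] == ("red", "vertical")

program = cognitive_program(select_region, bind_features, match_to_task)
result = program({"salient": (3, 5)})
```

The composition makes the key property explicit: the executive's job reduces to sequencing and monitoring the stages, not performing the visual computations itself.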
RAVE: Rapid Visualization Environment
NASA Technical Reports Server (NTRS)
Klumpar, D. M.; Anderson, Kevin; Simoudis, Avangelos
1994-01-01
Visualization is used in the process of analyzing large, multidimensional data sets. However, the selection and creation of visualizations that are appropriate for the characteristics of a particular data set and the satisfaction of the analyst's goals is difficult. The process consists of three tasks that are performed iteratively: generate, test, and refine. The performance of these tasks requires the utilization of several types of domain knowledge that data analysts do not often have. Existing visualization systems and frameworks do not adequately support the performance of these tasks. In this paper we present the RApid Visualization Environment (RAVE), a knowledge-based system that interfaces with commercial visualization frameworks and assists a data analyst in quickly and easily generating, testing, and refining visualizations. RAVE was used for the visualization of in situ measurement data captured by spacecraft.
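The generate-test-refine cycle can be sketched generically; all names and the toy goal below are assumptions for illustration, not RAVE's actual interfaces:

```python
# Schematic sketch of an iterative generate-test-refine loop.

def refine_visualization(generate, satisfies_goals, improve, max_iters=10):
    """Generate a candidate, test it against the analyst's goals,
    and refine until a goal-satisfying visualization spec is found."""
    spec = generate()                  # generate
    for _ in range(max_iters):
        if satisfies_goals(spec):      # test
            return spec
        spec = improve(spec)           # refine
    return spec

# Toy usage: "refine" until the number of encoded dimensions matches a goal.
goal_dims = 3
spec = refine_visualization(
    generate=lambda: {"dims": 1},
    satisfies_goals=lambda s: s["dims"] >= goal_dims,
    improve=lambda s: {"dims": s["dims"] + 1},
)
```

A knowledge-based system like the one described would supply the `satisfies_goals` and `improve` steps from domain knowledge that analysts often lack, which is the gap the paper identifies.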
ERIC Educational Resources Information Center
Ausburn, Floyd B.
A U.S. Air Force study was designed to develop instruction based on the supplantation theory, in which tasks are performed (supplanted) for individuals who are unable to perform them due to their cognitive style. The study examined the effects of linear and multiple imagery in presenting a task requiring visual comparison and location to…
ERIC Educational Resources Information Center
Micic, Dragana; Ehrlichman, Howard; Chen, Rebecca
2010-01-01
Non-visual gaze patterns (NVGPs) involve saccades and fixations that spontaneously occur in cognitive activities that are not ostensibly visual. While reasons for their appearance remain obscure, convergent empirical evidence suggests that NVGPs change according to processing requirements of tasks. We examined NVGPs in tasks with long-term memory…
Pilot Task Profiles, Human Factors, And Image Realism
NASA Astrophysics Data System (ADS)
McCormick, Dennis
1982-06-01
Computer Image Generation (CIG) visual systems provide real time scenes for state-of-the-art flight training simulators. The visual system requires a greater understanding of training tasks, human factors, and the concept of image realism to produce an effective and efficient training scene than is required by other types of visual systems. Image realism must be defined in terms of pilot visual information requirements. Human factors analysis of training and perception is necessary to determine the pilot's information requirements. System analysis then determines how the CIG and display device can best provide essential information to the pilot. This analysis procedure ensures optimum training effectiveness and system performance.
ERIC Educational Resources Information Center
Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.
2010-01-01
The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial…
Developmental changes in the inferior frontal cortex for selecting semantic representations
Lee, Shu-Hui; Booth, James R.; Chen, Shiou-Yuan; Chou, Tai-Li
2012-01-01
Functional magnetic resonance imaging (fMRI) was used to examine the neural correlates of semantic judgments to Chinese words in a group of 10–15 year old Chinese children. Two semantic tasks were used: visual–visual versus visual–auditory presentation. The first word was visually presented (i.e. character) and the second word was either visually or auditorily presented, and the participant had to determine if these two words were related in meaning. Different from English, Chinese has many homophones in which each spoken word corresponds to many characters. The visual–auditory task, therefore, required greater engagement of cognitive control for the participants to select a semantically appropriate answer for the second homophonic word. Weaker association pairs produced greater activation in the mid-ventral region of left inferior frontal gyrus (BA 45) for both tasks. However, this effect was stronger for the visual–auditory task than for the visual–visual task and this difference was stronger for older compared to younger children. The findings suggest greater involvement of semantic selection mechanisms in the cross-modal task requiring the access of the appropriate meaning of homophonic spoken words, especially for older children. PMID:22337757
Visual scanning with or without spatial uncertainty and time-sharing performance
NASA Technical Reports Server (NTRS)
Liu, Yili; Wickens, Christopher D.
1989-01-01
An experiment is reported that examines the pattern of task interference between visual scanning as a sequential and selective attention process and other concurrent spatial or verbal processing tasks. A distinction is proposed between visual scanning with or without spatial uncertainty regarding the possible differential effects of these two types of scanning on interference with other concurrent processes. The experiment required the subject to perform a simulated primary tracking task, which was time-shared with a secondary spatial or verbal decision task. The relevant information that was needed to perform the decision tasks was displayed with or without spatial uncertainty. The experiment employed a 2 x 2 x 2 design with types of scanning (with or without spatial uncertainty), expected scanning distance (low/high), and codes of concurrent processing (spatial/verbal) as the three experimental factors. The results provide strong evidence that visual scanning as a spatial exploratory activity produces greater task interference with concurrent spatial tasks than with concurrent verbal tasks. Furthermore, spatial uncertainty in visual scanning is identified to be the crucial factor in producing this differential effect.
Foxe, John J; Murphy, Jeremy W; De Sanctis, Pierfilippo
2014-06-01
We assessed the role of alpha-band oscillatory activity during a task-switching design that required participants to switch between an auditory and a visual task, while task-relevant audiovisual inputs were simultaneously presented. Instructional cues informed participants which task to perform on a given trial and we assessed alpha-band power in the short 1.35-s period intervening between the cue and the task-imperative stimuli, on the premise that attentional biasing mechanisms would be deployed to resolve competition between the auditory and visual inputs. Prior work had shown that alpha-band activity was differentially deployed depending on the modality of the cued task. Here, we asked whether this activity would, in turn, be differentially deployed depending on whether participants had just made a switch of task or were being asked to simply repeat the task. It is well established that performance speed and accuracy are poorer on switch than on repeat trials. Here, however, the use of instructional cues completely mitigated these classic switch-costs. Measures of alpha-band synchronisation and desynchronisation showed that there was indeed greater and earlier differential deployment of alpha-band activity on switch vs. repeat trials. Contrary to our hypothesis, this differential effect was entirely due to changes in the amount of desynchronisation observed during switch and repeat trials of the visual task, with more desynchronisation over both posterior and frontal scalp regions during switch-visual trials. These data imply that particularly vigorous, and essentially fully effective, anticipatory biasing mechanisms resolved the competition between competing auditory and visual inputs when a rapid switch of task was required. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
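A minimal, stdlib-only sketch of estimating alpha-band (8-12 Hz) power, in the spirit of the measures above; this is not the authors' pipeline, and the sampling rate and test signals are invented:

```python
import math

# Hedged sketch: alpha-band power as the sum of squared DFT magnitudes
# for frequency bins falling inside the band.

def band_power(signal, fs, f_lo=8.0, f_hi=12.0):
    """Sum of squared DFT magnitudes for bins in [f_lo, f_hi] Hz."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / n
    return power

fs = 250  # Hz (invented)
t = [i / fs for i in range(fs)]  # 1 s of samples
alpha = [math.sin(2 * math.pi * 10 * ti) for ti in t]  # 10 Hz: inside the band
beta = [math.sin(2 * math.pi * 20 * ti) for ti in t]   # 20 Hz: outside the band

alpha_power = band_power(alpha, fs)
beta_power = band_power(beta, fs)
```

A 10 Hz component yields large band power while a 20 Hz component yields essentially none, which is the contrast that synchronisation/desynchronisation measures track over time.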
MacLean, Mary H; Giesbrecht, Barry
2015-07-01
Task-relevant and physically salient features influence visual selective attention. In the present study, we investigated the influence of task-irrelevant and physically nonsalient reward-associated features on visual selective attention. Two hypotheses were tested: One predicts that the effects of target-defining task-relevant and task-irrelevant features interact to modulate visual selection; the other predicts that visual selection is determined by the independent combination of relevant and irrelevant feature effects. These alternatives were tested using a visual search task that contained multiple targets, placing a high demand on the need for selectivity, and that was data-limited and required unspeeded responses, emphasizing early perceptual selection processes. One week prior to the visual search task, participants completed a training task in which they learned to associate particular colors with a specific reward value. In the search task, the reward-associated colors were presented surrounding targets and distractors, but were neither physically salient nor task-relevant. In two experiments, the irrelevant reward-associated features influenced performance, but only when they were presented in a task-relevant location. The costs induced by the irrelevant reward-associated features were greater when they oriented attention to a target than to a distractor. In a third experiment, we examined the effects of selection history in the absence of reward history and found that the interaction between task relevance and selection history differed, relative to when the features had previously been associated with reward. The results indicate that under conditions that demand highly efficient perceptual selection, physically nonsalient task-irrelevant and task-relevant factors interact to influence visual selective attention.
Task-set inertia and memory-consolidation bottleneck in dual tasks.
Koch, Iring; Rumiati, Raffaella I
2006-11-01
Three dual-task experiments examined the influence of processing a briefly presented visual object for deferred verbal report on performance in an unrelated auditory-manual reaction time (RT) task. RT was increased at short stimulus-onset asynchronies (SOAs) relative to long SOAs, showing that memory consolidation processes can produce a functional processing bottleneck in dual-task performance. In addition, the experiments manipulated the spatial compatibility of the orientation of the visual object and the side of the speeded manual response. This cross-task compatibility produced relative RT benefits only when the instruction for the visual task emphasized overlap at the level of response codes across the task sets (Experiment 1). However, once the effective task set was in place, it continued to produce cross-task compatibility effects even in single-task situations ("ignore" trials in Experiment 2) and when instructions for the visual task did not explicitly require spatial coding of object orientation (Experiment 3). Taken together, the data suggest a considerable degree of task-set inertia in dual-task performance, which is also reinforced by finding costs of switching task sequences (e.g., AC --> BC vs. BC --> BC) in Experiment 3.
Method matters: Systematic effects of testing procedure on visual working memory sensitivity
Makovski, Tal; Watson, Leah M.; Koutstaal, Wilma; Jiang, Yuhong V.
2010-01-01
Visual working memory (WM) is traditionally considered a robust form of visual representation that survives changes in object motion, observer's position, and other visual transients. This study presents data that are inconsistent with the traditional view. We show that memory sensitivity is dramatically influenced by small variations in the testing procedure, supporting the idea that representations in visual WM are susceptible to interference from testing. In this study, participants were shown an array of colors to remember. After a short retention interval, memory for one of the items was tested with either a same-different task or a 2-alternative-forced-choice (2AFC) task. Memory sensitivity was much lower in the 2AFC task than in the same-different task. This difference was found regardless of encoding similarity or whether visual WM required a fine memory resolution or a coarse resolution. The 2AFC disadvantage was reduced when participants were informed shortly before testing which item would be probed. The 2AFC disadvantage diminished in perceptual tasks and was not found in tasks probing visual long-term memory. These results support memory models that acknowledge the labile nature of visual WM, and have implications for the format of visual WM and its assessment. PMID:20854011
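For context on why the testing procedure matters, signal-detection theory maps proportion correct to sensitivity (d') differently for different tasks. Below is a sketch for the unbiased 2AFC case; the same-different task requires a different detection model, which is one reason measured sensitivity can diverge across procedures. Example values are invented:

```python
from statistics import NormalDist

# Standard conversion for an unbiased 2AFC task: d' = sqrt(2) * z(pc),
# where z is the inverse of the standard normal CDF.

def dprime_2afc(proportion_correct):
    return 2 ** 0.5 * NormalDist().inv_cdf(proportion_correct)

d_weak = dprime_2afc(0.65)    # lower proportion correct -> lower d'
d_strong = dprime_2afc(0.90)  # higher proportion correct -> higher d'
```

Chance performance (pc = 0.5) maps to d' = 0, and d' grows monotonically with proportion correct, so differences in raw accuracy between test formats can be compared on a common sensitivity scale.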
Empiric determination of corrected visual acuity standards for train crews.
Schwartz, Steven H; Swanson, William H
2005-08-01
Probably the most common visual standard for employment in the transportation industry is best-corrected, high-contrast visual acuity. Because such standards were often established absent empiric linkage to job performance, it is possible that a job applicant or employee who has visual acuity less than the standard may be able to satisfactorily perform the required job activities. For the transportation system that we examined, the train crew is required to inspect visually the length of the train before and during the time it leaves the station. The purpose of the inspection is to determine if an individual is in a hazardous position with respect to the train. In this article, we determine the extent to which high-contrast visual acuity can predict performance on a simulated task. Performance at discriminating hazardous from safe conditions, as depicted in projected photographic slides, was determined as a function of visual acuity. For different levels of visual acuity, which was varied through the use of optical defocus, a subject was required to label scenes as hazardous or safe. Task performance was highly correlated with visual acuity as measured under conditions normally used for vision screenings (high-illumination and high-contrast): as the acuity decreases, performance at discriminating hazardous from safe scenes worsens. This empirically based methodology can be used to establish a corrected high-contrast visual acuity standard for safety-sensitive work in transportation that is linked to the performance of a job-critical task.
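The empiric approach described, relating acuity to task performance, amounts to correlating the two measures. A sketch with invented data, assuming acuity is coded in logMAR (higher = worse); this is not the authors' analysis code:

```python
import math

# Illustrative Pearson correlation between visual acuity and
# hazard-discrimination accuracy. All data values are invented.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

acuity_logmar = [0.0, 0.2, 0.4, 0.6, 0.8]  # worse acuity ->
accuracy = [0.98, 0.93, 0.85, 0.74, 0.60]  # lower discrimination accuracy
r = pearson_r(acuity_logmar, accuracy)
```

A strongly negative r on data like these is the kind of empiric linkage that can anchor an acuity standard to a job-critical task.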
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Bowles, R. L.
1983-01-01
This paper addresses the issues of motion/visual cueing fidelity requirements for vortex encounters during simulated transport visual approaches and landings. Four simulator configurations were utilized to provide objective performance measures during simulated vortex penetrations, and subjective comments from pilots were collected. The configurations used were as follows: fixed base with visual degradation (delay), fixed base with no visual degradation, moving base with visual degradation (delay), and moving base with no visual degradation. The statistical comparisons of the objective measures and the subjective pilot opinions indicated that although both minimum visual delay and motion cueing are recommended for the vortex penetration task, the visual-scene delay characteristics were not as significant a fidelity factor as was the presence of motion cues. However, this indication was applicable to a restricted task, and to transport aircraft. Although they were statistically significant, the effects of visual delay and motion cueing on the touchdown-related measures were considered to be of no practical consequence.
Reschechtko, Sasha; Hasanbarani, Fariba; Akulin, Vladimir M; Latash, Mark L
2017-05-14
The study explored unintentional force changes elicited by removing visual feedback during cyclical, two-finger isometric force production tasks. Subjects performed two types of tasks at 1Hz, paced by an auditory metronome. One - Force task - required cyclical changes in total force while maintaining the sharing, defined as relative contribution of a finger to total force. The other task - Share task - required cyclical changes in sharing while keeping total force unchanged. Each trial started under full visual feedback on both force and sharing; subsequently, feedback on the variable that was instructed to stay constant was frozen, and finally feedback on the other variable was also removed. In both tasks, turning off visual feedback on total force elicited a drop in the mid-point of the force cycle and an increase in the peak-to-peak force amplitude. Turning off visual feedback on sharing led to a drift of mean share toward 50:50 across both tasks. Without visual feedback there was consistent deviation of the two force time series from the in-phase pattern (typical of the Force task) and from the out-of-phase pattern (typical of the Share task). This finding is in contrast to most earlier studies that demonstrated only two stable patterns, in-phase and out-of-phase. We interpret the results as consequences of drifts of parameters in a dynamical system leading in particular to drifts in the referent finger coordinates toward their actual coordinates. The relative phase desynchronization is caused by the right-left differences in the hypothesized drift processes, consistent with the dynamic dominance hypothesis. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Distractor devaluation requires visual working memory.
Goolsby, Brian A; Shapiro, Kimron L; Raymond, Jane E
2009-02-01
Visual stimuli seen previously as distractors in a visual search task are subsequently evaluated more negatively than those seen as targets. An attentional inhibition account for this distractor-devaluation effect posits that associative links between attentional inhibition and to-be-ignored stimuli are established during search, stored, and then later reinstantiated, implying that distractor devaluation may require visual working memory (WM) resources. To assess this, we measured distractor devaluation with and without a concurrent visual WM load. Participants viewed a memory array, performed a simple search task, evaluated one of the search items (or a novel item), and then viewed a memory test array. Although distractor devaluation was observed with low (and no) WM load, it was absent when WM load was increased. This result supports the notions that active association of current attentional states with stimuli requires WM and that memory for these associations plays a role in affective response.
Visual Processing on Graphics Task: The Case of a Street Map
ERIC Educational Resources Information Center
Logan, Tracy; Lowrie, Tom
2013-01-01
Tracy Logan and Tom Lowrie argue that while little attention is given to visual imagery and spatial reasoning within the Australian Curriculum, a significant proportion of National Assessment Program--Literacy and Numeracy (NAPLAN) tasks require high levels of visuospatial reasoning. This article includes teaching ideas to promote visuospatial…
Visual field tunneling in aviators induced by memory demands.
Williams, L J
1995-04-01
Aviators are required rapidly and accurately to process enormous amounts of visual information located foveally and peripherally. The present study, expanding upon an earlier study (Williams, 1988), required young aviators to process within the framework of a single eye fixation a briefly displayed foveally presented memory load while simultaneously trying to identify common peripheral targets presented on the same display at locations up to 4.5 degrees of visual angle from the fixation point. This task, as well as a character classification task (Williams, 1985, 1988), has been shown to be very difficult for nonaviators: It results in a tendency toward tunnel vision. Limited preliminary measurements of peripheral accuracy suggested that aviators might be less susceptible than nonaviators to this visual tunneling. The present study demonstrated moderate susceptibility to cognitively induced tunneling in aviators when the foveal task was sufficiently difficult and reaction time was the principal dependent measure.
Stephan, Denise Nadine; Koch, Iring
2016-11-01
The present study was aimed at examining modality-specific influences in task switching. To this end, participants switched either between modality compatible tasks (auditory-vocal and visual-manual) or incompatible spatial discrimination tasks (auditory-manual and visual-vocal). In addition, auditory and visual stimuli were presented simultaneously (i.e., bimodally) in each trial, so that selective attention was required to process the task-relevant stimulus. The inclusion of bimodal stimuli enabled us to assess congruence effects as a converging measure of increased between-task interference. The tasks followed a pre-instructed sequence of double alternations (AABB), so that no explicit task cues were required. The results show that switching between two modality incompatible tasks increases both switch costs and congruence effects compared to switching between two modality compatible tasks. The finding of increased congruence effects in modality incompatible tasks supports our explanation in terms of ideomotor "backward" linkages between anticipated response effects and the stimuli that called for this response in the first place. According to this generalized ideomotor idea, the modality match between response effects and stimuli would prime selection of a response in the compatible modality. This priming would cause increased difficulties to ignore the competing stimulus and hence increases the congruence effect. Moreover, performance would be hindered when switching between modality incompatible tasks and facilitated when switching between modality compatible tasks.
Distinct Effects of Trial-Driven and Task Set-Related Control in Primary Visual Cortex
Vaden, Ryan J.; Visscher, Kristina M.
2015-01-01
Task sets are task-specific configurations of cognitive processes that facilitate task-appropriate reactions to stimuli. While it is established that the trial-by-trial deployment of visual attention to expected stimuli influences neural responses in primary visual cortex (V1) in a retinotopically specific manner, it is not clear whether the mechanisms that help maintain a task set over many trials also operate with similar retinotopic specificity. Here, we address this question by using BOLD fMRI to characterize how portions of V1 that are specialized for different eccentricities respond during distinct components of an attention-demanding discrimination task: cue-driven preparation for a trial, trial-driven processing, task-initiation at the beginning of a block of trials, and task-maintenance throughout a block of trials. Tasks required either unimodal attention to an auditory or a visual stimulus or selective intermodal attention to the visual or auditory component of simultaneously presented visual and auditory stimuli. We found that while the retinotopic patterns of trial-driven and cue-driven activity depended on the attended stimulus, the retinotopic patterns of task-initiation and task-maintenance activity did not. Further, only the retinotopic patterns of trial-driven activity were found to depend on the presence of intermodal distraction. Participants who performed well on the intermodal selective attention tasks showed strong task-specific modulations of both trial-driven and task-maintenance activity. Importantly, task-related modulations of trial-driven and task-maintenance activity were in opposite directions. Together, these results confirm that there are (at least) two different processes for top-down control of V1: One, working trial-by-trial, differently modulates activity across different eccentricity sectors—portions of V1 corresponding to different visual eccentricities. The second process works across longer epochs of task performance, and does not differ among eccentricity sectors. These results are discussed in the context of previous literature examining top-down control of visual cortical areas. PMID:26163806
Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis
2014-01-01
Human subjects performed in several behavioral conditions requiring, or not requiring, selective attention to visual stimuli. Specifically, the attentional task was to recognize strings of digits that had been presented visually. A nonlinear version of the stimulus-frequency otoacoustic emission (SFOAE), called the nSFOAE, was collected during the visual presentation of the digits. The segment of the physiological response discussed here occurred during brief silent periods immediately following the SFOAE-evoking stimuli. For all subjects tested, the physiological-noise magnitudes were substantially weaker (less noisy) during the tasks requiring the most visual attention. Effect sizes for the differences were >2.0. Our interpretation is that cortico-olivo influences adjusted the magnitude of efferent activation during the SFOAE-evoking stimulation depending upon the attention task in effect, and then that magnitude of efferent activation persisted throughout the silent period where it also modulated the physiological noise present. Because the results were highly similar to those obtained when the behavioral conditions involved auditory attention, similar mechanisms appear to operate both across modalities and within modalities. Supplementary measurements revealed that the efferent activation was spectrally global, as it was for auditory attention. PMID:24732070
Conceptual design study for a teleoperator visual system, phase 1
NASA Technical Reports Server (NTRS)
Adams, D.; Grant, C.; Johnson, C.; Meirick, R.; Polhemus, C.; Ray, A.; Rittenhouse, D.; Skidmore, R.
1972-01-01
Results are reported for work performed during the first phase of the conceptual design study for a teleoperator visual system. This phase consists of four tasks: general requirements, concept development, subsystem requirements and analysis, and concept evaluation.
American Standard Guide for School Lighting.
ERIC Educational Resources Information Center
Illuminating Engineering Society, New York, NY.
This is a guide for school lighting, designed for educators as well as architects. It makes use of recent research, notably the Blackwell report on evaluation of visual tasks. The guide begins with an overview of changing goals and needs of school lighting, and a tabulation of common classroom visual tasks that require variations in lighting.…
ERIC Educational Resources Information Center
Ram-Tsur, Ronit; Faust, Miriam; Zivotofsky, Ari Z.
2008-01-01
The present study investigates the performance of persons with reading disabilities (PRD) on a variety of sequential visual-comparison tasks that have different working-memory requirements. In addition, mediating relationships between the sequential comparison process and attention and memory skills were looked for. Our findings suggest that PRD…
Earth orbital teleoperator visual system evaluation program
NASA Technical Reports Server (NTRS)
Shields, N. L., Jr.; Kirkpatrick, M., III; Frederick, P. N.; Malone, T. B.
1975-01-01
Empirical tests of range estimation accuracy and resolution, via television, under monoptic and stereoptic viewing conditions are discussed. Test data are used to derive man-machine interface requirements and make design decisions for an orbital remote manipulator system. Remote manipulator system visual tasks are given, and the effects of system parameters on these tasks are evaluated.
NASA Technical Reports Server (NTRS)
Groce, J. L.; Boucek, G. P.
1988-01-01
This study is a continuation of an FAA effort to alleviate the growing problems of assimilating and managing the flow of data and flight related information in the air transport flight deck. The nature and extent of known pilot interface problems arising from new NAS data management programs were determined by a comparative timeline analysis of crew tasking requirements. A baseline of crew tasking requirements was established for conventional and advanced flight decks operating in the current NAS environment and then compared to the requirements for operation in a future NAS environment emphasizing Mode-S data link and TCAS. Results showed that a CDU-based pilot interface for Mode-S data link substantially increased crew visual activity as compared to the baseline. It was concluded that alternative means of crew interface should be available during high visual workload phases of flight. Results for TCAS implementation showed substantial visual and motor tasking increases, and that there was little available time between crew tasks during a TCAS encounter. It was concluded that additional research should be undertaken to address issues of ATC coordination and the relative benefit of high workload TCAS features.
Learning Enhances Sensory and Multiple Non-sensory Representations in Primary Visual Cortex
Poort, Jasper; Khan, Adil G.; Pachitariu, Marius; Nemri, Abdellatif; Orsolic, Ivana; Krupic, Julija; Bauza, Marius; Sahani, Maneesh; Keller, Georg B.; Mrsic-Flogel, Thomas D.; Hofer, Sonja B.
2015-01-01
Summary We determined how learning modifies neural representations in primary visual cortex (V1) during acquisition of a visually guided behavioral task. We imaged the activity of the same layer 2/3 neuronal populations as mice learned to discriminate two visual patterns while running through a virtual corridor, where one pattern was rewarded. Improvements in behavioral performance were closely associated with increasingly distinguishable population-level representations of task-relevant stimuli, as a result of stabilization of existing and recruitment of new neurons selective for these stimuli. These effects correlated with the appearance of multiple task-dependent signals during learning: those that increased neuronal selectivity across the population when expert animals engaged in the task, and those reflecting anticipation or behavioral choices specifically in neuronal subsets preferring the rewarded stimulus. Therefore, learning engages diverse mechanisms that modify sensory and non-sensory representations in V1 to adjust its processing to task requirements and the behavioral relevance of visual stimuli. PMID:26051421
Coding the presence of visual objects in a recurrent neural network of visual cortex.
Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard
2007-01-01
Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.
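The three-stage architecture this abstract describes (a feed-forward grouping sweep, an object-presence stage that encodes position invariant of form, and feedback that facilitates low-level contour detectors) can be caricatured in a few lines. The sketch below is a deliberately toy 1-D version with invented layer sizes, gains, and thresholds; it illustrates the feed-forward/feedback logic, not the authors' actual model:

```python
import numpy as np

# Toy 1-D sketch of a three-stage network: feed-forward grouping establishes
# object presence and position; feedback then facilitates low-level "contour"
# units belonging to the detected object, loosely mimicking the
# border-ownership-like modulation described above. All parameters invented.

def run_network(contours, pool=5, gain=1.5, thresh=0.5):
    contours = np.asarray(contours, dtype=float)
    n = contours.size
    # Stage 2: lateral grouping, modeled here as local averaging
    # (a crude stand-in for "good continuation").
    kernel = np.ones(pool) / pool
    grouped = np.convolve(contours, kernel, mode="same")
    # Stage 3: object-presence units pool stage-2 activity;
    # position is encoded as the location of peak grouped activity.
    present = grouped.max() > thresh
    position = int(np.argmax(grouped)) if present else None
    # Feedback: facilitate low-level detectors near the detected object,
    # leaving background clutter unboosted.
    facilitated = contours.copy()
    if present:
        lo, hi = max(0, position - pool), min(n, position + pool + 1)
        facilitated[lo:hi] *= gain
    return present, position, facilitated

# A "scene": weak isolated clutter plus a coherent object contour.
scene = np.zeros(40)
scene[5] = 0.2        # isolated clutter edge
scene[18:23] = 0.8    # contiguous contour belonging to an object
present, pos, fac = run_network(scene)
```

The design point the sketch preserves is that presence and position are computed without identifying the object's form, and only then does feedback act selectively on the lower area.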
Visual and skill effects on soccer passing performance, kinematics, and outcome estimations
Basevitch, Itay; Tenenbaum, Gershon; Land, William M.; Ward, Paul
2015-01-01
The role of visual information and action representations in executing a motor task was examined from a mental representations approach. High-skill (n = 20) and low-skill (n = 20) soccer players performed a passing task to two targets at distances of 9.14 and 18.29 m, under three visual conditions: normal, occluded, and distorted vision (i.e., +4.0 corrective lenses, a visual acuity of approximately 6/75) without knowledge of results. Following each pass, participants estimated the relative horizontal distance from the target as the ball crossed the target plane. Kinematic data during each pass were also recorded for the shorter distance. Results revealed that performance on the motor task decreased as a function of visual information and task complexity (i.e., distance from target) regardless of skill level. High-skill players performed significantly better than low-skill players on both the actual passing and estimation tasks, at each target distance and visual condition. In addition, kinematic data indicated that high-skill participants were more consistent and had different kinematic movement patterns than low-skill participants. Findings contribute to the understanding of the underlying mechanisms required for successful performance in a self-paced, discrete and closed motor task. PMID:25784886
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.; Kaiser, Mary K.
2013-01-01
Although current simulator visual systems can achieve extremely high levels of realism, they do not completely replicate the experience of a pilot sitting in the cockpit, looking at the outside world. Some differences in experience are due to visual artifacts, or perceptual features that would not be present in a naturally viewed scene. Others are due to features that are missing from the simulated scene. In this paper, these differences will be defined and discussed. The significance of these differences will be examined as a function of several particular operational tasks. A framework to facilitate the choice of visual system characteristics based on operational task requirements will be proposed.
Attentional load inhibits vection.
Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji
2011-07-01
In this study, we examined the effects of cognitive task performance on the induction of vection. We hypothesized that, if vection requires attentional resources, performing cognitive tasks requiring attention should inhibit or weaken it. Experiment 1 tested the effects on vection of simultaneously performing a rapid serial visual presentation (RSVP) task. The results revealed that the RSVP task affected the subjective strength of vection. Experiment 2 tested the effects of a multiple-object-tracking (MOT) task on vection. Simultaneous performance of the MOT task decreased the duration and subjective strength of vection. Taken together, these findings suggest that vection induction requires attentional resources.
Sheremata, Summer L; Somers, David C; Shomstein, Sarah
2018-02-07
Visual short-term memory (VSTM) and attention are distinct yet interrelated processes. While both require selection of information across the visual field, memory additionally requires the maintenance of information across time and distraction. VSTM recruits areas within human (male and female) dorsal and ventral parietal cortex that are also implicated in spatial selection; therefore, it is important to determine whether overlapping activation might reflect shared attentional demands. Here, identical stimuli and controlled sustained attention across both tasks were used to ask whether fMRI signal amplitude, functional connectivity, and contralateral visual field bias reflect memory-specific task demands. While attention and VSTM activated similar cortical areas, BOLD amplitude and functional connectivity in parietal cortex differentiated the two tasks. Relative to attention, VSTM increased BOLD amplitude in dorsal parietal cortex and decreased BOLD amplitude in the angular gyrus. Additionally, the tasks differentially modulated parietal functional connectivity. Contrasting VSTM and attention, intraparietal sulcus (IPS) 1-2 were more strongly connected with anterior frontoparietal areas and more weakly connected with posterior regions. This divergence between tasks demonstrates that parietal activation reflects memory-specific functions and consequently modulates functional connectivity across the cortex. In contrast, both tasks demonstrated hemispheric asymmetries for spatial processing, exhibiting a stronger contralateral visual field bias in the left versus the right hemisphere across tasks, suggesting that asymmetries are characteristic of a shared selection process in IPS. These results demonstrate that parietal activity and patterns of functional connectivity distinguish VSTM from more general attention processes, establishing a central role of the parietal cortex in maintaining visual information. 
SIGNIFICANCE STATEMENT Visual short-term memory (VSTM) and attention are distinct yet interrelated processes. Cognitive mechanisms and neural activity underlying these tasks show a large degree of overlap. To examine whether activity within the posterior parietal cortex (PPC) reflects object maintenance across distraction or sustained attention per se, it is necessary to control for attentional demands inherent in VSTM tasks. We demonstrate that activity in PPC reflects VSTM demands even after controlling for attention; remembering items across distraction modulates relationships between parietal and other areas differently than during periods of sustained attention. Our study fills a gap in the literature by directly comparing and controlling for overlap between visual attention and VSTM tasks.
Harbluk, Joanne L; Noy, Y Ian; Trbovich, Patricia L; Eizenman, Moshe
2007-03-01
In this on-road experiment, drivers performed demanding cognitive tasks while driving in city traffic. All task interactions were carried out in hands-free mode so that the 21 drivers were not required to take their visual attention away from the road or to manually interact with a device inside the vehicle. Visual behavior and vehicle control were assessed while they drove an 8 km city route under three conditions: no additional task, easy cognitive task and difficult cognitive task. Changes in visual behavior were most apparent when performance between the No Task and Difficult Task conditions were compared. When looking outside of the vehicle, drivers spent more time looking centrally ahead and spent less time looking to the areas in the periphery. Drivers also reduced their visual monitoring of the instruments and mirrors, with some drivers abandoning these tasks entirely. When approaching and driving through intersections, drivers made fewer inspection glances to traffic lights compared to the No Task condition and their scanning of intersection areas to the right was also reduced. Vehicle control was also affected; during the most difficult cognitive tasks there were more occurrences of hard braking. Although hands-free designs for telematics devices are intended to reduce or eliminate the distraction arising from manual operation of these units, the potential for cognitive distraction associated with their use must also be considered and appropriately assessed. These changes are captured in measures of drivers' visual behavior.
Impaired visual recognition of biological motion in schizophrenia.
Kim, Jejoong; Doop, Mikisha L; Blake, Randolph; Park, Sohee
2005-09-15
Motion perception deficits have been suggested to be an important feature of schizophrenia but the behavioral consequences of such deficits are unknown. Biological motion refers to the movements generated by living beings. The human visual system rapidly and effortlessly detects and extracts socially relevant information from biological motion. A deficit in biological motion perception may have significant consequences for detecting and interpreting social information. Schizophrenia patients and matched healthy controls were tested on two visual tasks: recognition of human activity portrayed in point-light animations (biological motion task) and a perceptual control task involving detection of a grouped figure against the background noise (global-form task). Both tasks required detection of a global form against background noise but only the biological motion task required the extraction of motion-related information. Schizophrenia patients performed as well as the controls in the global-form task, but were significantly impaired on the biological motion task. In addition, deficits in biological motion perception correlated with impaired social functioning as measured by the Zigler social competence scale [Zigler, E., Levine, J. (1981). Premorbid competence in schizophrenia: what is being measured? Journal of Consulting and Clinical Psychology, 49, 96-105.]. The deficit in biological motion processing, which may be related to the previously documented deficit in global motion processing, could contribute to abnormal social functioning in schizophrenia.
Visual selective attention in amnestic mild cognitive impairment.
McLaughlin, Paula M; Anderson, Nicole D; Rich, Jill B; Chertkow, Howard; Murtha, Susan J E
2014-11-01
Subtle deficits in visual selective attention have been found in amnestic mild cognitive impairment (aMCI). However, few studies have explored performance on visual search paradigms or the Simon task, which are known to be sensitive to disease severity in Alzheimer's patients. Furthermore, there is limited research investigating how deficiencies can be ameliorated with exogenous support (auditory cues). Sixteen individuals with aMCI and 14 control participants completed 3 experimental tasks that varied in demand and cue availability: visual search-alerting, visual search-orienting, and Simon task. Visual selective attention was influenced by aMCI, auditory cues, and task characteristics. Visual search abilities were relatively consistent across groups. The aMCI participants were impaired on the Simon task when working memory was required, but conflict resolution was similar to controls. Spatially informative orienting cues improved response times, whereas spatially neutral alerting cues did not influence performance. Finally, spatially informative auditory cues benefited the aMCI group more than controls in the visual search task, specifically at the largest array size where orienting demands were greatest. These findings suggest that individuals with aMCI have working memory deficits and subtle deficiencies in orienting attention and rely on exogenous information to guide attention.
The generation of criteria for selecting analytical tools for landscape management
Marilyn Duffey-Armstrong
1979-01-01
This paper presents an approach to generating criteria for selecting the analytical tools used to assess visual resources for various landscape management tasks. The approach begins by first establishing the overall parameters for the visual assessment task, and follows by defining the primary requirements of the various sets of analytical tools to be used. Finally,...
Task-related modulation of visual neglect in cancellation tasks
Sarri, Margarita; Greenwood, Richard; Kalra, Lalit; Driver, Jon
2008-01-01
Unilateral neglect involves deficits of spatial exploration and awareness that do not always affect a fixed portion of extrapersonal space, but may vary with current stimulation and possibly with task demands. Here, we assessed any ‘top-down’, task-related influences on visual neglect, with novel experimental variants of the cancellation test. Many different versions of the cancellation test are used clinically, and can differ in the extent of neglect revealed, though the exact factors determining this are not fully understood. Few cancellation studies have isolated the influence of top-down factors, as typically the stimuli are changed also when comparing different tests. Within each of three cancellation studies here, we manipulated task factors, while keeping visual displays identical across conditions to equate purely bottom-up factors. Our results show that top-down task-demands can significantly modulate neglect as revealed by cancellation on the same displays. Varying the target/non-target discrimination required for identical displays has a significant impact. Varying the judgement required can also have an impact on neglect even when all items are targets, so that non-targets no longer need filtering out. Requiring local versus global aspects of shape to be judged for the same displays also has a substantial impact, but the nature of discrimination required by the task still matters even when local/global level is held constant (e.g. for different colour discriminations on the same stimuli). Finally, an exploratory analysis of lesions among our neglect patients suggested that top-down task-related influences on neglect, as revealed by the new cancellation experiments here, might potentially depend on right superior temporal gyrus surviving the lesion. PMID:18790703
Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception
ERIC Educational Resources Information Center
Wilson, Amanda H.; Alsius, Agnès; Paré, Martin; Munhall, Kevin G.
2016-01-01
Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…
Signals in inferotemporal and perirhinal cortex suggest an “untangling” of visual target information
Pagan, Marino; Urban, Luke S.; Wohl, Margot P.; Rust, Nicole C.
2013-01-01
Finding sought visual targets requires our brains to flexibly combine working memory information about what we are looking for with visual information about what we are looking at. To investigate the neural computations involved in finding visual targets, we recorded neural responses in inferotemporal (IT) and perirhinal (PRH) cortex as macaque monkeys performed a task that required them to find targets within sequences of distractors. We found similar amounts of total task-specific information in both areas, however, information about whether a target was in view was more accessible using a linear read-out (i.e. was more “untangled”) in PRH. Consistent with the flow of information from IT to PRH, we also found that task-relevant information arrived earlier in IT. PRH responses were well-described by a functional model in which “untangling” computations in PRH reformat input from IT by combining neurons with asymmetric tuning correlations for target matches and distractors. PMID:23792943
Visual Search Elicits the Electrophysiological Marker of Visual Working Memory
Emrich, Stephen M.; Al-Aidroos, Naseem; Pratt, Jay; Ferber, Susanne
2009-01-01
Background: Although limited in capacity, visual working memory (VWM) plays an important role in many aspects of visually-guided behavior. Recent experiments have demonstrated an electrophysiological marker of VWM encoding and maintenance, the contralateral delay activity (CDA), which has been shown in multiple tasks that have both explicit and implicit memory demands. Here, we investigate whether the CDA is evident during visual search, a thoroughly-researched task that is a hallmark of visual attention but has no explicit memory requirements. Methodology/Principal Findings: The results demonstrate that the CDA is present during a lateralized search task, and that it is similar in amplitude to the CDA observed in a change-detection task, but peaks slightly later. The changes in CDA amplitude during search were strongly correlated with VWM capacity, as well as with search efficiency. These results were paralleled by behavioral findings showing a strong correlation between VWM capacity and search efficiency. Conclusions/Significance: We conclude that the activity observed during visual search was generated by the same neural resources that subserve VWM, and that this activity reflects the maintenance of previously searched distractors. PMID:19956663
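The individual-differences result above hinges on a behavioral estimate of VWM capacity. The abstract does not name the estimator, so the snippet below shows the standard single-probe change-detection formula (Cowan's K) as an illustration, not the authors' analysis code:

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K: estimated number of items held in visual working memory,
    from single-probe change-detection hits and false alarms."""
    return set_size * (hit_rate - false_alarm_rate)

# Example: set size 4, 85% hits, 15% false alarms -> roughly 2.8 items.
k = cowans_k(4, 0.85, 0.15)
```

A per-participant K computed this way is the kind of capacity score that could then be correlated with CDA amplitude or search efficiency.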
The effect of compression and attention allocation on speech intelligibility. II
NASA Astrophysics Data System (ADS)
Choi, Sangsook; Carrell, Thomas
2004-05-01
Previous investigations of the effects of amplitude compression on measures of speech intelligibility have shown inconsistent results. Recently, a novel paradigm was used to investigate the possibility of more consistent findings with a measure of speech perception that is not based entirely on intelligibility (Choi and Carrell, 2003). That study exploited a dual-task paradigm using a pursuit rotor online visual-motor tracking task (Dlhopolsky, 2000) along with a word repetition task. Intensity-compressed words caused reduced performance on the tracking task as compared to uncompressed words when subjects engaged in a simultaneous word repetition task. This suggested an increased cognitive load when listeners processed compressed words. A stronger result might be obtained if a single resource (linguistic) is required rather than two (linguistic and visual-motor) resources. In the present experiment a visual lexical decision task and an auditory word repetition task were used. The visual stimuli for the lexical decision task were blurred and presented in a noise background. The compressed and uncompressed words for repetition were placed in speech-shaped noise. Participants with normal hearing and vision conducted word repetition and lexical decision tasks both independently and simultaneously. The pattern of results is discussed and compared to the previous study.
NASA Technical Reports Server (NTRS)
Sitterley, T. E.; Zaitzeff, L. P.; Berge, W. A.
1972-01-01
Flight control and procedural task skill degradation, and the effectiveness of retraining methods were evaluated for a simulated space vehicle approach and landing under instrument and visual flight conditions. Fifteen experienced pilots were trained and then tested after 4 months either without the benefits of practice or with static rehearsal, dynamic rehearsal or with dynamic warmup practice. Performance on both the flight control and procedure tasks degraded significantly after 4 months. The rehearsal methods effectively countered procedure task skill degradation, while dynamic rehearsal or a combination of static rehearsal and dynamic warmup practice was required for the flight control tasks. The quality of the retraining methods appeared to be primarily dependent on the efficiency of visual cue reinforcement.
Attention is required for maintenance of feature binding in visual working memory
Zokaei, Nahid; Heider, Maike; Husain, Masud
2013-01-01
Working memory and attention are intimately connected. However, understanding the relationship between the two is challenging. Currently, there is an important controversy about whether objects in working memory are maintained automatically or require resources that are also deployed for visual or auditory attention. Here we investigated the effects of loading attention resources on precision of visual working memory, specifically on correct maintenance of feature-bound objects, using a dual-task paradigm. Participants were presented with a memory array and were asked to remember either direction of motion of random dot kinematograms of different colour, or orientation of coloured bars. During the maintenance period, they performed a secondary visual or auditory task, with varying levels of load. Following a retention period, they adjusted a coloured probe to match either the motion direction or orientation of stimuli with the same colour in the memory array. This allowed us to examine the effects of an attention-demanding task performed during maintenance on precision of recall on the concurrent working memory task. Systematic increase in attention load during maintenance resulted in a significant decrease in overall working memory performance. Changes in overall performance were specifically accompanied by an increase in feature misbinding errors: erroneous reporting of nontarget motion or orientation. Thus in trials where attention resources were taxed, participants were more likely to respond with nontarget values rather than simply making random responses. Our findings suggest that resources used during attention-demanding visual or auditory tasks also contribute to maintaining feature-bound representations in visual working memory—but not necessarily other aspects of working memory. PMID:24266343
Maheux, Manon; Jolicœur, Pierre
2017-04-01
We examined the role of attention and visual working memory in the evaluation of the number of target stimuli as well as their relative spatial position using the N2pc and the SPCN. Participants performed two tasks: a simple counting task in which they had to determine if a visual display contained one or two coloured items among grey fillers and one in which they had to identify a specific relation between two coloured items. The same stimuli were used for both tasks. Each task was designed to permit an easier evaluation of either the same-coloured or differently-coloured stimuli. We predicted a greater involvement of attention and visual working memory for more difficult stimulus-task pairings. The results confirmed these predictions and suggest that visuospatial configurations that require more time to evaluate induce a greater (and presumably longer) involvement of attention and visual working memory. Copyright © 2017 Elsevier B.V. All rights reserved.
Components of working memory and visual selective attention.
Burnham, Bryan R; Sabia, Matthew; Langan, Catherine
2014-02-01
Load theory (Lavie, N., Hirst, A., De Fockert, J. W., & Viding, E. [2004]. Load theory of selective attention and cognitive control. Journal of Experimental Psychology: General, 133, 339-354.) proposes that control of attention depends on the amount and type of load that is imposed by current processing. Specifically, perceptual load should lead to efficient distractor rejection, whereas working memory load (dual-task coordination) should hinder distractor rejection. Studies support load theory's prediction that working memory load will lead to larger distractor effects; however, these studies used secondary tasks that required only verbal working memory and the central executive. The present study examined which other working memory components (visual, spatial, and phonological) influence visual selective attention. Subjects completed an attentional capture task alone (single-task) or while engaged in a working memory task (dual-task). Results showed that along with the central executive, visual and spatial working memory influenced selective attention, but phonological working memory did not. Specifically, attentional capture was larger when visual or spatial working memory was loaded, but phonological working memory load did not affect attentional capture. The results are consistent with load theory and suggest specific components of working memory influence visual selective attention. PsycINFO Database Record (c) 2014 APA, all rights reserved.
A taxonomy of visualization tasks for the analysis of biological pathway data.
Murray, Paul; McGee, Fintan; Forbes, Angus G
2017-02-15
Understanding complicated networks of interactions and chemical components is essential to solving contemporary problems in modern biology, especially in domains such as cancer and systems research. In these domains, biological pathway data is used to represent chains of interactions that occur within a given biological process. Visual representations can help researchers understand, interact with, and reason about these complex pathways in a number of ways. At the same time, these datasets offer unique challenges for visualization, due to their complexity and heterogeneity. Here, we present a taxonomy of tasks that are regularly performed by researchers who work with biological pathway data. The generation of these tasks was done in conjunction with interviews with several domain experts in biology. These tasks require further classification than is provided by existing taxonomies. We also examine existing visualization techniques that support each task, and we discuss gaps in the existing visualization space revealed by our taxonomy. Our taxonomy is designed to support the development and design of future biological pathway visualization applications. We conclude by suggesting future research directions based on our taxonomy and motivated by the comments received from our domain experts.
Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception
Wilson, Amanda H.; Paré, Martin; Munhall, Kevin G.
2016-01-01
Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel–consonant–vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent conditions (Experiment 1; N = 66). In Experiment 2 (N = 20), participants performed a visual-only speech perception task and in Experiment 3 (N = 20) an audiovisual task while having their gaze behavior monitored using eye-tracking equipment. Results: In the visual-only condition, increasing image resolution led to monotonic increases in performance, and proficient speechreaders were more affected by the removal of high spatial information than were poor speechreaders. The McGurk effect also increased with increasing visual resolution, although it was less affected by the removal of high-frequency information. Observers tended to fixate on the mouth more in visual-only perception, but gaze toward the mouth did not correlate with accuracy of silent speechreading or the magnitude of the McGurk effect. Conclusions: The results suggest that individual differences in silent speechreading and the McGurk effect are not related. This conclusion is supported by differential influences of high-resolution visual information on the 2 tasks and differences in the pattern of gaze. PMID:27537379
1980-02-01
[OCR fragment of a 1980 report from the University of Oklahoma, Norman, College of Education: "Task Analysis Schema Based on Cognitive Style and Supplantation" (F. B. Ausburn). The recoverable portion lists task categories matched to learner traits, e.g., tasks requiring use of kinesthetic or tactile stimuli, paired with (a) visual/haptic preference (preference for kinesthetic stimuli; ability to transform kinesthetic stimuli into visual images; ability to learn directly from tactile or kinesthetic impressions) and (b) field independence/dependence.]
Auditory Confrontation Naming in Alzheimer’s Disease
Brandt, Jason; Bakker, Arnold; Maroof, David Aaron
2010-01-01
Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer’s disease (AD). We developed an Auditory Naming Task, requiring the identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles) and multiple-choice recognition of those not identified. In two separate studies, mild-to-moderate AD patients performed more poorly than cognitively normal elderly on the Auditory Naming Task. This task was also more difficult than two versions of a comparable Visual Naming Task, and correlated more highly with Mini-Mental State Exam score. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal subjects. Nonetheless, our Auditory Naming Task may prove useful in research and clinical practice, especially with visually impaired patients. PMID:20981630
Temporal production and visuospatial processing.
Benuzzi, Francesca; Basso, Gianpaolo; Nichelli, Paolo
2005-12-01
Current models of prospective timing hypothesize that estimated duration is influenced either by the attentional load or by the short-term memory requirements of a concurrent nontemporal task. In the present study, we addressed this issue with four dual-task experiments. In Exp. 1, the effect of memory load on both reaction time and temporal production was proportional to the number of items of a visuospatial pattern to hold in memory. In Exps. 2, 3, and 4, a temporal production task was combined with two visual search tasks involving either pre-attentive or attentional processing. Visual tasks interfered with temporal production: produced intervals were lengthened proportionally to the display size. In contrast, reaction times increased with display size only when a serial, effortful search was required. It appears that memory and perceptual set size, rather than nonspecific attentional or short-term memory load, can influence prospective timing.
Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization
Marai, G. Elisabeta
2018-01-01
Although visualization design models exist in the literature in the form of higher-level methodological frameworks, these models do not present a clear methodological prescription for the domain characterization step. This work presents a framework and end-to-end model for requirements engineering in problem-driven visualization application design. The framework and model are based on the activity-centered design paradigm, which is an enhancement of human-centered design. The proposed activity-centered approach focuses on user tasks and activities, and allows an explicit link between the requirements engineering process with the abstraction stage—and its evaluation—of existing, higher-level visualization design models. In a departure from existing visualization design models, the resulting model: assigns value to a visualization based on user activities; ranks user tasks before the user data; partitions requirements in activity-related capabilities and nonfunctional characteristics and constraints; and explicitly incorporates the user workflows into the requirements process. A further merit of this model is its explicit integration of functional specifications, a concept this work adapts from the software engineering literature, into the visualization design nested model. A quantitative evaluation using two sets of interdisciplinary projects supports the merits of the activity-centered model. The result is a practical roadmap to the domain characterization step of visualization design for problem-driven data visualization. Following this domain characterization model can help remove a number of pitfalls that have been identified multiple times in the visualization design literature. PMID:28866550
Accurate expectancies diminish perceptual distraction during visual search
Sy, Jocelyn L.; Guerin, Scott A.; Stegman, Anna; Giesbrecht, Barry
2014-01-01
The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively “spills-over” to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations using behavior, functional magnetic resonance imaging, and electrophysiology. Expectations were generated using a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean blood oxygenation level dependent responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information. PMID:24904374
Conveying Clinical Reasoning Based on Visual Observation via Eye-Movement Modelling Examples
ERIC Educational Resources Information Center
Jarodzka, Halszka; Balslev, Thomas; Holmqvist, Kenneth; Nystrom, Marcus; Scheiter, Katharina; Gerjets, Peter; Eika, Berit
2012-01-01
Complex perceptual tasks, like clinical reasoning based on visual observations of patients, require not only conceptual knowledge about diagnostic classes but also the skills to visually search for symptoms and interpret these observations. However, medical education so far has focused very little on how visual observation skills can be…
Thinking graphically: Connecting vision and cognition during graph comprehension.
Ratwani, Raj M; Trafton, J Gregory; Boehm-Davis, Deborah A
2008-03-01
Task analytic theories of graph comprehension account for the perceptual and conceptual processes required to extract specific information from graphs. Comparatively, the processes underlying information integration have received less attention. We propose a new framework for information integration that highlights visual integration and cognitive integration. During visual integration, pattern recognition processes are used to form visual clusters of information; these visual clusters are then used to reason about the graph during cognitive integration. In 3 experiments, the processes required to extract specific information and to integrate information were examined by collecting verbal protocol and eye movement data. Results supported the task analytic theories for specific information extraction and the processes of visual and cognitive integration for integrative questions. Further, the integrative processes scaled up as graph complexity increased, highlighting the importance of these processes for integration in more complex graphs. Finally, based on this framework, design principles to improve both visual and cognitive integration are described. PsycINFO Database Record (c) 2008 APA, all rights reserved
McGuckian, Thomas B; Cole, Michael H; Pepping, Gert-Jan
2018-04-01
To visually perceive opportunities for action, athletes rely on the movements of their eyes, head and body to explore their surrounding environment. To date, the specific types of technology and their efficacy for assessing the exploration behaviours of association footballers have not been systematically reviewed. This review aimed to synthesise the visual perception and exploration behaviours of footballers according to the task constraints, action requirements of the experimental task, and level of expertise of the athlete, in the context of the technology used to quantify the visual perception and exploration behaviours of footballers. A systematic search for papers that included keywords related to football, technology, and visual perception was conducted. All 38 included articles utilised eye-movement registration technology to quantify visual perception and exploration behaviour. The experimental domain appears to influence the visual perception behaviour of footballers, however no studies investigated exploration behaviours of footballers in open-play situations. Studies rarely utilised representative stimulus presentation or action requirements. To fully understand the visual perception requirements of athletes, it is recommended that future research seek to validate alternate technologies that are capable of investigating the eye, head and body movements associated with the exploration behaviours of footballers during representative open-play situations.
Visual attention is required for multiple object tracking.
Tran, Annie; Hoffman, James E
2016-12-01
In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or "architectural limits" in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Octopus vulgaris uses visual information to determine the location of its arm.
Gutnick, Tamar; Byrne, Ruth A; Hochner, Binyamin; Kuba, Michael
2011-03-22
Octopuses are intelligent, soft-bodied animals with keen senses that perform reliably in a variety of visual and tactile learning tasks. However, researchers have found them disappointing in that they consistently fail in operant tasks that require them to combine central nervous system reward information with visual and peripheral knowledge of the location of their arms. Wells claimed that in order to filter and integrate an abundance of multisensory inputs that might inform the animal of the position of a single arm, octopuses would need an exceptional computing mechanism, and "There is no evidence that such a system exists in Octopus, or in any other soft bodied animal." Recent electrophysiological experiments, which found no clear somatotopic organization in the higher motor centers, support this claim. We developed a three-choice maze that required an octopus to use a single arm to reach a visually marked goal compartment. Using this operant task, we show for the first time that Octopus vulgaris is capable of guiding a single arm in a complex movement to a location. Thus, we claim that octopuses can combine peripheral arm location information with visual input to control goal-directed complex movements. Copyright © 2011 Elsevier Ltd. All rights reserved.
A Critical Review of the "Motor-Free Visual Perception Test-Fourth Edition" (MVPT-4)
ERIC Educational Resources Information Center
Brown, Ted; Peres, Lisa
2018-01-01
The "Motor-Free Visual Perception Test-fourth edition" (MVPT-4) is a revised version of the "Motor-Free Visual Perception Test-third edition." The MVPT-4 is used to assess the visual-perceptual ability of individuals aged 4.0 through 80+ years via a series of visual-perceptual tasks that do not require a motor response. Test…
Wang, Hao; Crewther, Sheila G.; Liang, Minglong; Laycock, Robin; Yu, Tao; Alexander, Bonnie; Crewther, David P.; Wang, Jian; Yin, Zhengqin
2017-01-01
Strabismic amblyopia is now acknowledged to be more than a simple loss of acuity and to involve alterations in visually driven attention, though whether this applies to both stimulus-driven and goal-directed attention has not been explored. Hence we investigated monocular threshold performance during a motion salience-driven attention task involving detection of a coherent dot motion target in one of four quadrants in adult controls and those with strabismic amblyopia. Psychophysical motion thresholds were impaired for the strabismic amblyopic eye, requiring longer inspection time and consequently slower target speed for detection compared to the fellow eye or control eyes. We compared fMRI activation and functional connectivity between four ROIs of the occipital-parieto-frontal visual attention network [primary visual cortex (V1), motion sensitive area V5, intraparietal sulcus (IPS) and frontal eye fields (FEF)], during a suprathreshold version of the motion-driven attention task, and also a simple goal-directed task, requiring voluntary saccades to targets randomly appearing along a horizontal line. Activation was compared when viewed monocularly by controls and the amblyopic and its fellow eye in strabismics. BOLD activation was weaker in IPS, FEF and V5 for both tasks when viewing through the amblyopic eye compared to viewing through the fellow eye or control participants' non-dominant eye. No difference in V1 activation was seen between the amblyopic and fellow eye, nor between the two eyes of control participants during the motion salience task, though V1 activation was significantly less through the amblyopic eye than through the fellow eye and control group non-dominant eye viewing during the voluntary saccade task. Functional correlations of ROIs within the attention network were impaired through the amblyopic eye during the motion salience task, whereas this was not the case during the voluntary saccade task. 
Specifically, FEF showed reduced functional connectivity with visual cortical nodes during the motion salience task through the amblyopic eye, despite suprathreshold detection performance. This suggests that the reduced ability of the amblyopic eye to activate the frontal components of the attention networks may help explain the aberrant control of visual attention and eye movements in amblyopes. PMID:28484381
Visual tasks and postural sway in children with and without autism spectrum disorders.
Chang, Chih-Hui; Wade, Michael G; Stoffregen, Thomas A; Hsu, Chin-Yu; Pan, Chien-Yu
2010-01-01
We investigated the influences of two different suprapostural visual tasks, visual searching and visual inspection, on the postural sway of children with and without autism spectrum disorder (ASD). Sixteen ASD children (age=8.75±1.34 years; height=130.34±11.03 cm) were recruited from a local support group. Individuals with an intellectual disability as a co-occurring condition and those with severe behavior problems that required formal intervention were excluded. Twenty-two sex- and age-matched typically developing (TD) children (age=8.93±1.39 years; height=133.47±8.21 cm) were recruited from a local public elementary school. Postural sway was recorded using a magnetic tracking system (Flock of Birds, Ascension Technologies, Inc., Burlington, VT). Results indicated that the ASD children exhibited greater sway than the TD children. Despite this difference, both TD and ASD children showed reduced sway during the search task, relative to sway during the inspection task. These findings replicate those of Stoffregen et al. (2000), Stoffregen, Giveans, et al. (2009), Stoffregen, Villard, et al. (2009) and Prado et al. (2007) and extend them to TD children as well as ASD children. Both TD and ASD children were able to functionally modulate postural sway to facilitate the performance of a task that required higher perceptual effort. Copyright © 2010 Elsevier Ltd. All rights reserved.
Dynamic visual noise reduces confidence in short-term memory for visual information.
Kemps, Eva; Andrade, Jackie
2012-05-01
Previous research has shown effects of the visual interference technique, dynamic visual noise (DVN), on visual imagery, but not on visual short-term memory, unless retention of precise visual detail is required. This study tested the prediction that DVN does also affect retention of gross visual information, specifically by reducing confidence. Participants performed a matrix pattern memory task with three retention interval interference conditions (DVN, static visual noise and no interference control) that varied from trial to trial. At recall, participants indicated whether or not they were sure of their responses. As in previous research, DVN did not impair recall accuracy or latency on the task, but it did reduce recall confidence relative to static visual noise and no interference. We conclude that DVN does distort visual representations in short-term memory, but standard coarse-grained recall measures are insensitive to these distortions.
Relation between brain activation and lexical performance.
Booth, James R; Burman, Douglas D; Meyer, Joel R; Gitelman, Darren R; Parrish, Todd B; Mesulam, M Marsel
2003-07-01
Functional magnetic resonance imaging (fMRI) was used to determine whether performance on lexical tasks was correlated with cerebral activation patterns. We found that such relationships did exist and that their anatomical distribution reflected the neurocognitive processing routes required by the task. Better performance on intramodal tasks (determining if visual words were spelled the same or if auditory words rhymed) was correlated with more activation in unimodal regions corresponding to the modality of sensory input, namely the fusiform gyrus (BA 37) for written words and the superior temporal gyrus (BA 22) for spoken words. Better performance in tasks requiring cross-modal conversions (determining if auditory words were spelled the same or if visual words rhymed), on the other hand, was correlated with more activation in posterior heteromodal regions, including the supramarginal gyrus (BA 40) and the angular gyrus (BA 39). Better performance in these cross-modal tasks was also correlated with greater activation in unimodal regions corresponding to the target modality of the conversion process (i.e., fusiform gyrus for auditory spelling and superior temporal gyrus for visual rhyming). In contrast, performance on the auditory spelling task was inversely correlated with activation in the superior temporal gyrus possibly reflecting a greater emphasis on the properties of the perceptual input rather than on the relevant transmodal conversions. Copyright 2003 Wiley-Liss, Inc.
Does scene context always facilitate retrieval of visual object representations?
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2011-04-01
An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated whether the scene context facilitates retrieval of object representations, using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many simultaneous objects appear, scene context facilitation of the retrieval of object representations (henceforth termed object-to-scene binding) occurred only when the observer was required to retain much information for a task (i.e., an intentional memory task).
Boosting pitch encoding with audiovisual interactions in congenital amusia.
Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne
2015-01-01
The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. 
These findings constitute the first step towards the perspective to exploit multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate range of unimodal performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
Object localization, discrimination, and grasping with the optic nerve visual prosthesis.
Duret, Florence; Brelén, Måten E; Lambert, Valerie; Gérard, Benoît; Delbeke, Jean; Veraart, Claude
2006-01-01
This study involved a volunteer completely blind from retinitis pigmentosa who had previously been implanted with an optic nerve visual prosthesis. The aim of this two-year study was to train the volunteer to localize a given object in nine different positions, to discriminate the object within a choice of six, and then to grasp it. In a closed-loop protocol including a head-worn video camera, the nerve was stimulated whenever a part of the processed image of the object being scrutinized matched the center of an elicitable phosphene. The accessible visual field included 109 phosphenes in a 14 degrees x 41 degrees area. Results showed that training was required to succeed in the localization and discrimination tasks, but practically no training was required for grasping the object. The volunteer was able to successfully complete all tasks after training. The volunteer systematically performed several left-right and bottom-up scanning movements during the discrimination task. Discrimination strategies included stimulation phases and no-stimulation phases of roughly similar duration. This study provides a step towards the practical use of the optic nerve visual prosthesis in current daily life.
Nieuwenstein, Mark; Wyble, Brad
2014-06-01
While studies on visual memory commonly assume that the consolidation of a visual stimulus into working memory is interrupted by a trailing mask, studies on dual-task interference suggest that the consolidation of a stimulus can continue for several hundred milliseconds after a mask. As a result, estimates of the time course of working memory consolidation differ more than an order of magnitude. Here, we contrasted these opposing views by examining if and for how long the processing of a masked display of visual stimuli can be disturbed by a trailing 2-alternative forced choice task (2-AFC; a color discrimination task or a visual or auditory parity judgment task). The results showed that the presence of the 2-AFC task produced a pronounced retroactive interference effect that dissipated across stimulus onset asynchronies of 250-1,000 ms, indicating that the processing elicited by the 2-AFC task interfered with the gradual consolidation of the earlier shown stimuli. Furthermore, this interference effect occurred regardless of whether the to-be-remembered stimuli comprised a string of letters or an unfamiliar complex visual shape, and it occurred regardless of whether these stimuli were masked. Conversely, the interference effect was reduced when the memory load for the 1st task was reduced, or when the 2nd task was a color detection task that did not require decision making. Taken together, these findings show that the formation of a durable and consciously accessible working memory trace for a briefly shown visual stimulus can be disturbed by a trailing 2-AFC task for up to several hundred milliseconds after the stimulus has been masked. By implication, the current findings challenge the common view that working memory consolidation involves an immutable central processing bottleneck, and they also make clear that consolidation does not stop when a stimulus is masked. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Maurer, Urs; Blau, Vera C.; Yoncheva, Yuliya N.; McCandliss, Bruce D.
2010-01-01
Adults produce left-lateralized N170 responses to visual words relative to control stimuli, even within tasks that do not require active reading. This specialization begins in preschoolers as a right-lateralized N170 effect. We investigated whether this developmental shift reflects an early learning phenomenon, such as attaining visual familiarity with a script, by training adults in an artificial script and measuring N170 responses before and afterward. Training enhanced the N170 response, especially over the right hemisphere. This suggests N170 sensitivity to visual familiarity with a script before reading becomes sufficiently automatic to drive left-lateralized effects in a shallow encoding task.
Biases in rhythmic sensorimotor coordination: effects of modality and intentionality.
Debats, Nienke B; Ridderikhoff, Arne; de Boer, Betteco J; Peper, C Lieke E
2013-08-01
Sensorimotor biases were examined for intentional (tracking task) and unintentional (distractor task) rhythmic coordination. The tracking task involved unimanual tracking of either an oscillating visual signal or the passive movements of the contralateral hand (proprioceptive signal). In both conditions the required coordination patterns (isodirectional and mirror-symmetric) were defined relative to the body midline and the hands were not visible. For proprioceptive tracking the two patterns did not differ in stability, whereas for visual tracking the isodirectional pattern was performed more stably than the mirror-symmetric pattern. However, when visual feedback about the unimanual hand movements was provided during visual tracking, the isodirectional pattern ceased to be dominant. Together these results indicated that the stability of the coordination patterns did not depend on the modality of the target signal per se, but on the combination of sensory signals that needed to be processed (unimodal vs. cross-modal). The distractor task entailed rhythmic unimanual movements during which a rhythmic visual or proprioceptive distractor signal had to be ignored. The observed biases were similar to those for intentional coordination, suggesting that intentionality did not affect the underlying sensorimotor processes qualitatively. Intentional tracking was characterized by active sensory pursuit, through muscle activity in the passively moved arm (proprioceptive tracking task) and rhythmic eye movements (visual tracking task). Presumably this pursuit afforded predictive information serving the coordination process.
de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.
2016-01-01
The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target visual processes specific to action and direction discrimination. In separate conditions participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action, but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms.
Gaze shifts during dual-tasking stair descent.
Miyasike-daSilva, Veronica; McIlroy, William E
2016-11-01
To investigate the role of vision in stair locomotion, young adults descended a seven-step staircase during unrestricted walking (CONTROL), and while performing a concurrent visual reaction time (RT) task displayed on a monitor. The monitor was located at either 3.5 m (HIGH) or 0.5 m (LOW) above ground level at the end of the stairway, which either restricted (HIGH) or facilitated (LOW) the view of the stairs in the lower field of view as participants walked downstairs. Downward gaze shifts (recorded with an eye tracker) and gait speed were significantly reduced in HIGH and LOW compared with CONTROL. Gaze and locomotor behaviour were not different between HIGH and LOW. However, inter-individual variability increased in HIGH, in which participants combined different response characteristics including slower walking, handrail use, downward gaze, and/or increased RTs. The fastest RTs occurred in the midsteps (non-transition steps). While gait and visual task performance were not statistically different prior to the top and bottom transition steps, gaze behaviour and RT were more variable prior to transition steps in HIGH. This study demonstrated that, in the presence of a visual task, people do not look down as often when walking downstairs and require only minimal adjustments, provided that the view of the stairs is available in the lower field of view. The middle of the stairs seems to require less from executive function, whereas visual attention appears to be required to detect the last transition via gaze shifts or peripheral vision.
Audiovisual speech perception development at varying levels of perceptual processing.
Lalonde, Kaylah; Holt, Rachael Frush
2016-04-01
This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.
Wiyor, Hanniebey D.; Ntuen, Celestine A.
2013-01-01
The purpose of this study was to investigate the effect of stereoscopic display alignment errors on visual fatigue and prefrontal cortical tissue hemodynamic responses. We collected hemodynamic data and perceptual ratings of visual fatigue while participants performed visual display tasks on an 8 ft × 6 ft NEC LT silver screen with NEC LT 245 DLP projectors. There was a statistically significant difference between subjective measures of visual fatigue before the air traffic control task (BATC) and after the air traffic control task (ATC 3) (P < 0.05). Statistically significant effects of stereoscopic alignment errors were observed on left dorsolateral prefrontal cortex oxygenated hemoglobin (l DLPFC-HbO2), left dorsolateral prefrontal cortex deoxygenated hemoglobin (l DLPFC-Hbb), and right dorsolateral prefrontal cortex deoxygenated hemoglobin (r DLPFC-Hbb) (P < 0.05). Thus, the cortical tissue oxygenation requirement in the left hemisphere indicates that the effect of visual fatigue is more pronounced in the left dorsolateral prefrontal cortex.
Closed head injury and perceptual processing in dual-task situations.
Hein, G; Schubert, T; von Cramon, D Y
2005-01-01
Using a classical psychological refractory period (PRP) paradigm, we investigated whether increased interference between dual-task input processes is one possible source of dual-task deficits in patients with closed-head injury (CHI). Patients and age-matched controls were asked to give speeded motor reactions to an auditory and a visual stimulus. The perceptual difficulty of the visual stimulus was manipulated by varying its intensity. The results of Experiment 1 showed that CHI patients suffer from increased interference between dual-task input processes, which is related to the salience of the visual stimulus. A second experiment indicated that this input interference may be specific to brain damage following CHI; it was not evident in other groups of neurological patients, such as patients with Parkinson's disease. We conclude that the non-interfering processing of input stages in dual-tasks requires cognitive control. A decline in the control of input processes should be considered as one source of dual-task deficits in CHI patients.
Improving Visual Threat Detection: Research to Validate the Threat Detection Skills Trainer
2013-08-01
Working memory capacity and the scope and control of attention.
Shipstead, Zach; Harrison, Tyler L; Engle, Randall W
2015-08-01
Complex span and visual arrays are two common measures of working memory capacity that are respectively treated as measures of attention control and storage capacity. A recent analysis of these tasks concluded that (1) complex span performance has a relatively stronger relationship to fluid intelligence and (2) this is due to the requirement that people engage control processes while performing this task. The present study examines the validity of these conclusions by examining two large data sets that include a more diverse set of visual arrays tasks and several measures of attention control. We conclude that complex span and visual arrays account for similar amounts of variance in fluid intelligence. The disparity relative to the earlier analysis is attributed to the present study involving a more complete measure of the latent ability underlying the performance of visual arrays. Moreover, we find that both types of working memory task have strong relationships to attention control. This indicates that the ability to engage attention in a controlled manner is a critical aspect of working memory capacity, regardless of the type of task that is used to measure this construct.
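Storage capacity in visual arrays tasks of the kind discussed above is conventionally estimated with Cowan's K, K = N × (H − FA). The abstract does not spell out this formula, so the following Python sketch is purely illustrative; the function name and example figures are assumptions, not values from the study.

```python
# Illustrative sketch: Cowan's K, the standard capacity estimate for
# visual arrays (change-detection) tasks. Example numbers are hypothetical.
def cowans_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
    """K = N * (H - FA): estimated number of array items held in memory."""
    return set_size * (hit_rate - false_alarm_rate)

# A participant viewing 6-item arrays with 80% hits and 20% false alarms:
print(round(cowans_k(6, 0.80, 0.20), 2))  # prints 3.6
```

Estimates computed this way typically plateau around three to four items in healthy adults, which is the latent storage ability that analyses like the one above attempt to capture.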
McMenamin, Brenton W; Marsolek, Chad J; Morseth, Brianna K; Speer, MacKenzie F; Burton, Philip C; Burgund, E Darcy
2016-06-01
Object categorization and exemplar identification place conflicting demands on the visual system, yet humans easily perform these fundamentally contradictory tasks. Previous studies suggest the existence of dissociable visual processing subsystems to accomplish the two abilities-an abstract category (AC) subsystem that operates effectively in the left hemisphere and a specific exemplar (SE) subsystem that operates effectively in the right hemisphere. This multiple subsystems theory explains a range of visual abilities, but previous studies have not explored what mechanisms exist for coordinating the function of multiple subsystems and/or resolving the conflicts that would arise between them. We collected functional MRI data while participants performed two variants of a cue-probe working memory task that required AC or SE processing. During the maintenance phase of the task, the bilateral intraparietal sulcus (IPS) exhibited hemispheric asymmetries in functional connectivity consistent with exerting proactive control over the two visual subsystems: greater connectivity to the left hemisphere during the AC task, and greater connectivity to the right hemisphere during the SE task. Moreover, probe-evoked activation revealed activity in a broad frontoparietal network (containing IPS) associated with reactive control when the two visual subsystems were in conflict, and variations in this conflict signal across trials was related to the visual similarity of the cue-probe stimulus pairs. Although many studies have confirmed the existence of multiple visual processing subsystems, this study is the first to identify the mechanisms responsible for coordinating their operations.
Feldmann-Wüstefeld, Tobias; Uengoer, Metin; Schubö, Anna
2015-11-01
Besides visual salience and observers' current intention, prior learning experience may influence the deployment of visual attention. Associative learning models postulate that observers pay more attention to stimuli previously experienced as reliable predictors of specific outcomes. To investigate the impact of learning experience on the deployment of attention, we combined an associative learning task with a visual search task and measured event-related potentials of the EEG as neural markers of attention deployment. In the learning task, participants categorized stimuli varying in color/shape with only one dimension being predictive of category membership. In the search task, participants searched for a shape target while disregarding irrelevant color distractors. Behavioral results showed that color distractors impaired performance to a greater degree when color rather than shape was predictive in the learning task. Neurophysiological results showed that the amplified distraction was due to differential attention deployment (N2pc). Experiment 2 showed that when color was predictive for learning, color distractors captured more attention in the search task (ND component) and more suppression of the color distractor was required (PD component). The present results thus demonstrate that priority in visual attention is biased toward predictive stimuli, which allows learning experience to shape selection. We also show that learning experience can overrule strong top-down control (blocked tasks, Experiment 3) and that learning experience has a longer-term effect on attention deployment (tasks on two successive days, Experiment 4).
Effects of regular aerobic exercise on visual perceptual learning.
Connell, Charlotte J W; Thompson, Benjamin; Green, Hayden; Sullivan, Rachel K; Gant, Nicholas
2017-12-02
This study investigated the influence of five days of moderate-intensity aerobic exercise on the acquisition and consolidation of visual perceptual learning using a motion direction discrimination (MDD) task. The timing of exercise relative to learning was manipulated by administering exercise either before or after perceptual training. Within a matched-subjects design, twenty-seven healthy participants (n = 9 per group) completed five consecutive days of perceptual training on an MDD task under one of three interventions: no exercise, exercise before the MDD task, or exercise after the MDD task. MDD task accuracy improved in all groups over the five-day period, but there was a trend for impaired learning when exercise was performed before visual perceptual training. MDD task accuracy (mean ± SD) increased in exercise before by 4.5 ± 6.5%; exercise after by 11.8 ± 6.4%; and no exercise by 11.3 ± 7.2%. All intervention groups displayed similar MDD threshold reductions for the trained and untrained motion axes after training. These findings suggest that moderate daily exercise does not enhance the rate of visual perceptual learning for an MDD task or the transfer of learning to an untrained motion axis. Furthermore, exercise performed immediately prior to a visual perceptual learning task may impair learning. Further research with larger groups is required in order to better understand these effects.
Effects of age and auditory and visual dual tasks on closed-road driving performance.
Chaparro, Alex; Wood, Joanne M; Carberry, Trent
2005-08-01
This study investigated how driving performance of young and old participants is affected by visual and auditory secondary tasks on a closed driving course. Twenty-eight participants comprising two age groups (younger, mean age = 27.3 years; older, mean age = 69.2 years) drove around a 5.1-km closed-road circuit under both single and dual task conditions. Measures of driving performance included detection and identification of road signs, detection and avoidance of large low-contrast road hazards, gap judgment, lane keeping, and time to complete the course. The dual task required participants to verbally report the sums of pairs of single-digit numbers presented through either a computer speaker (auditorily) or a dashboard-mounted monitor (visually) while driving. Participants also completed a vision and cognitive screening battery, including LogMAR visual acuity, Pelli-Robson letter contrast sensitivity, the Trails test, and the Digit Symbol Substitution (DSS) test. Drivers reported significantly fewer signs, hit more road hazards, misjudged more gaps, and increased their time to complete the course under the dual task (visual and auditory) conditions compared with the single task condition. The older participants also reported significantly fewer road signs and drove significantly more slowly than the younger participants, and this was exacerbated for the visual dual task condition. The results of the regression analysis revealed that cognitive aging (measured by the DSS and Trails test) rather than chronologic age was a better predictor of the declines seen in driving performance under dual task conditions. An overall z score was calculated, which took into account both driving and the secondary task (summing) performance under the two dual task conditions. Performance was significantly worse for the auditory dual task compared with the visual dual task, and the older participants performed significantly worse than the young subjects. 
These findings demonstrate that multitasking had a significant detrimental impact on driving performance and that cognitive aging was the best predictor of the declines seen in driving performance under dual task conditions. These results have implications for use of mobile phones or in-vehicle navigational devices while driving, especially for older adults.
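An "overall z score" combining driving and secondary-task performance, as described above, is typically built by standardizing each measure across participants and then averaging. The abstract does not give the exact computation, so this Python sketch is a hedged illustration; the numbers and variable names are invented, not taken from the study.

```python
# Hypothetical sketch of a composite dual-task score: standardize each
# measure across participants, then average the standardized scores.
from statistics import mean, stdev

def z_scores(values):
    """Standardize a list of scores to mean 0, SD 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def composite(driving_scores, summing_scores):
    # Average each participant's standardized driving and secondary-task scores.
    return [mean(pair) for pair in zip(z_scores(driving_scores),
                                       z_scores(summing_scores))]

drive = [12.0, 9.5, 14.2, 10.1]   # e.g., road signs reported under dual task
sums = [18.0, 15.0, 20.0, 13.0]   # e.g., correct additions in the summing task
print(composite(drive, sums))
```

Combining the two measures this way penalizes participants who protect one task at the expense of the other, which is why such composites are used to compare dual-task conditions fairly.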
Brébion, Gildas; David, Anthony S; Pilowsky, Lyn S; Jones, Hugh
2004-11-01
Verbal and visual recognition tasks were administered to 40 patients with schizophrenia and 40 healthy comparison subjects. The verbal recognition task consisted of discriminating between 16 target words and 16 new words. The visual recognition task consisted of discriminating between 16 target pictures (8 black-and-white and 8 color) and 16 new pictures (8 black-and-white and 8 color). Visual recognition was followed by a spatial context discrimination task in which subjects were required to remember the spatial location of the target pictures at encoding. Results showed that the recognition deficit in patients was similar for verbal and visual material. In both the schizophrenic and healthy groups, men, but not women, obtained better recognition scores for the colored than for the black-and-white pictures. However, men and women benefited similarly from color in reducing spatial context discrimination errors. Patients showed a significant deficit in remembering the spatial location of the pictures, independently of accuracy in remembering the pictures themselves. These data suggest that patients are impaired in the amount of visual information that they can encode. With regard to the perceptual attributes of the stimuli, memory for spatial information appears to be affected, but not processing of color information.
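Old/new recognition tasks like these are often scored with the signal-detection sensitivity index d′, which separates discriminability from response bias. The abstract reports only recognition scores, so this Python sketch is an illustrative assumption rather than the authors' analysis; the example rates are invented.

```python
# Hypothetical sketch: d' (d-prime), the signal-detection index commonly
# used to score old/new recognition tasks. Example rates are invented.
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """d' = z(H) - z(FA): discriminability of targets from new items."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A subject recognizing 14/16 targets (H = .875) with 2/16 false alarms
# (FA = .125) would score a d' of about 2.3.
```

In practice, hit and false-alarm rates of exactly 0 or 1 are first adjusted (e.g., with a 1/(2N) correction), since the inverse normal CDF is undefined at those extremes.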
Factors influencing hand/eye synchronicity in the computer age.
Grant, A H
1992-09-01
In using a computer, the relation of vision to hand/finger actuated keyboard usage in performing fine motor-coordinated functions is influenced by the physical location, size, and collective placement of the keys. Traditional nonprehensile flat/rectangular keyboard applications usually require a high and nearly constant level of visual attention. Biometrically shaped keyboards would allow for prehensile hand-posturing, thus affording better tactile familiarity with the keys, requiring less intense and less constant level of visual attention to the task, and providing a greater measure of freedom from having to visualize the key(s). Workpace and related physiological changes, aging, onset of monocularization (intermittent lapsing of binocularity for near vision) that accompanies presbyopia, tool colors, and background contrast are factors affecting constancy of visual attention to task performance. Capitas extension, excessive excyclotorsion, and repetitive strain injuries (such as carpal tunnel syndrome) are common and debilitating concomitants to computer usage. These problems can be remedied by improved keyboard design. The salutary role of mnemonics in minimizing visual dependency is discussed.
Telgen, Sebastian; Parvin, Darius; Diedrichsen, Jörn
2014-10-08
Motor learning tasks are often classified into adaptation tasks, which involve the recalibration of an existing control policy (the mapping that determines both feedforward and feedback commands), and skill-learning tasks, which require the acquisition of new control policies. We show here that this distinction also applies to two different visuomotor transformations during reaching in humans: mirror-reversal (left-right reversal over a mid-sagittal axis) of visual feedback versus rotation of visual feedback around the movement origin. During mirror-reversal learning, correct movement initiation (feedforward commands) and online corrections (feedback responses) were only generated at longer latencies. The earliest responses were directed into a nonmirrored direction, even after two training sessions. In contrast, for visual rotation learning, no dependency of directional error on reaction time emerged, and fast feedback responses to visual displacements of the cursor were immediately adapted. These results suggest that the motor system acquires a new control policy for mirror reversal, which initially requires extra processing time, while it recalibrates an existing control policy for visual rotations, exploiting established fast computational processes. Importantly, memory for visual rotation decayed between sessions, whereas memory for mirror reversals showed offline gains, leading to better performance at the beginning of the second session than at the end of the first. With shifts in the time-accuracy tradeoff and offline gains, mirror-reversal learning shares common features with other skill-learning tasks. We suggest that different neuronal mechanisms underlie the recalibration of an existing versus the acquisition of a new control policy, and that offline gains between sessions are a characteristic of the latter.
Leikin, Mark; Waisman, Ilana; Shaul, Shelley; Leikin, Roza
2014-03-01
This paper presents a small part of a larger interdisciplinary study that investigates the brain activity (using event-related potential methodology) of male adolescents when solving mathematical problems of different types. The study design links mathematics education research with neurocognitive studies. In this paper we performed a comparative analysis of brain activity associated with the translation from visual to symbolic representations of mathematical objects in algebra and geometry. Algebraic tasks require translation from a graphical to a symbolic representation of a function, whereas tasks in geometry require translation from a drawing of a geometric figure to a symbolic representation of its property. The findings demonstrate that the electrical activity associated with the performance of geometrical tasks is stronger than that associated with solving algebraic tasks. Additionally, we found different scalp topography of the brain activity associated with algebraic and geometric tasks. Based on these results, we argue that problem solving in algebra and geometry is associated with different patterns of brain activity.
ERIC Educational Resources Information Center
Bogon, Johanna; Finke, Kathrin; Schulte-Körne, Gerd; Müller, Hermann J.; Schneider, Werner X.; Stenneken, Prisca
2014-01-01
People with developmental dyslexia (DD) have been shown to be impaired in tasks that require the processing of multiple visual elements in parallel. It has been suggested that this deficit originates from disturbed visual attentional functions. The parameter-based assessment of visual attention based on Bundesen's (1990) theory of visual…
Effects of total sleep deprivation on divided attention performance.
Chua, Eric Chern-Pin; Fang, Eric; Gooley, Joshua J
2017-01-01
Dividing attention across two tasks performed simultaneously usually results in impaired performance on one or both tasks. Most studies have found no difference in the dual-task cost of dividing attention in rested and sleep-deprived states. We hypothesized that, for a divided attention task that is highly cognitively-demanding, performance would show greater impairment during exposure to sleep deprivation. A group of 30 healthy males aged 21-30 years was exposed to 40 h of continuous wakefulness in a laboratory setting. Every 2 h, subjects completed a divided attention task comprising 3 blocks in which an auditory Go/No-Go task was 1) performed alone (single task); 2) performed simultaneously with a visual Go/No-Go task (dual task); and 3) performed simultaneously with both a visual Go/No-Go task and a visually-guided motor tracking task (triple task). Performance on all tasks showed substantial deterioration during exposure to sleep deprivation. A significant interaction was observed between task load and time since wake on auditory Go/No-Go task performance, with greater impairment in response times and accuracy during extended wakefulness. Our results suggest that the ability to divide attention between multiple tasks is impaired during exposure to sleep deprivation. These findings have potential implications for occupations that require multi-tasking combined with long work hours and exposure to sleep loss.
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed with differing effectiveness depending on the task dimension, such that processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Development of a computerized visual search test.
Reid, Denise; Babani, Harsha; Jon, Eugenia
2009-09-01
Visual attention and visual search are features of visual perception, essential for attending to and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. Development information, including the format of the test, is described. The test was designed to provide an alternative to existing cancellation tests. Data from two pilot studies that examined aspects of the test's validity are reported. To date, our assessment of the test shows that it discriminates between healthy and head-injured persons. More research and development work is required to examine task performance changes in relation to task complexity. It is suggested that the conceptual design of the test is worthy of further investigation.
Behavioral indicators of pilot workload
NASA Technical Reports Server (NTRS)
Galanter, E.; Hochberg, J.
1983-01-01
Using a technique that requires a subject to consult an imagined or remembered spatial array while performing a visual task, a reliable reduction in the number of directed eye movements that are available for the acquisition of visual information is shown.
Division of attention as a function of the number of steps, visual shifts, and memory load
NASA Technical Reports Server (NTRS)
Chechile, R. A.; Butler, K.; Gutowski, W.; Palmer, E. A.
1986-01-01
The effects on divided attention of visual shifts and long-term memory retrieval during a monitoring task are considered. A concurrent vigilance task was standardized under all experimental conditions. The results show that subjects can perform nearly perfectly on all of the time-shared tasks if long-term memory retrieval is not required for monitoring. With the requirement of memory retrieval, however, there was a large decrease in accuracy for all of the time-shared activities. It was concluded that the attentional demand of long-term memory retrieval is appreciable (even for a well-learned motor sequence), and thus memory retrieval results in a sizable reduction in the capability of subjects to divide their attention. A selected bibliography on the divided attention literature is provided.
"Hot" Facilitation of "Cool" Processing: Emotional Distraction Can Enhance Priming of Visual Search
ERIC Educational Resources Information Center
Kristjansson, Arni; Oladottir, Berglind; Most, Steven B.
2013-01-01
Emotional stimuli often capture attention and disrupt effortful cognitive processing. However, cognitive processes vary in the degree to which they require effort. We investigated the impact of emotional pictures on visual search and on automatic priming of search. Observers performed visual search after task-irrelevant neutral or emotionally…
Task modulates functional connectivity networks in free viewing behavior.
Seidkhani, Hossein; Nikolaev, Andrey R; Meghanathan, Radha Nila; Pezeshk, Hamid; Masoudi-Nejad, Ali; van Leeuwen, Cees
2017-10-01
In free visual exploration, each eye movement is immediately followed by dynamic reconfiguration of brain functional connectivity. We studied the task-dependency of this process in a combined visual search-change detection experiment. Participants viewed two nearly identical displays in succession. On the first viewing, they had to find and remember multiple targets among distractors, so the ongoing task involved memory encoding. On the second, they had to determine whether a target had changed in orientation, so the ongoing task involved memory retrieval. From multichannel EEG recorded during 200 ms intervals time-locked to fixation onsets, we estimated functional connectivity using the weighted phase lag index at theta, alpha, and beta band frequencies, and derived global and local measures of the functional connectivity graphs. We found differences between the two memory task conditions for several network measures, such as mean path length, radius, diameter, closeness, and eccentricity, mainly in the alpha band. Both the local and the global measures indicated that encoding involved a more segregated mode of operation than retrieval. These differences arose immediately after fixation onset and persisted for the entire duration of the lambda complex, an evoked potential commonly associated with early visual perception. We concluded that encoding and retrieval differentially shape network configurations involved in early visual perception, affecting the way visual input is processed at each fixation. These findings demonstrate that task requirements dynamically control the functional connectivity networks involved in early visual perception. Copyright © 2017 Elsevier Inc. All rights reserved.
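As a rough, hypothetical sketch (not the authors' analysis code), the weighted phase lag index for one channel pair at one frequency can be computed from per-epoch cross-spectra; the inputs below are invented:

```python
import cmath

def wpli(x, y):
    """Weighted phase lag index for one channel pair at one frequency.

    x, y: per-epoch complex Fourier coefficients of the two channels.
    Returns a value in [0, 1]; values near 1 indicate a consistent
    non-zero phase lag between the channels across epochs.
    """
    # Imaginary part of the cross-spectrum, one value per epoch
    imag = [(xi * yi.conjugate()).imag for xi, yi in zip(x, y)]
    denom = sum(abs(v) for v in imag)
    return abs(sum(imag)) / denom if denom else 0.0

# A consistent 90-degree phase lag across epochs yields maximal WPLI
x = [cmath.exp(0j)] * 10
y = [cmath.exp(1j * cmath.pi / 2)] * 10
print(wpli(x, y))  # 1.0
```

In a pipeline of the kind described, such connectivity values would be estimated per frequency band and channel pair, and the resulting graphs summarized by measures such as path length and eccentricity.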
Reimer, Christina B; Schubert, Torsten
2017-09-15
Both response selection and visual attention are limited in capacity. According to the central bottleneck model, the response selection processes of two tasks in a dual-task situation are performed sequentially. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color and form), which results in a serial search process. Search time increases as items are added to the search display (i.e., set size effect). When the search display is masked, visual attention deployment is restricted to a brief period of time and target detection decreases as a function of set size. Here, we investigated whether response selection and visual attention (i.e., feature binding) rely on a common or on distinct capacity limitations. In four dual-task experiments, participants completed an auditory Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (Stimulus Onset Asynchrony, SOA). In Experiment 1, Task 1 was a two-choice discrimination task and the conjunction search display was not masked. In Experiment 2, the response selection difficulty in Task 1 was increased to a four-choice discrimination and the search task was the same as in Experiment 1. We applied the locus-of-slack method in both experiments to analyze conjunction search time, that is, we compared the set size effects across SOAs. Similar set size effects across SOAs (i.e., additive effects of SOA and set size) would indicate sequential processing of response selection and visual attention. However, a significantly smaller set size effect at short SOA compared to long SOA (i.e., underadditive interaction of SOA and set size) would indicate parallel processing of response selection and visual attention. In both experiments, we found underadditive interactions of SOA and set size. In Experiments 3 and 4, the conjunction search display in Task 2 was masked. 
Task 1 was the same as in Experiments 1 and 2, respectively. In both experiments, the d' analysis revealed that response selection did not affect target detection. Overall, Experiments 1-4 indicated that neither the response selection difficulty in the auditory Task 1 (i.e., two-choice vs. four-choice) nor the type of presentation of the search display in Task 2 (i.e., not masked vs. masked) impaired parallel processing of response selection and conjunction search. We concluded that in general, response selection and visual attention (i.e., feature binding) rely on distinct capacity limitations.
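The d' measure referred to above is the standard signal detection index; a minimal sketch using only Python's standard library, with invented hit and false-alarm counts and a log-linear correction that is one common (assumed, not the authors') choice for avoiding infinite z-scores:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each count) keeps the rates
    away from 0 and 1, where the inverse normal CDF is infinite.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one SOA / set-size cell
print(round(d_prime(40, 10, 5, 45), 2))
```

Comparing d' across SOA and set size in this way is what allows target detection to be assessed independently of response speed.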
The sensory strength of voluntary visual imagery predicts visual working memory capacity.
Keogh, Rebecca; Pearson, Joel
2014-10-09
How much we can actively hold in mind is severely limited and differs greatly from one person to the next. Why some individuals have greater capacities than others is largely unknown. Here, we investigated why such large variations in visual working memory (VWM) capacity might occur, by examining the relationship between visual working memory and visual mental imagery. To assess visual working memory capacity participants were required to remember the orientation of a number of Gabor patches and make subsequent judgments about relative changes in orientation. The sensory strength of voluntary imagery was measured using a previously documented binocular rivalry paradigm. Participants with greater imagery strength also had greater visual working memory capacity. However, they were no better on a verbal number working memory task. Introducing a uniform luminous background during the retention interval of the visual working memory task reduced memory capacity, but only for those with strong imagery. Likewise, for the good imagers increasing background luminance during imagery generation reduced its effect on subsequent binocular rivalry. Luminance increases did not affect any of the subgroups on the verbal number working memory task. Together, these results suggest that luminance was disrupting sensory mechanisms common to both visual working memory and imagery, and not a general working memory system. The disruptive selectivity of background luminance suggests that good imagers, unlike moderate or poor imagers, may use imagery as a mnemonic strategy to perform the visual working memory task. © 2014 ARVO.
Trifocal Tensor-Based Adaptive Visual Trajectory Tracking Control of Mobile Robots.
Chen, Jian; Jia, Bingxi; Zhang, Kaixiang
2017-11-01
In this paper, a trifocal tensor-based approach is proposed for the visual trajectory tracking task of a nonholonomic mobile robot equipped with a roughly installed monocular camera. The desired trajectory is expressed by a set of prerecorded images, and the robot is regulated to track the desired trajectory using visual feedback. The trifocal tensor is exploited to obtain the orientation and scaled position information used in the control system, and it works for general scenes owing to the generality of the trifocal tensor. In previous works, the start, current, and final images were required to share enough visual information to estimate the trifocal tensor. However, this requirement can be easily violated for perspective cameras with limited field of view. In this paper, a key frame strategy is proposed to loosen this requirement, extending the workspace of the visual servo system. Considering the unknown depth and extrinsic parameters (installing position of the camera), an adaptive controller is developed based on Lyapunov methods. The proposed control strategy works for almost all practical circumstances, including both trajectory tracking and pose regulation tasks. Simulations were performed on the virtual experimentation platform (V-REP) to evaluate the effectiveness of the proposed approach.
Visual skills in airport-security screening.
McCarley, Jason S; Kramer, Arthur F; Wickens, Christopher D; Vidoni, Eric D; Boot, Walter R
2004-05-01
An experiment examined visual performance in a simulated luggage-screening task. Observers participated in five sessions of a task requiring them to search for knives hidden in x-ray images of cluttered bags. Sensitivity and response times improved reliably as a result of practice. Eye movement data revealed that sensitivity increases were produced entirely by changes in observers' ability to recognize target objects, and not by changes in the effectiveness of visual scanning. Moreover, recognition skills were in part stimulus-specific, such that performance was degraded by the introduction of unfamiliar target objects. Implications for screener training are discussed.
Botly, Leigh C P; De Rosa, Eve
2012-10-01
The visual search task established the feature integration theory of attention in humans and measures visuospatial attentional contributions to feature binding. We recently demonstrated that the neuromodulator acetylcholine (ACh), from the nucleus basalis magnocellularis (NBM), supports the attentional processes required for feature binding using a rat digging-based task. Additional research has demonstrated cholinergic contributions from the NBM to visuospatial attention in rats. Here, we combined these lines of evidence and employed visual search in rats to examine whether cortical cholinergic input supports visuospatial attention specifically for feature binding. We trained 18 male Long-Evans rats to perform visual search using touch screen-equipped operant chambers. Sessions comprised Feature Search (no feature binding required) and Conjunctive Search (feature binding required) trials using multiple stimulus set sizes. Following acquisition of visual search, 8 rats received bilateral NBM lesions using 192 IgG-saporin to selectively reduce cholinergic afferentation of the neocortex, which we hypothesized would selectively disrupt the visuospatial attentional processes needed for efficient conjunctive visual search. As expected, relative to sham-lesioned rats, ACh-NBM-lesioned rats took significantly longer to locate the target stimulus on Conjunctive Search, but not Feature Search trials, thus demonstrating that cholinergic contributions to visuospatial attention are important for feature binding in rats.
The Role of Direct and Visual Force Feedback in Suturing Using a 7-DOF Dual-Arm Teleoperated System.
Talasaz, Ali; Trejos, Ana Luisa; Patel, Rajni V
2017-01-01
The lack of haptic feedback in robotics-assisted surgery can result in tissue damage or accidental tool-tissue hits. This paper focuses on exploring the effect of haptic feedback via direct force reflection and visual presentation of force magnitudes on performance during suturing in robotics-assisted minimally invasive surgery (RAMIS). For this purpose, a haptics-enabled dual-arm master-slave teleoperation system capable of measuring tool-tissue interaction forces in all seven Degrees-of-Freedom (DOFs) was used. Two suturing tasks, tissue puncturing and knot-tightening, were chosen to assess user skills when suturing on phantom tissue. Sixteen subjects participated in the trials and their performance was evaluated from various points of view: force consistency, number of accidental hits with tissue, amount of tissue damage, quality of the suture knot, and the time required to accomplish the task. According to the results, visual force feedback was not very useful during the tissue puncturing task as different users needed different amounts of force depending on the penetration of the needle into the tissue. Direct force feedback, however, was more useful for this task to apply less force and to minimize the amount of damage to the tissue. Statistical results also reveal that both visual and direct force feedback were required for effective knot tightening: direct force feedback could reduce the number of accidental hits with the tissue and also the amount of tissue damage, while visual force feedback could help to securely tighten the suture knots and maintain force consistency among different trials/users. These results provide evidence of the importance of 7-DOF force reflection when performing complex tasks in a RAMIS setting.
Global processing in amblyopia: a review
Hamm, Lisa M.; Black, Joanna; Dai, Shuan; Thompson, Benjamin
2014-01-01
Amblyopia is a neurodevelopmental disorder of the visual system that is associated with disrupted binocular vision during early childhood. There is evidence that the effects of amblyopia extend beyond the primary visual cortex to regions of the dorsal and ventral extra-striate visual cortex involved in visual integration. Here, we review the current literature on global processing deficits in observers with either strabismic, anisometropic, or deprivation amblyopia. A range of global processing tasks have been used to investigate the extent of the cortical deficit in amblyopia including: global motion perception, global form perception, face perception, and biological motion. These tasks appear to be differentially affected by amblyopia. In general, observers with unilateral amblyopia appear to show deficits for local spatial processing and global tasks that require the segregation of signal from noise. In bilateral cases, the global processing deficits are exaggerated, and appear to extend to specialized perceptual systems such as those involved in face processing. PMID:24987383
Brisson, Benoit; Leblanc, Emilie; Jolicoeur, Pierre
2009-02-01
It has recently been demonstrated that a lateralized distractor that matches the individual's top-down control settings elicits an N2pc wave, an electrophysiological index of the focus of visual-spatial attention, indicating that contingent capture has a visual-spatial locus. Here, we investigated whether contingent capture required capacity-limited central resources by incorporating a contingent capture task as the second task of a psychological refractory period (PRP) dual-task paradigm. The N2pc was used to monitor where observers were attending while they performed concurrent central processing known to cause the PRP effect. The N2pc elicited by the lateralized distractor that matched the top-down control settings was attenuated in high concurrent central load conditions, indicating that although involuntary, the deployment of visual-spatial attention occurring during contingent capture depends on capacity-limited central resources.
Attentive Tracking Disrupts Feature Binding in Visual Working Memory
Fougnie, Daryl; Marois, René
2009-01-01
One of the most influential theories in visual cognition proposes that attention is necessary to bind different visual features into coherent object percepts (Treisman & Gelade, 1980). While considerable evidence supports a role for attention in perceptual feature binding, whether attention plays a similar function in visual working memory (VWM) remains controversial. To test the attentional requirements of VWM feature binding, here we gave participants an attention-demanding multiple object tracking task during the retention interval of a VWM task. Results show that the tracking task disrupted memory for color-shape conjunctions above and beyond any impairment to working memory for object features, and that this impairment was larger when the VWM stimuli were presented at different spatial locations. These results demonstrate that the role of visuospatial attention in feature binding is not unique to perception, but extends to the working memory of these perceptual representations as well. PMID:19609460
Muthukumaraswamy, Suresh D.; Hibbs, Carina S.; Shapiro, Kimron L.; Bracewell, R. Martyn; Singh, Krish D.; Linden, David E. J.
2011-01-01
The mechanism by which distinct subprocesses in the brain are coordinated is a central conundrum of systems neuroscience. The parietal lobe is thought to play a key role in visual feature integration, and oscillatory activity in the gamma frequency range has been associated with perception of coherent objects and other tasks requiring neural coordination. Here, we examined the neural correlates of integrating mental representations in working memory and hypothesized that parietal gamma activity would be related to the success of cognitive coordination. Working memory is a classic example of a cognitive operation that requires the coordinated processing of different types of information and the contribution of multiple cognitive domains. Using magnetoencephalography (MEG), we report parietal activity in the high gamma (80–100 Hz) range during manipulation of visual and spatial information (colors and angles) in working memory. This parietal gamma activity was significantly higher during manipulation of visual-spatial conjunctions compared with single features. Furthermore, gamma activity correlated with successful performance during the conjunction task but not during the component tasks. Cortical gamma activity in parietal cortex may therefore play a role in cognitive coordination. PMID:21940605
Minimum visual requirements in different occupations in Finland.
Aine, E
1984-01-01
In Finland, employers can individually set the minimum visual requirements for their personnel in almost every occupation. In transportation, the police, and national defence, proper eyesight is regarded as so important that strict visual requirements have been fixed by the Government. The regulations are often stricter when a person is accepted into an occupation than later during employment. The minimum requirements are mostly stated for visual acuity, colour perception, and visual fields. In some occupations the regulations also concern the refractive error of the eyes and possible eye diseases. In aviation, the regulations have been stated by the International Civil Aviation Organization (ICAO). The minimum visual requirements for a driving license in highway traffic are classed according to the type of motor vehicle. In railways, maritime commerce, and national defence, the task of the worker determines the specific regulations. A policeman must have a distant visual acuity of 0.5 without eyeglasses in both eyes and nearly normal colour perception when starting the training course.
Visual attention modulates brain activation to angry voices.
Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas
2011-06-29
In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.
Zhou, Zhe Charles; Yu, Chunxiu; Sellers, Kristin K.; Fröhlich, Flavio
2016-01-01
Visual discrimination requires sensory processing followed by a perceptual decision. Despite a growing understanding of visual areas in this behavior, it is unclear what role top-down signals from prefrontal cortex play, in particular as a function of perceptual difficulty. To address this gap, we investigated how neurons in dorso-lateral frontal cortex (dl-FC) of freely-moving ferrets encode task variables in a two-alternative forced choice visual discrimination task with high- and low-contrast visual input. About two-thirds of all recorded neurons in dl-FC were modulated by at least one of the two task variables, task difficulty and target location. More neurons in dl-FC preferred the hard trials; no such preference bias was found for target location. In individual neurons, this preference for specific task types was limited to brief epochs. Finally, optogenetic stimulation confirmed the functional role of the activity in dl-FC before target touch; suppression of activity in pyramidal neurons with the ArchT silencing opsin resulted in a decrease in reaction time to touch the target but not to retrieve reward. In conclusion, dl-FC activity is differentially recruited for high perceptual difficulty in the freely-moving ferret and the resulting signal may provide top-down behavioral inhibition. PMID:27025995
Frequency modulation of neural oscillations according to visual task demands.
Wutz, Andreas; Melcher, David; Samaha, Jason
2018-02-06
Temporal integration in visual perception is thought to occur within cycles of occipital alpha-band (8-12 Hz) oscillations. Successive stimuli may be integrated when they fall within the same alpha cycle and segregated for different alpha cycles. Consequently, the speed of alpha oscillations correlates with the temporal resolution of perception, such that lower alpha frequencies provide longer time windows for perceptual integration and higher alpha frequencies correspond to faster sampling and segregation. Can the brain's rhythmic activity be dynamically controlled to adjust its processing speed according to different visual task demands? We recorded magnetoencephalography (MEG) while participants switched between task instructions for temporal integration and segregation, holding stimuli and task difficulty constant. We found that the peak frequency of alpha oscillations decreased when visual task demands required temporal integration compared with segregation. Alpha frequency was strategically modulated immediately before and during stimulus processing, suggesting a preparatory top-down source of modulation. Its neural generators were located in occipital and inferotemporal cortex. The frequency modulation was specific to alpha oscillations and did not occur in the delta (1-3 Hz), theta (3-7 Hz), beta (15-30 Hz), or gamma (30-50 Hz) frequency range. These results show that alpha frequency is under top-down control to increase or decrease the temporal resolution of visual perception.
Louveton, N; McCall, R; Koenig, V; Avanesov, T; Engel, T
2016-05-01
Innovative in-car applications provided on smartphones can deliver real-time alternative mobility choices and subsequently generate visual-manual demand. Prior studies have found that multi-touch gestures such as kinetic scrolling are problematic in this respect. In this study we evaluate three prototype tasks which can be found in common mobile interaction use-cases. In a repeated-measures design, 29 participants interacted with the prototypes in a car-following task within a driving simulator environment. Task completion, driving performance, and eye gaze were analysed. We found that the slider widget used in the filtering task was too demanding and led to poor performance, while kinetic scrolling generated a comparable amount of visual distraction despite requiring a lower degree of finger-pointing accuracy. We discuss how to improve continuous list browsing in a dual-task context. Copyright © 2016 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Computer system evolution requirements for autonomous checkout of exploration vehicles
NASA Technical Reports Server (NTRS)
Davis, Tom; Sklar, Mike
1991-01-01
This study, now in its third year, has had the overall objective and challenge of determining the needed hooks and scars in the initial Space Station Freedom (SSF) system to assure that on-orbit assembly and refurbishment of lunar and Mars spacecraft can be accomplished with the maximum use of automation. In this study, automation is all-encompassing and includes physical tasks such as parts mating, tool operation, and human visual inspection, as well as non-physical tasks such as monitoring and diagnosis, planning and scheduling, and autonomous visual inspection. Potential tasks for automation include both extravehicular activity (EVA) and intravehicular activity (IVA) events. A number of specific techniques and tools have been developed to determine the ideal tasks to be automated, and the resulting timelines, changes in labor requirements, and resources required. The Mars/Phobos exploratory mission developed in FY89, and the Lunar Assembly/Refurbishment mission developed in FY90 and depicted in the 90 Day Study as Option 5, have been analyzed in detail in recent years. The complete methodology and results are presented in the FY89 and FY90 final reports.
Krummenacher, Joseph; Müller, Hermann J; Zehetleitner, Michael; Geyer, Thomas
2009-03-01
Two experiments compared reaction times (RTs) in visual search for singleton feature targets defined, variably across trials, in either the color or the orientation dimension. Experiment 1 required observers to simply discern target presence versus absence (simple-detection task); Experiment 2 required them to respond to a detection-irrelevant form attribute of the target (compound-search task). Experiment 1 revealed a marked dimensional intertrial effect of 34 ms for a target defined in a changed versus a repeated dimension, and an intertrial target distance effect, with a 4-ms increase in RTs (per unit of distance) as the separation of the current relative to the preceding target increased. Conversely, in Experiment 2, the dimension change effect was markedly reduced (11 ms), while the intertrial target distance effect was markedly increased (11 ms per unit of distance). The results suggest that dimension change/repetition effects are modulated by the amount of attentional focusing required by the task, with space-based attention altering the integration of dimension-specific feature contrast signals at the level of the overall-saliency map.
Cognitive Load in Voice Therapy Carry-Over Exercises.
Iwarsson, Jenny; Morris, David Jackson; Balling, Laura Winther
2017-01-01
The cognitive load generated by online speech production may vary with the nature of the speech task. This article examines 3 speech tasks used in voice therapy carry-over exercises, in which a patient is required to adopt and automatize new voice behaviors, ultimately in daily spontaneous communication. Twelve subjects produced speech in 3 conditions: rote speech (weekdays), sentences in a set form, and semispontaneous speech. Subjects simultaneously performed a secondary visual discrimination task for which response times were measured. On completion of each speech task, subjects rated their experience on a questionnaire. Response times from the secondary, visual task were found to be shortest for the rote speech, longer for the semispontaneous speech, and longest for the sentences within the set framework. Principal components derived from the subjective ratings were found to be linked to response times on the secondary visual task. Acoustic measures reflecting fundamental frequency distribution and vocal fold compression varied across the speech tasks. The results indicate that consideration should be given to the selection of speech tasks during the process leading to automation of revised speech behavior and that self-reports may be a reliable index of cognitive load.
Coherent visualization of spatial data adapted to roles, tasks, and hardware
NASA Astrophysics Data System (ADS)
Wagner, Boris; Peinsipp-Byma, Elisabeth
2012-06-01
Modern crisis management requires users with different roles and computer environments to deal with a high volume of varied data from different sources. For this purpose, Fraunhofer IOSB has developed a geographic information system (GIS) which supports the user depending on the available data and the task to be solved. The system provides merging and visualization of spatial data from various civilian and military sources. It supports the most common spatial data standards (OGC, STANAG) as well as some proprietary interfaces, regardless of whether these are file-based or database-based. Visualization rules are set via generic Styled Layer Descriptors (SLDs), an Open Geospatial Consortium (OGC) standard. SLDs specify which data are shown, when, and how. The defined SLDs consider the users' roles and task requirements. In addition, different displays can be used, and the visualization adapts to the individual resolution of each display, so that excessively high or low information density is avoided. Our system also enables users with different roles to work together simultaneously using the same database. Every user is provided with appropriate and coherent spatial data depending on his current task. The refined spatial data are then served via the OGC services Web Map Service (WMS: server-side rendered raster maps) or Web Map Tile Service (WMTS: pre-rendered and cached raster maps).
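As a rough illustration of how a client would request role-specific styling from such a server, the sketch below composes a WMS 1.3.0 GetMap URL that references an external SLD document. The endpoint, layer name, and style URL are placeholders, not details of the IOSB system:

```python
# Hedged sketch: building an OGC WMS 1.3.0 GetMap request whose SLD
# parameter points the server at an external styling document.
from urllib.parse import urlencode

def getmap_url(endpoint, layer, bbox, sld_url, size=(800, 600)):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        "BBOX": ",".join(map(str, bbox)),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
        "SLD": sld_url,  # server fetches role-specific rules from this URL
    }
    return endpoint + "?" + urlencode(params)

url = getmap_url("https://example.org/wms", "crisis:units",
                 (47.0, 7.0, 49.0, 9.0),
                 "https://example.org/styles/role_ops.sld")
```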
Butensky, Samuel D; Sloan, Andrew P; Meyers, Eric; Carmel, Jason B
2017-07-15
Hand function is critical for independence, and neurological injury often impairs dexterity. To measure hand function in people or forelimb function in animals, sensors are employed to quantify manipulation. These sensors make assessment easier and more quantitative and allow automation of these tasks. While automated tasks improve objectivity and throughput, they also produce large amounts of data that can be burdensome to analyze. We created software called Dexterity that simplifies data analysis of automated reaching tasks. Dexterity is MATLAB software that enables quick analysis of data from forelimb tasks. Through a graphical user interface, files are loaded and data are identified and analyzed. These data can be annotated or graphed directly. Analysis is saved, and the graph and corresponding data can be exported. For additional analysis, Dexterity provides access to custom scripts created by other users. To determine the utility of Dexterity, we performed a study to evaluate the effects of task difficulty on the degree of impairment after injury. Dexterity analyzed two months of data and allowed new users to annotate the experiment, visualize results, and save and export data easily. Previous analysis of tasks was performed with custom data analysis, requiring expertise with analysis software. Dexterity made the tools required to analyze, visualize and annotate data easy to use by investigators without data science experience. Dexterity increases accessibility to automated tasks that measure dexterity by making analysis of large data intuitive, robust, and efficient. Copyright © 2017 Elsevier B.V. All rights reserved.
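A toy example of the kind of trial-log aggregation such software automates; the CSV layout and field names below are invented for illustration and do not reflect Dexterity's actual MATLAB data structures:

```python
# Hypothetical sketch: summarizing an automated reach-task log into a
# per-day success rate, the sort of analysis Dexterity exposes through
# its graphical interface.
import csv, io
from collections import defaultdict

LOG = """day,trial,peak_force_g,success
1,1,98,1
1,2,45,0
1,3,120,1
2,1,130,1
2,2,115,1
"""

def success_rate_by_day(text):
    rates = defaultdict(list)
    for row in csv.DictReader(io.StringIO(text)):
        rates[int(row["day"])].append(int(row["success"]))
    return {day: sum(s) / len(s) for day, s in rates.items()}

rates = success_rate_by_day(LOG)  # e.g. {1: 0.667, 2: 1.0}
```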
Parkington, Karisa B; Clements, Rebecca J; Landry, Oriane; Chouinard, Philippe A
2015-10-01
We examined how performance on an associative learning task changes in a sample of undergraduate students as a function of their autism-spectrum quotient (AQ) score. The participants, without any prior knowledge of the Japanese language, learned to associate hiragana characters with button responses. In the novel condition, 50 participants learned visual-motor associations without any prior exposure to the stimuli's visual attributes. In the familiar condition, a different set of 50 participants completed a session in which they first became familiar with the stimuli's visual appearance before completing the visual-motor association learning task. Participants with higher AQ scores had a clear advantage in the novel condition; the amount of training required to reach the learning criterion correlated negatively with AQ. In contrast, participants with lower AQ scores had a clear advantage in the familiar condition; the amount of training required to reach the learning criterion correlated positively with AQ. An examination of how each of the AQ subscales correlated with these learning patterns revealed that abilities in visual discrimination-which is known to depend on the visual ventral-stream system-may have afforded an advantage in the novel condition for participants with higher AQ scores, whereas abilities in attention switching-which are known to require mechanisms in the prefrontal cortex-may have afforded an advantage in the familiar condition for participants with lower AQ scores.
The effect of changing the secondary task in dual-task paradigms for measuring listening effort.
Picou, Erin M; Ricketts, Todd A
2014-01-01
The purpose of this study was to evaluate the effect of changing the secondary task in dual-task paradigms that measure listening effort. Specifically, the effects of increasing the secondary task complexity or the depth of processing on a paradigm's sensitivity to changes in listening effort were quantified in a series of two experiments. Specific factors investigated within each experiment were background noise and visual cues. Participants in Experiment 1 were adults with normal hearing (mean age 23 years) and participants in Experiment 2 were adults with mild sloping to moderately severe sensorineural hearing loss (mean age 60.1 years). In both experiments, participants were tested using three dual-task paradigms. These paradigms had identical primary tasks, which were always monosyllable word recognition. The secondary tasks were all physical reaction time measures. The stimulus for the secondary task varied by paradigm and was a (1) simple visual probe, (2) a complex visual probe, or (3) the category of word presented. In this way, the secondary tasks mainly varied from the simple paradigm by either complexity or depth of speech processing. Using all three paradigms, participants were tested in four conditions, (1) auditory-only stimuli in quiet, (2) auditory-only stimuli in noise, (3) auditory-visual stimuli in quiet, and (4) auditory-visual stimuli in noise. During auditory-visual conditions, the talker's face was visible. Signal-to-noise ratios used during conditions with background noise were set individually so word recognition performance was matched in auditory-only and auditory-visual conditions. In noise, word recognition performance was approximately 80% and 65% for Experiments 1 and 2, respectively. For both experiments, word recognition performance was stable across the three paradigms, confirming that none of the secondary tasks interfered with the primary task. 
In Experiment 1 (listeners with normal hearing), analysis of median reaction times revealed a significant main effect of background noise on listening effort only with the paradigm that required deep processing. Visual cues did not change listening effort as measured with any of the three dual-task paradigms. In Experiment 2 (listeners with hearing loss), analysis of median reaction times revealed expected significant effects of background noise using all three paradigms, but no significant effects of visual cues. None of the dual-task paradigms were sensitive to the effects of visual cues. Furthermore, changing the complexity of the secondary task did not change dual-task paradigm sensitivity to the effects of background noise on listening effort for either group of listeners. However, the paradigm whose secondary task involved deeper processing was more sensitive to the effects of background noise for both groups of listeners. While this paradigm differed from the others in several respects, depth of processing may be partially responsible for the increased sensitivity. Therefore, this paradigm may be a valuable tool for evaluating other factors that affect listening effort.
Task-dependent individual differences in prefrontal connectivity.
Biswal, Bharat B; Eldreth, Dana A; Motes, Michael A; Rypma, Bart
2010-09-01
Recent advances in neuroimaging have permitted testing of hypotheses regarding the neural bases of individual differences, but this burgeoning literature has been characterized by inconsistent results. To test the hypothesis that differences in task demands could contribute to between-study variability in brain-behavior relationships, we had participants perform 2 tasks that varied in the extent of cognitive involvement. We examined connectivity between brain regions during a low-demand vigilance task and a higher-demand digit-symbol visual search task using Granger causality analysis (GCA). Our results showed 1) significant differences in the number of frontoparietal connections between low- and high-demand tasks, 2) that GCA can detect activity changes that correspond with task-demand changes, and 3) that faster participants showed more vigilance-related activity than slower participants, but less visual-search activity. These results suggest that relatively low-demand cognitive performance depends on spontaneous bidirectionally fluctuating network activity, whereas high-demand performance depends on a limited, unidirectional network. The nature of brain-behavior relationships may vary depending on the extent of cognitive demand. High-demand network activity may reflect the extent to which individuals require top-down executive guidance of behavior for successful task performance. Low-demand network activity may reflect task- and performance monitoring that minimizes executive requirements for guidance of behavior. PMID:20064942
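The core idea of Granger causality analysis (GCA) can be illustrated numerically: x "Granger-causes" y when x's past improves the prediction of y beyond y's own past. The sketch below is a deliberately simplified two-stage, single-lag version on synthetic data, not the GCA pipeline used in the study:

```python
# Toy Granger-style illustration: compare prediction error of y from its
# own history against prediction error after also using x's history.
import random

random.seed(1)
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.0]
for t in range(1, n):
    # y is driven by its own past AND by x's past (true coupling = 0.8)
    y.append(0.5 * y[t - 1] + 0.8 * x[t - 1] + random.gauss(0, 0.2))

def ols_fit(u, v):
    """Slope and intercept of v ~ u by ordinary least squares."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = sum((a - mu) ** 2 for a in u)
    return num / den, mv - (num / den) * mu

# Restricted model: predict y_t from y_{t-1} only.
b, a = ols_fit(y[:-1], y[1:])
resid = [yt - (a + b * yp) for yp, yt in zip(y[:-1], y[1:])]
sse_restricted = sum(r * r for r in resid)

# Augmented stage: regress those residuals on x_{t-1}; a large drop in
# error is the Granger-causal signature.
c, d = ols_fit(x[:-1], resid)
sse_full = sum((r - (d + c * xp)) ** 2 for xp, r in zip(x[:-1], resid))
```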
What you say matters: exploring visual-verbal interactions in visual working memory.
Mate, Judit; Allen, Richard J; Baqués, Josep
2012-01-01
The aim of this study was to explore whether the content of a simple concurrent verbal load task determines the extent of its interference on memory for coloured shapes. The task consisted of remembering four visual items while repeating aloud a pair of words that varied in terms of imageability and relatedness to the task set. At test, a cue appeared that was either the colour or the shape of one of the previously seen objects, with participants required to select the object's other feature from a visual array. During encoding and retention, there were four verbal load conditions: (a) a related, shape-colour pair (from outside the experimental set, i.e., "pink square"); (b) a pair of unrelated but visually imageable, concrete, words (i.e., "big elephant"); (c) a pair of unrelated and abstract words (i.e., "critical event"); and (d) no verbal load. Results showed differential effects of these verbal load conditions. In particular, imageable words (concrete and related conditions) interfered to a greater degree than abstract words. Possible implications for how visual working memory interacts with verbal memory and long-term memory are discussed.
Tapper, Anthony; Gonzalez, Dave; Roy, Eric; Niechwiej-Szwedo, Ewa
2017-02-01
The purpose of this study was to examine executive functions in team sport athletes with and without a history of concussion. Executive functions comprise many cognitive processes, including working memory, attention, and multi-tasking. Past research has shown that concussions cause difficulties in vestibular-visual and vestibular-auditory dual-tasking; however, visual-auditory tasks have rarely been examined. Twenty-nine intercollegiate varsity ice hockey athletes (age = 19.13, SD = 1.56; 15 females) performed an experimental dual-task paradigm that required simultaneously processing visual and auditory information. A brief interview, event description and self-report questionnaires were used to assign participants to each group (concussion, no-concussion). Eighteen athletes had a history of concussion and 11 had no concussion history. The two tests involved visuospatial working memory (i.e., Corsi block test) and auditory tone discrimination. Participants completed both tasks individually, then simultaneously. Two outcome variables were measured, Corsi block memory span and auditory tone discrimination accuracy. No differences were shown when each task was performed alone; however, athletes with a history of concussion had a significantly worse performance on the tone discrimination task in the dual-task condition. In conclusion, long-term deficits in executive functions were associated with a prior history of concussion when cognitive resources were stressed. Evaluations of executive functions and divided attention appear to be helpful in discriminating participants with and without a history of concussion.
Delcasso, Sébastien; Huh, Namjung; Byeon, Jung Seop; Lee, Jihyun; Jung, Min Whan; Lee, Inah
2014-11-19
The hippocampus is important for contextual behavior, and the striatum plays key roles in decision making. When studying the functional relationships with the hippocampus, prior studies have focused mostly on the dorsolateral striatum (DLS), emphasizing the antagonistic relationships between the hippocampus and DLS in spatial versus response learning. By contrast, the functional relationships between the dorsomedial striatum (DMS) and hippocampus are relatively unknown. The current study reports that lesions to both the hippocampus and DMS profoundly impaired performance of rats in a visual scene-based memory task in which the animals were required to make a choice response by using visual scenes displayed in the background. Analysis of simultaneous recordings of local field potentials revealed that the gamma oscillatory power was higher in the DMS, but not in CA1, when the rat performed the task using familiar scenes than novel ones. In addition, the CA1-DMS networks increased coherence at γ, but not at θ, rhythm as the rat mastered the task. At the single-unit level, the neuronal populations in CA1 and DMS showed differential firing patterns when responses were made using familiar visual scenes than novel ones. Such learning-dependent firing patterns were observed earlier in the DMS than in CA1 before the rat made choice responses. The present findings suggest that both the hippocampus and DMS process memory representations for visual scenes in parallel with different time courses and that flexible choice action using background visual scenes requires coordinated operations of the hippocampus and DMS at γ frequencies. Copyright © 2014 the authors.
No psychological effect of color context in a low level vision task
Pedley, Adam; Wade, Alex R
2013-01-01
Background: A remarkable series of recent papers have shown that colour can influence performance in cognitive tasks. In particular, they suggest that viewing a participant number printed in red ink or other red ancillary stimulus elements improves performance in tasks requiring local processing and impedes performance in tasks requiring global processing whilst the reverse is true for the colour blue. The tasks in these experiments require high level cognitive processing such as analogy solving or remote association tests and the chromatic effect on local vs. global processing is presumed to involve widespread activation of the autonomic nervous system. If this is the case, we might expect to see similar effects on all local vs. global task comparisons. To test this hypothesis, we asked whether chromatic cues also influence performance in tasks involving low level visual feature integration. Methods: Subjects performed either local (contrast detection) or global (form detection) tasks on achromatic dynamic Glass pattern stimuli. Coloured instructions, target frames and fixation points were used to attempt to bias performance to different task types. Based on previous literature, we hypothesised that red cues would improve performance in the (local) contrast detection task but would impede performance in the (global) form detection task. Results: A two-way, repeated measures, analysis of covariance (2×2 ANCOVA) with gender as a covariate, revealed no influence of colour on either task, F(1,29) = 0.289, p = 0.595, partial η² = 0.002. Additional analysis revealed no significant differences in only the first attempts of the tasks or in the improvement in performance between trials. Discussion: We conclude that motivational processes elicited by colour perception do not influence neuronal signal processing in the early visual system, in stark contrast to their putative effects on processing in higher areas. PMID:25075280
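For readers interpreting effect sizes like the one reported above: partial eta squared can be recovered from an F statistic and its degrees of freedom via η²p = (F · df_effect) / (F · df_effect + df_error). The numbers below are generic placeholders, not a re-analysis of the reported result:

```python
# Sketch: converting an F statistic into partial eta squared.
def partial_eta_squared(f, df_effect, df_error):
    return (f * df_effect) / (f * df_effect + df_error)

eta = partial_eta_squared(4.5, 1, 29)  # a hypothetical F(1, 29) = 4.5
```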
The effect of spectral filters on visual search in stroke patients.
Beasley, Ian G; Davies, Leon N
2013-01-01
Visual search impairment can occur following stroke. The utility of optimal spectral filters on visual search in stroke patients has not been considered to date. The present study measured the effect of optimal spectral filters on visual search response time and accuracy, using a task requiring serial processing. A stroke and control cohort undertook the task three times: (i) using an optimally selected spectral filter; (ii) the subjects were randomly assigned to two groups with group 1 using an optimal filter for two weeks, whereas group 2 used a grey filter for two weeks; (iii) the groups were crossed over with group 1 using a grey filter for a further two weeks and group 2 given an optimal filter, before undertaking the task for the final time. Initial use of an optimal spectral filter improved visual search response time but not error scores in the stroke cohort. Prolonged use of neither an optimal nor a grey filter improved response time or reduced error scores. In fact, response times increased with the filter, regardless of its type, for stroke and control subjects; this outcome may be due to contrast reduction or a reflection of task design, given that significant practice effects were noted.
Visual task performance using a monocular see-through head-mounted display (HMD) while walking.
Mustonen, Terhi; Berg, Mikko; Kaistinen, Jyrki; Kawai, Takashi; Häkkinen, Jukka
2013-12-01
A monocular see-through head-mounted display (HMD) allows the user to view displayed information while simultaneously interacting with the surrounding environment. This configuration lets people use HMDs while they are moving, such as while walking. However, sharing attention between the display and environment can compromise a person's performance in any ongoing task, and controlling one's gait may add further challenges. In this study, the authors investigated how the requirements of HMD-administered visual tasks altered users' performance while they were walking. Twenty-four university students completed 3 cognitive tasks (high- and low-working memory load, visual vigilance) on an HMD while seated and while simultaneously performing a paced walking task in a controlled environment. The results show that paced walking worsened performance (d', reaction time) in all HMD-administered tasks, but visual vigilance deteriorated more than memory performance. The HMD-administered tasks also worsened walking performance (speed, path overruns) in a manner that varied according to the overall demands of the task. These results suggest that people's ability to process information displayed on an HMD may worsen while they are in motion. Furthermore, the use of an HMD can critically alter a person's natural performance, such as their ability to guide and control their gait. In particular, visual tasks that involve constant monitoring of the HMD should be avoided. These findings highlight the need for careful consideration of the type and difficulty of information that can be presented through HMDs while still letting the user achieve an acceptable overall level of performance in various contexts of use. PsycINFO Database Record (c) 2013 APA, all rights reserved.
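The d' measure used above comes from standard signal detection theory: d' = z(hit rate) - z(false-alarm rate), where z is the inverse normal CDF. A minimal sketch with illustrative rates (not the study's data):

```python
# Sketch of the standard d' (sensitivity) computation.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

sensitivity = d_prime(0.84, 0.16)  # symmetric rates give d' near 2
```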
Model of rhythmic ball bouncing using a visually controlled neural oscillator.
Avrin, Guillaume; Siegler, Isabelle A; Makarov, Maria; Rodriguez-Ayerbe, Pedro
2017-10-01
The present paper investigates the sensory-driven modulations of central pattern generator dynamics that can be expected to reproduce human behavior during rhythmic hybrid tasks. We propose a theoretical model of human sensorimotor behavior able to account for the observed data from the ball-bouncing task. The novel control architecture is composed of a Matsuoka neural oscillator coupled with the environment through visual sensory feedback. The architecture's ability to reproduce human-like performance during the ball-bouncing task in the presence of perturbations is quantified by comparison of simulated and recorded trials. The results suggest that human visual control of the task is achieved online. The adaptive behavior is made possible by a parametric and state control of the limit cycle emerging from the interaction of the rhythmic pattern generator, the musculoskeletal system, and the environment. NEW & NOTEWORTHY The study demonstrates that a behavioral model based on a neural oscillator controlled by visual information is able to accurately reproduce human modulations in a motor action with respect to sensory information during the rhythmic ball-bouncing task. The model attractor dynamics emerging from the interaction between the neuromusculoskeletal system and the environment met task requirements, environmental constraints, and human behavioral choices without relying on movement planning and explicit internal models of the environment. Copyright © 2017 the American Physiological Society.
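A minimal Euler-integrated sketch of the two-neuron Matsuoka oscillator at the core of such a model; the parameter values are common illustrative defaults, not those fitted to the ball-bouncing data:

```python
# Two mutually inhibiting neurons with slow self-adaptation produce an
# alternating rhythmic output (the classic Matsuoka half-center).
def matsuoka(steps=8000, dt=0.001, tau_r=0.1, tau_a=0.6,
             beta=2.5, w=2.5, s=1.0):
    x1, x2, v1, v2 = 0.1, 0.0, 0.0, 0.0      # slight asymmetry breaks the tie
    out = []
    for _ in range(steps):
        y1, y2 = max(0.0, x1), max(0.0, x2)  # rectified firing rates
        x1 += dt * (-x1 - beta * v1 - w * y2 + s) / tau_r
        x2 += dt * (-x2 - beta * v2 - w * y1 + s) / tau_r
        v1 += dt * (-v1 + y1) / tau_a        # slow fatigue of the winner
        v2 += dt * (-v2 + y2) / tau_a
        out.append(y1 - y2)                  # alternating drive signal
    return out

signal = matsuoka()
```

Sensory coupling, as in the paper, would enter through the tonic input s or the neuron states; here the oscillator runs open-loop.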
Azulay, Haim; Striem, Ella; Amedi, Amir
2009-05-01
People tend to close their eyes when trying to retrieve an event or a visual image from memory. However the brain mechanisms behind this phenomenon remain poorly understood. Recently, we showed that during visual mental imagery, auditory areas show a much more robust deactivation than during visual perception. Here we ask whether this is a special case of a more general phenomenon involving retrieval of intrinsic, internally stored information, which would result in crossmodal deactivations in other sensory cortices which are irrelevant to the task at hand. To test this hypothesis, a group of 9 sighted individuals were scanned while performing a memory retrieval task for highly abstract words (i.e., with low imaginability scores). We also scanned a group of 10 congenitally blind individuals, who by definition do not have any visual imagery per se. In sighted subjects, both auditory and visual areas were robustly deactivated during memory retrieval, whereas in the blind the auditory cortex was deactivated while visual areas, shown previously to be relevant for this task, presented a positive BOLD signal. These results suggest that deactivation may be most prominent in task-irrelevant sensory cortices whenever there is a need for retrieval or manipulation of internally stored representations. Thus, there is a task-dependent balance of activation and deactivation that might allow maximization of resources and filtering out of irrelevant information to enable allocation of attention to the required task. Furthermore, these results suggest that the balance between positive and negative BOLD might be crucial to our understanding of a large variety of intrinsic and extrinsic tasks including high-level cognitive functions, sensory processing and multisensory integration.
Balasubramaniam, Ramesh
2014-01-01
Sensory information from our eyes, skin and muscles helps guide and correct balance. Less appreciated, however, is that delays in the transmission of sensory information between our eyes, limbs and central nervous system can exceed several tens of milliseconds. Investigating how these time-delayed sensory signals influence balance control is central to understanding the postural system. Here, we investigate how delayed visual feedback and cognitive performance influence postural control in healthy young and older adults. The task required that participants position their center of pressure (COP) in a fixed target as accurately as possible without visual feedback about their COP location (eyes-open balance), or with artificial time delays imposed on visual COP feedback. On selected trials, the participants also performed a silent arithmetic task (cognitive dual task). We separated COP time series into distinct frequency components using low and high-pass filtering routines. Visual feedback delays affected low frequency postural corrections in young and older adults, with larger increases in postural sway noted for the group of older adults. In comparison, cognitive performance reduced the variability of rapid center of pressure displacements in young adults, but did not alter postural sway in the group of older adults. Our results demonstrate that older adults prioritize vision to control posture. This visual reliance persists even when feedback about the task is delayed by several hundred milliseconds. PMID:24614576
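The low/high-frequency split of a COP time series can be sketched with a simple moving average standing in for the low-pass filter; the study's actual filter routines and cutoffs are not specified here, so the window length, sampling rate, and signal are illustrative:

```python
# Sketch: separate a postural signal into a slow (low-frequency) component
# and a fast (high-frequency) residual using a centered moving average.
import math

def split_frequency_bands(signal, window=25):
    half = window // 2
    low = []
    for i in range(len(signal)):
        seg = signal[max(0, i - half):i + half + 1]
        low.append(sum(seg) / len(seg))          # local mean = slow component
    high = [s - l for s, l in zip(signal, low)]  # residual = fast component
    return low, high

# Synthetic COP: a slow 0.3 Hz drift plus fast 8 Hz jitter, sampled at 100 Hz.
t = [i / 100 for i in range(500)]
cop = [math.sin(2 * math.pi * 0.3 * ti) + 0.1 * math.sin(2 * math.pi * 8 * ti)
       for ti in t]
low, high = split_frequency_bands(cop)
```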
Does It Really Matter Where You Look When Walking on Stairs? Insights from a Dual-Task Study
Miyasike-daSilva, Veronica; McIlroy, William E.
2012-01-01
Although the visual system is known to provide relevant information to guide stair locomotion, there is less understanding of the specific contributions of foveal and peripheral visual field information. The present study investigated the specific role of foveal vision during stair locomotion and ground-stairs transitions by using a dual-task paradigm to influence the ability to rely on foveal vision. Fifteen healthy adults (26.9±3.3 years; 8 females) ascended a 7-step staircase under four conditions: no secondary tasks (CONTROL); gaze fixation on a fixed target located at the end of the pathway (TARGET); visual reaction time task (VRT); and auditory reaction time task (ART). Gaze fixations towards stair features were significantly reduced in TARGET and VRT compared to CONTROL and ART. Despite the reduced fixations, participants were able to successfully ascend stairs and rarely used the handrail. Step time was increased during VRT compared to CONTROL in most stair steps. Navigating on the transition steps did not require more gaze fixations than the middle steps. However, reaction time tended to increase during locomotion on transitions suggesting additional executive demands during this phase. These findings suggest that foveal vision may not be an essential source of visual information regarding stair features to guide stair walking, despite the unique control challenges at transition phases as highlighted by phase-specific challenges in dual-tasking. Instead, the tendency to look at the steps in usual conditions likely provides a stable reference frame for extraction of visual information regarding step features from the entire visual field. PMID:22970297
Flow visualization of CFD using graphics workstations
NASA Technical Reports Server (NTRS)
Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon
1987-01-01
High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.
High Performance Molecular Visualization: In-Situ and Parallel Rendering with EGL.
Stone, John E; Messmer, Peter; Sisneros, Robert; Schulten, Klaus
2016-05-01
Large scale molecular dynamics simulations produce terabytes of data that is impractical to transfer to remote facilities. It is therefore necessary to perform visualization tasks in-situ as the data are generated, or by running interactive remote visualization sessions and batch analyses co-located with direct access to high performance storage systems. A significant challenge for deploying visualization software within clouds, clusters, and supercomputers involves the operating system software required to initialize and manage graphics acceleration hardware. Recently, it has become possible for applications to use the Embedded-system Graphics Library (EGL) to eliminate the requirement for windowing system software on compute nodes, thereby eliminating a significant obstacle to broader use of high performance visualization applications. We outline the potential benefits of this approach in the context of visualization applications used in the cloud, on commodity clusters, and supercomputers. We discuss the implementation of EGL support in VMD, a widely used molecular visualization application, and we outline benefits of the approach for molecular visualization tasks on petascale computers, clouds, and remote visualization servers. We then provide a brief evaluation of the use of EGL in VMD, with tests using developmental graphics drivers on conventional workstations and on Amazon EC2 G2 GPU-accelerated cloud instance types. We expect that the techniques described here will be of broad benefit to many other visualization applications.
Are children with low vision adapted to the visual environment in classrooms of mainstream schools?
Negiloni, Kalpa; Ramani, Krishna Kumar; Jeevitha, R; Kalva, Jayashree; Sudhir, Rachapalle Reddi
2018-02-01
The study aimed to evaluate the classroom environment of children with low vision and provide recommendations to reduce visual stress, with a focus on mainstream schooling. The medical records of 110 children (5-17 years) seen in the low vision clinic during a 1-year period (2015) at a tertiary care center in south India were extracted. The visual function levels of the children were compared to the details of their classroom environment. The study evaluated and recommended the chalkboard visual task size and viewing distance required for children with mild, moderate, and severe visual impairment (VI). The major causes of low vision based on the site of abnormality and etiology were retinal (80%) and hereditary (67%) conditions, respectively, in children with mild (n = 18), moderate (n = 72), and severe (n = 20) VI. Many of the children (72%) had difficulty viewing the chalkboard, and common strategies used for better visibility included copying from friends (47%) and moving closer to the chalkboard (42%). To view the chalkboard with reduced visual stress, a child with mild VI can be seated at a maximum distance of 4.3 m from the chalkboard, with the minimum size of the visual task (height of lowercase letter writing on the chalkboard) recommended to be 3 cm. For the 3/60-6/60 range, the maximum viewing distance with a visual task size of 4 cm is recommended to be 85 cm to 1.7 m. Simple modifications of visual task size and seating arrangements can give children with low vision better visibility of the chalkboard and reduced visual stress, helping them manage in mainstream schools.
The evolution of meaning: spatio-temporal dynamics of visual object recognition.
Clarke, Alex; Taylor, Kirsten I; Tyler, Lorraine K
2011-08-01
Research on the spatio-temporal dynamics of visual object recognition suggests a recurrent, interactive model whereby an initial feedforward sweep through the ventral stream to prefrontal cortex is followed by recurrent interactions. However, critical questions remain regarding the factors that mediate the degree of recurrent interactions necessary for meaningful object recognition. The novel prediction we test here is that recurrent interactivity is driven by increasing semantic integration demands as defined by the complexity of semantic information required by the task and driven by the stimuli. To test this prediction, we recorded magnetoencephalography data while participants named living and nonliving objects during two naming tasks. We found that the spatio-temporal dynamics of neural activity were modulated by the level of semantic integration required. Specifically, source reconstructed time courses and phase synchronization measures showed increased recurrent interactions as a function of semantic integration demands. These findings demonstrate that the cortical dynamics of object processing are modulated by the complexity of semantic information required from the visual input.
Völter, Christoph J; Call, Josep
2012-09-01
What kind of information animals use when solving problems is a controversial topic. Previous research suggests that, in some situations, great apes prefer to use causally relevant cues over arbitrary ones. To further examine to what extent great apes are able to use information about causal relations, we presented three different puzzle box problems to the four nonhuman great ape species. Of primary interest here was a comparison between one group of apes that received visual access to the functional mechanisms of the puzzle boxes and one group that did not. Apes' performance in the first two, less complex puzzle boxes revealed that they are able to solve such problems by means of trial-and-error learning, requiring no information about the causal structure of the problem. However, visual inspection of the functional mechanisms of the puzzle boxes reduced the amount of time needed to solve the problems. In the case of the most complex problem, which required the use of a crank, visual feedback about what happened when the handle of the crank was turned was necessary for the apes to solve the task. Once the solution was acquired, however, visual feedback was no longer required. We conclude that visual feedback about the consequences of their actions helps great apes to solve complex problems. As the crank task matches the basic requirements of vertical string pulling in birds, the present results are discussed in light of recent findings with corvids.
Effects of Hearing Status and Sign Language Use on Working Memory
Sarchet, Thomastine; Trani, Alexandra
2016-01-01
Deaf individuals have been found to score lower than hearing individuals across a variety of memory tasks involving both verbal and nonverbal stimuli, particularly those requiring retention of serial order. Deaf individuals who are native signers, meanwhile, have been found to score higher on visual-spatial memory tasks than on verbal-sequential tasks and higher on some visual-spatial tasks than hearing nonsigners. However, hearing status and preferred language modality (signed or spoken) frequently are confounded in such studies. That situation is resolved in the present study by including deaf students who use spoken language and sign language interpreting students (hearing signers) as well as deaf signers and hearing nonsigners. Three complex memory span tasks revealed overall advantages for hearing signers and nonsigners over both deaf signers and deaf nonsigners on 2 tasks involving memory for verbal stimuli (letters). There were no differences among the groups on the task involving visual-spatial stimuli. The results are consistent with and extend recent findings concerning the effects of hearing status and language on memory and are discussed in terms of language modality, hearing status, and cognitive abilities among deaf and hearing individuals. PMID:26755684
Retro-cue benefits in working memory without sustained focal attention.
Rerko, Laura; Souza, Alessandra S; Oberauer, Klaus
2014-07-01
In working memory (WM) tasks, performance can be boosted by directing attention to one memory object: When a retro-cue in the retention interval indicates which object will be tested, responding is faster and more accurate (the retro-cue benefit). We tested whether the retro-cue benefit in WM depends on sustained attention to the cued object by inserting an attention-demanding interruption task between the retro-cue and the memory test. In the first experiment, the interruption task required participants to shift their visual attention away from the cued representation and to a visual classification task on colors. In the second and third experiments, the interruption task required participants to shift their focal attention within WM: Attention was directed away from the cued representation by probing another representation from the memory array prior to probing the cued object. The retro-cue benefit was not attenuated by shifts of perceptual attention or by shifts of attention within WM. We concluded that sustained attention is not needed to maintain the cued representation in a state of heightened accessibility.
Brain-computer interface on the basis of EEG system Encephalan
NASA Astrophysics Data System (ADS)
Maksimenko, Vladimir; Badarin, Artem; Nedaivozov, Vladimir; Kirsanov, Daniil; Hramov, Alexander
2018-04-01
We propose a brain-computer interface (BCI) for estimating the brain's response to presented visual tasks. The BCI is based on the Encephalan-EEGR-19/26 EEG recorder (Medicom MTD, Russia) supplemented by custom in-house acquisition software. The BCI was tested during experimental sessions in which the subject perceived bistable visual stimuli and classified them according to their interpretation. We exposed participants to different external conditions and observed a significant decrease in the response associated with perceiving the bistable visual stimuli when a distraction was present. Based on these results, we propose that the BCI can be used to estimate human alertness during tasks that require substantial visual attention.
Evidence of different underlying processes in pattern recall and decision-making.
Gorman, Adam D; Abernethy, Bruce; Farrow, Damian
2015-01-01
The visual search characteristics of expert and novice basketball players were recorded during pattern recall and decision-making tasks to determine whether the two tasks shared common visual-perceptual processing strategies. The order in which participants entered the pattern elements in the recall task was also analysed to further examine the nature of the visual-perceptual strategies and the relative emphasis placed upon particular pattern features. The experts demonstrated superior performance across the recall and decision-making tasks [see also Gorman, A. D., Abernethy, B., & Farrow, D. (2012). Classical pattern recall tests and the prospective nature of expert performance. The Quarterly Journal of Experimental Psychology, 65, 1151-1160; Gorman, A. D., Abernethy, B., & Farrow, D. (2013a). Is the relationship between pattern recall and decision-making influenced by anticipatory recall? The Quarterly Journal of Experimental Psychology, 66, 2219-2236] but a number of significant differences in the visual search data highlighted disparities in the processing strategies, suggesting that recall skill may utilize different underlying visual-perceptual processes than those required for accurate decision-making performance in the natural setting. Performance on the recall task was characterized by a proximal-to-distal order of entry of the pattern elements with participants tending to enter the players located closest to the ball carrier earlier than those located more distal to the ball carrier. The results provide further evidence of the underlying perceptual processes employed by experts when extracting visual information from complex and dynamic patterns.
Keenan, Kevin G; Huddleston, Wendy E; Ernest, Bradley E
2017-11-01
The purpose of the study was to determine the visual strategies used by older adults during a pinch grip task and to assess the relations between visual strategy, deficits in attention, and increased force fluctuations in older adults. Eye movements of 23 older adults (>65 yr) were monitored during a low-force pinch grip task while subjects viewed three common visual feedback displays. Performance on the Grooved Pegboard test and an attention task (which required no concurrent hand movements) was also measured. Visual strategies varied across subjects and depended on the type of visual feedback provided to the subjects. First, while viewing a high-gain compensatory feedback display (horizontal bar moving up and down with force), 9 of 23 older subjects adopted a strategy of performing saccades during the task, which resulted in 2.5 times greater force fluctuations in those that exhibited saccades compared with those who maintained fixation near the target line. Second, during pursuit feedback displays (force trace moving left to right across screen and up and down with force), all subjects exhibited multiple saccades, and increased force fluctuations were associated (rs = 0.6; P = 0.002) with fewer saccades during the pursuit task. Also, decreased low-frequency (<4 Hz) force fluctuations and Grooved Pegboard times were significantly related (P = 0.033 and P = 0.005, respectively) with higher (i.e., better) attention z scores. Comparison of these results with our previously published results in young subjects indicates that saccadic eye movements and attention are related to force control in older adults. NEW & NOTEWORTHY The significant contributions of the study are the addition of eye movement data and an attention task to explain differences in hand motor control across different visual displays in older adults. Older participants used different visual strategies across varying feedback displays, and saccadic eye movements were related with motor performance.
In addition, those older individuals with deficits in attention had impaired motor performance on two different hand motor control tasks, including the Grooved Pegboard test.
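The abstract above quantifies "force fluctuations" without naming the exact metric; a common index in pinch-grip steadiness studies is the coefficient of variation of the force signal. A minimal sketch under that assumption, with made-up sample values:

```python
from statistics import mean, stdev

def force_cv(samples):
    """Coefficient of variation (%): SD / mean * 100.
    A common force-fluctuation index; assumed here, since the
    abstract does not state the study's exact metric."""
    return stdev(samples) / mean(samples) * 100

steady   = [2.00, 2.01, 1.99, 2.00, 2.01]  # hypothetical force trace (N)
variable = [2.0, 2.3, 1.7, 2.2, 1.8]       # hypothetical, less steady trace
print(force_cv(steady) < force_cv(variable))  # the steadier trace has lower CV
```

A higher CV corresponds to greater fluctuations, as in the 2.5-fold difference reported for subjects who made saccades.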
Reasoning and Memory: People Make Varied Use of the Information Available in Working Memory
ERIC Educational Resources Information Center
Hardman, Kyle O.; Cowan, Nelson
2016-01-01
Working memory (WM) is used for storing information in a highly accessible state so that other mental processes, such as reasoning, can use that information. Some WM tasks require that participants not only store information, but also reason about that information to perform optimally on the task. In this study, we used visual WM tasks that had…
The influence of artificial scotomas on eye movements during visual search.
Cornelissen, Frans W; Bruin, Klaas J; Kooijman, Aart C
2005-01-01
Fixation durations are normally adapted to the difficulty of the foveal analysis task. We examine to what extent artificial central and peripheral visual field defects interfere with this adaptation process. Subjects performed a visual search task while their eye movements were registered. The latter were used to drive a real-time gaze-dependent display that was used to create artificial central and peripheral visual field defects. Recorded eye movements were used to determine saccadic amplitude, number of fixations, fixation durations, return saccades, and changes in saccade direction. For central defects, although fixation duration increased with the size of the absolute central scotoma, this increase was too small to keep recognition performance optimal, evident from an associated increase in the rate of return saccades. Providing a relatively small amount of visual information in the central scotoma did substantially reduce subjects' search times but not their fixation durations. Surprisingly, reducing the size of the tunnel also prolonged fixation duration for peripheral defects. This manipulation also decreased the rate of return saccades, suggesting that the fixations were prolonged beyond the duration required by the foveal task. Although we find that adaptation of fixation duration to task difficulty clearly occurs in the presence of artificial scotomas, we also find that such field defects may render the adaptation suboptimal for the task at hand. Thus, visual field defects may not only hinder vision by limiting what the subject sees of the environment but also by limiting the visual system's ability to program efficient eye movements. We speculate this is because of how visual field defects bias the balance between saccade generation and fixation stabilization.
Pailian, Hrag; Halberda, Justin
2015-04-01
We investigated the psychometric properties of the one-shot change detection task for estimating visual working memory (VWM) storage capacity, and also introduced and tested an alternative flicker change detection task for estimating these limits. In three experiments, we found that the one-shot whole-display task returns estimates of VWM storage capacity (K) that are unreliable across set sizes, suggesting that the whole-display task is measuring different things at different set sizes. In two additional experiments, we found that the one-shot single-probe variant shows improvements in the reliability and consistency of K estimates. In another additional experiment, we found that a one-shot whole-display-with-click task (requiring target localization) also showed improvements in reliability and consistency. The latter results suggest that the one-shot task can return reliable and consistent estimates of VWM storage capacity (K), and they highlight the possibility that the requirement to localize the changed target is what engenders this enhancement. Through a final series of four experiments, we introduced and tested an alternative flicker change detection method that also requires the observer to localize the changing target and that generates, from response times, an estimate of VWM storage capacity (K). We found that estimates of K from the flicker task correlated with estimates from the traditional one-shot task and also had high reliability and consistency. We highlight the flicker method's ability to estimate executive functions as well as VWM storage capacity, and discuss the potential for measuring multiple abilities with the one-shot and flicker tasks.
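The abstract reports K estimates without giving the estimators. The change-detection literature standardly uses Cowan's formula for single-probe displays and Pashler's correction for whole-display probes; a minimal sketch, assuming those standard formulas rather than anything stated in the abstract:

```python
def cowan_k(hit_rate, fa_rate, set_size):
    """Cowan's K for single-probe change detection: K = N * (H - FA)."""
    return set_size * (hit_rate - fa_rate)

def pashler_k(hit_rate, fa_rate, set_size):
    """Pashler's K for whole-display change detection:
    K = N * (H - FA) / (1 - FA)."""
    return set_size * (hit_rate - fa_rate) / (1.0 - fa_rate)

# Hypothetical performance at set size 6: 80% hits, 20% false alarms.
print(cowan_k(0.8, 0.2, 6))    # ~3.6 items
print(pashler_k(0.8, 0.2, 6))  # ~4.5 items
```

That the two estimators diverge from the same raw data illustrates why K can behave differently across task variants and set sizes, as the abstract reports.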
Normal Performance in Non-Visual Social Cognition Tasks in Women with Turner Syndrome.
Anaki, David; Zadikov-Mor, Tal; Gepstein, Vardit; Hochberg, Ze'ev
2018-01-01
Turner syndrome (TS) is a chromosomal disorder in women resulting from a partial or complete absence of the X chromosome. In addition to physical and hormonal dysfunctions, along with a unique neurocognitive profile, women with TS are reported to suffer from social functioning difficulties. Yet, it is unclear whether these difficulties stem from impairments in social cognition per se or from other deficits that characterize TS but are not specific to social cognition. Previous research that has probed social functioning in TS is equivocal regarding the source of these psychosocial problems since they have mainly used tasks that were dependent on visual-spatial skills, which are known to be compromised in TS. In the present study, we tested 26 women with TS and 26 matched participants on three social cognition tasks that did not require any visual-spatial capacities but rather relied on auditory-verbal skills. The results revealed that in all three tasks the TS participants did not differ from their control counterparts. The same TS cohort was found, in an earlier study, to be impaired, relative to controls, in other social cognition tasks that were dependent on visual-spatial skills. Taken together these findings suggest that the social problems, documented in TS, may be related to non-specific spatial-visual factors that affect their social cognition skills.
A comparison of the vigilance performance of men and women using a simulated radar task.
DOT National Transportation Integrated Search
1978-03-01
The present study examined the question of possible sex differences in the ability to sustain attention to a complex monitoring task requiring only a detection response to critical stimulus changes. The visual display was designed to approximate a fu...
Divided attention and mental effort after severe traumatic brain injury.
Azouvi, Philippe; Couillet, Josette; Leclercq, Michel; Martin, Yves; Asloun, Sybille; Rousseaux, Marc
2004-01-01
The aim of this study was to assess dual-task performance in TBI patients, under different experimental conditions, with or without explicit emphasis on one of two tasks. Results were compared with measurement of the subjective mental effort required to perform each task. Forty-three severe TBI patients at the subacute or chronic phase performed two tasks under single- and dual-task conditions: (a) random generation; (b) a visual go/no-go reaction time task. Three dual-task conditions were given, requiring participants either to treat both tasks as equally important or to focus preferentially on one of them. Patients were compared to matched controls. Subjective mental effort was rated on a visual analogue scale. TBI patients showed a disproportionate increase in reaction time in the go/no-go task under the dual-task condition. However, they were just as able as controls to adapt performance to the specific instructions about the task to be emphasised. Patients reported significantly higher subjective mental effort, but the variation of mental effort according to task condition was similar to that of controls. These results suggest that the divided attention deficit of TBI patients is related to a reduction in available processing resources rather than an impairment of strategic processes responsible for attentional allocation and switching. The higher level of subjective mental effort may explain why TBI patients frequently complain of mental fatigue, although this subjective complaint seems to be relatively independent of cognitive impairment.
Visual short-term memory load reduces retinotopic cortex response to contrast.
Konstantinou, Nikos; Bahrami, Bahador; Rees, Geraint; Lavie, Nilli
2012-11-01
Load Theory of attention suggests that high perceptual load in a task leads to reduced sensory visual cortex response to task-unrelated stimuli resulting in "load-induced blindness" [e.g., Lavie, N. Attention, distraction and cognitive control under load. Current Directions in Psychological Science, 19, 143-148, 2010; Lavie, N. Distracted and confused?: Selective attention under load. Trends in Cognitive Sciences, 9, 75-82, 2005]. Consideration of the findings that visual STM (VSTM) involves sensory recruitment [e.g., Pasternak, T., & Greenlee, M. Working memory in primate sensory systems. Nature Reviews Neuroscience, 6, 97-107, 2005] within Load Theory led us to a new hypothesis regarding the effects of VSTM load on visual processing. If VSTM load draws on sensory visual capacity, then similar to perceptual load, high VSTM load should also reduce visual cortex response to incoming stimuli leading to a failure to detect them. We tested this hypothesis with fMRI and behavioral measures of visual detection sensitivity. Participants detected the presence of a contrast increment during the maintenance delay in a VSTM task requiring maintenance of color and position. Increased VSTM load (manipulated by increased set size) led to reduced retinotopic visual cortex (V1-V3) responses to contrast as well as reduced detection sensitivity, as we predicted. Additional visual detection experiments established a clear tradeoff between the amount of information maintained in VSTM and detection sensitivity, while ruling out alternative accounts for the effects of VSTM load in terms of differential spatial allocation strategies or task difficulty. These findings extend Load Theory to demonstrate a new form of competitive interactions between early visual cortex processing and visual representations held in memory under load and provide a novel line of support for the sensory recruitment hypothesis of VSTM.
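The detection-sensitivity measure in such contrast-detection experiments is conventionally d′ from signal detection theory, computed as z(hits) minus z(false alarms); the abstract does not spell out the computation, so this is a generic sketch with hypothetical rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(H) - z(FA), the standard SDT measure
    of detection sensitivity (assumed here; not given in the abstract)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical pattern: sensitivity drops as VSTM set size grows.
low_load  = d_prime(0.90, 0.10)  # e.g., maintaining 1 item
high_load = d_prime(0.75, 0.20)  # e.g., maintaining 4 items
print(low_load > high_load)      # reduced sensitivity under load
```

A d′ of zero corresponds to chance detection, so a load-induced drop toward zero is the behavioral signature of the "load-induced blindness" the theory predicts.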
Visual short-term memory always requires general attention.
Morey, Candice C; Bieler, Malte
2013-02-01
The role of attention in visual memory remains controversial; while some evidence has suggested that memory for binding between features demands no more attention than does memory for the same features, other evidence has indicated cognitive costs or mnemonic benefits for explicitly attending to bindings. We attempted to reconcile these findings by examining how memory for binding, for features, and for features during binding is affected by a concurrent attention-demanding task. We demonstrated that performing a concurrent task impairs memory for as few as two visual objects, regardless of whether each object includes one or more features. We argue that this pattern of results reflects an essential role for domain-general attention in visual memory, regardless of the simplicity of the to-be-remembered stimuli. We then discuss the implications of these findings for theories of visual working memory.
Effects of motor congruence on visual working memory.
Quak, Michel; Pecher, Diane; Zeelenberg, Rene
2014-10-01
Grounded-cognition theories suggest that memory shares processing resources with perception and action. The motor system could be used to help memorize visual objects. In two experiments, we tested the hypothesis that people use motor affordances to maintain object representations in working memory. Participants performed a working memory task on photographs of manipulable and nonmanipulable objects. The manipulable objects were objects that required either a precision grip (i.e., small items) or a power grip (i.e., large items) to use. A concurrent motor task that could be congruent or incongruent with the manipulable objects caused no difference in working memory performance relative to nonmanipulable objects. Moreover, the precision- or power-grip motor task did not affect memory performance on small and large items differently. These findings suggest that the motor system plays no part in visual working memory.
Identifying cognitive distraction using steering wheel reversal rates.
Kountouriotis, Georgios K; Spyridakos, Panagiotis; Carsten, Oliver M J; Merat, Natasha
2016-11-01
The influence of driver distraction on driving performance is not yet well understood, but it can have detrimental effects on road safety. In this study, we examined the effects of visual and non-visual distractions during driving, using a high-fidelity driving simulator. The visual task was presented either at an offset angle on an in-vehicle screen, or on the back of a moving lead vehicle. Similar to results from previous studies in this area, non-visual (cognitive) distraction resulted in improved lane keeping performance and increased gaze concentration towards the centre of the road, compared to baseline driving, and further examination of the steering control metrics indicated an increase in steering wheel reversal rates, steering wheel acceleration, and steering entropy. We show, for the first time, that when the visual task is presented centrally, drivers' lane deviation reduces (similar to non-visual distraction), whilst measures of steering control, overall, indicated more steering activity, compared to baseline. When using a visual task that required the diversion of gaze to an in-vehicle display, but without a manual element, lane keeping performance was similar to baseline driving. Steering wheel reversal rates were found to adequately tease apart the effects of non-visual distraction (increase of 0.5° reversals) and visual distraction with offset gaze direction (increase of 2.5° reversals). These findings are discussed in terms of steering control during different types of in-vehicle distraction, and the possible role of manual interference by distracting secondary tasks.
Memory-Based Attention Capture when Multiple Items Are Maintained in Visual Working Memory
Hollingworth, Andrew; Beck, Valerie M.
2016-01-01
Efficient visual search requires that attention is guided strategically to relevant objects, and most theories of visual search implement this function by means of a target template maintained in visual working memory (VWM). However, there is currently debate over the architecture of VWM-based attentional guidance. We contrasted a single-item-template hypothesis with a multiple-item-template hypothesis, which differ in their claims about structural limits on the interaction between VWM representations and perceptual selection. Recent evidence from van Moorselaar, Theeuwes, and Olivers (2014) indicated that memory-based capture during search—an index of VWM guidance—is not observed when memory set size is increased beyond a single item, suggesting that multiple items in VWM do not guide attention. In the present study, we maximized the overlap between multiple colors held in VWM and the colors of distractors in a search array. Reliable capture was observed when two colors were held in VWM and both colors were present as distractors, using both the original van Moorselaar et al. singleton-shape search task and a search task that required focal attention to array elements (gap location in outline square stimuli). In the latter task, memory-based capture was consistent with the simultaneous guidance of attention by multiple VWM representations. PMID:27123681
Comparing visual search and eye movements in bilinguals and monolinguals
Hout, Michael C.; Walenchok, Stephen C.; Azuma, Tamiko; Goldinger, Stephen D.
2017-01-01
Recent research has suggested that bilinguals show advantages over monolinguals in visual search tasks, although these findings have been derived from global behavioral measures of accuracy and response times. In the present study we sought to explore the bilingual advantage by using more sensitive eyetracking techniques across three visual search experiments. These spatially and temporally fine-grained measures allowed us to carefully investigate any nuanced attentional differences between bilinguals and monolinguals. Bilingual and monolingual participants completed visual search tasks that varied in difficulty. The experiments required participants to make careful discriminations in order to detect target Landolt Cs among similar distractors. In Experiment 1, participants performed both feature and conjunction search. In Experiments 2 and 3, participants performed visual search while making different types of speeded discriminations, after either locating the target or mentally updating a constantly changing target. The results across all experiments revealed that bilinguals and monolinguals were equally efficient at guiding attention and generating responses. These findings suggest that the bilingual advantage does not reflect a general benefit in attentional guidance, but could reflect more efficient guidance only under specific task demands. PMID:28508116
Choosing colors for map display icons using models of visual search.
Shive, Joshua; Francis, Gregory
2013-04-01
We show how to choose colors for icons on maps to minimize search time using predictions of a model of visual search. The model analyzes digital images of a search target (an icon on a map) and a search display (the map containing the icon) and predicts search time as a function of target-distractor color distinctiveness and target eccentricity. We parameterized the model using data from a visual search task and performed a series of optimization tasks to test the model's ability to choose colors for icons to minimize search time across icons. Map display designs made by this procedure were tested experimentally. In a follow-up experiment, we examined the model's flexibility to assign colors in novel search situations. The model fits human performance, performs well on the optimization tasks, and can choose colors for icons on maps with novel stimuli to minimize search time without requiring additional model parameter fitting. Models of visual search can suggest color choices that produce search time reductions for display icons. Designers should consider constructing visual search models as a low-cost method of evaluating color assignments.
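A model of this kind can be sketched in miniature. The cost function and weights below are hypothetical stand-ins, not the authors' fitted model; the point is the structure: predict search time from target-distractor color distance and eccentricity, then assign each icon the candidate color with the lowest predicted time.

```python
import math

def color_distance(c1, c2):
    # Euclidean distance in RGB space; a real model would likely use a
    # perceptually uniform space such as CIELAB (illustrative choice here).
    return math.dist(c1, c2)

def predicted_search_time(target, distractors, eccentricity,
                          base=0.4, w_color=1.0, w_ecc=0.02):
    # Toy prediction: search time falls as the target's minimum color
    # distance to any distractor grows, and rises with eccentricity.
    # base, w_color, w_ecc are illustrative assumptions, not fitted values.
    min_dist = min(color_distance(target, d) for d in distractors)
    return base + w_color / (1.0 + min_dist) + w_ecc * eccentricity

def choose_color(candidates, distractors, eccentricity):
    # Pick the candidate color that minimizes predicted search time.
    return min(candidates,
               key=lambda c: predicted_search_time(c, distractors, eccentricity))

distractors = [(200, 30, 30), (210, 40, 20)]  # reddish icons already on the map
candidates = [(190, 35, 25), (30, 60, 220)]   # a similar red vs. a distinct blue
best = choose_color(candidates, distractors, eccentricity=5.0)
print(best)  # → (30, 60, 220): the blue candidate, far from the red distractors
```

Repeating this choice over all icons on a display is the simplest version of the optimization task the abstract describes.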
Stages of functional processing and the bihemispheric recognition of Japanese Kana script.
Yoshizaki, K
2000-04-01
Two experiments were carried out to examine the effects of functional steps on the benefits of interhemispheric integration. The purpose of Experiment 1 was to investigate the validity of the Banich (1995a) model, in which the benefits of interhemispheric processing increase as the task involves more functional steps. The 16 right-handed subjects were given two types of Hiragana-Katakana script matching tasks: the Name Identity (NI) task and the vowel matching (VM) task, which involved more functional steps than the NI task. The VM task required subjects to decide whether or not a pair of Katakana-Hiragana scripts had a common vowel. In both tasks, a pair of Kana scripts (Katakana-Hiragana scripts) was tachistoscopically presented in the unilateral visual fields or the bilateral visual fields, with one letter presented in each visual field. A bilateral visual fields advantage (BFA) was found in both tasks, and its size did not differ between the tasks, suggesting that these findings did not support the Banich model. The purpose of Experiment 2 was to examine the effects of an imbalanced processing load between the hemispheres on the benefits of interhemispheric integration. To manipulate the balance of processing load across the hemispheres, the revised vowel matching (r-VM) task was developed by amending the VM task. The r-VM task was the same as the VM task in Experiment 1, except that a script with only a vowel sound was presented as the counterpart of a pair of Kana scripts. The 24 right-handed subjects were given the r-VM and NI tasks. The results showed that although a BFA was observed in the NI task, it was not in the r-VM task. These results suggested that the balance of processing load between hemispheres influences bilateral hemispheric processing.
Yoo, Seung-Woo; Lee, Inah
2017-01-01
How visual scene memory is processed differentially by the upstream structures of the hippocampus is largely unknown. We sought to dissociate functionally the lateral and medial subdivisions of the entorhinal cortex (LEC and MEC, respectively) in visual scene-dependent tasks by temporarily inactivating the LEC and MEC in the same rat. When the rat made spatial choices in a T-maze using visual scenes displayed on LCD screens, the inactivation of the MEC but not the LEC produced severe deficits in performance. However, when the task required the animal to push a jar or to dig in the sand in the jar using the same scene stimuli, the LEC but not the MEC became important. Our findings suggest that the entorhinal cortex is critical for scene-dependent mnemonic behavior, and the response modality may interact with a sensory modality to determine the involvement of the LEC and MEC in scene-based memory tasks. DOI: http://dx.doi.org/10.7554/eLife.21543.001 PMID:28169828
Interhemispheric interaction expands attentional capacity in an auditory selective attention task.
Scalf, Paige E; Banich, Marie T; Erickson, Andrew B
2009-04-01
Previous work from our laboratory indicates that interhemispheric interaction (IHI) functionally increases the attentional capacity available to support performance on visual tasks (Banich in The asymmetrical brain, pp 261-302, 2003). Because manipulations of both computational complexity and selection demand alter the benefits of IHI to task performance, we argue that IHI may be a general strategy for meeting increases in attentional demand. Other researchers, however, have suggested that the apparent benefits of IHI to attentional capacity are an epiphenomenon of the organization of the visual system (Fecteau and Enns in Neuropsychologia 43:1412-1428, 2005; Marsolek et al. in Neuropsychologia 40:1983-1999, 2002). In the current experiment, we investigate whether IHI increases attentional capacity outside the visual system by manipulating the selection demands of an auditory temporal pattern-matching task. We find that IHI expands attentional capacity in the auditory system. This suggests that the benefits of requiring IHI derive from a functional increase in attentional capacity rather than the organization of a specific sensory modality.
The detrimental influence of attention on time-to-contact perception.
Baurès, Robin; Balestra, Marianne; Rosito, Maxime; VanRullen, Rufin
2018-04-23
To what extent is attention necessary to estimate the time-to-contact (TTC) of a moving object, that is, to determine when the object will reach a specific point? While numerous studies have aimed at determining the visual cues and gaze strategy that allow this estimation, little is known about whether and how attention is involved or required in this process. To answer this question, we carried out an experiment in which the participants estimated the TTC of a moving ball, either alone (single-task condition) or concurrently with a Rapid Serial Visual Presentation task embedded within the ball (dual-task condition). The results showed that participants had a better estimation when attention was driven away from the TTC task. This suggests that drawing attention away from the TTC estimation limits cognitive interference, intrusion of knowledge, or expectations that significantly modify the visually based TTC estimation, and argues that limited attention favors correct TTC estimation.
The Fleeting Nature of Sex Differences in Spatial Ability.
ERIC Educational Resources Information Center
Alderton, David L.
Gender differences were examined on three computer-administered spatial processing tasks: (1) the Intercept task, requiring processing dynamic or moving figures; (2) the mental rotation test, employing rotated asymmetric polygons; and (3) the integrating details test, in which subjects performed a complex visual synthesis. Participants were about…
Productivity associated with visual status of computer users.
Daum, Kent M; Clore, Katherine A; Simms, Suzanne S; Vesely, Jon W; Wilczek, Dawn D; Spittle, Brian M; Good, Greg W
2004-01-01
The aim of this project is to examine the potential connection between the astigmatic refractive corrections of subjects using computers and their productivity and comfort. We hypothesize that improving the visual status of subjects using computers results in greater productivity, as well as improved visual comfort. Inclusion criteria required that subjects be 19 to 30 years of age and complete a vision examination before enrollment. Using a double-masked, placebo-controlled, randomized design, subjects completed three experimental tasks designed to assess the effects of refractive error on productivity (time to completion and the number of errors) at a computer. The tasks resembled those commonly undertaken by computer users and involved visual search tasks of: (1) counties and populations; (2) nonsense word search; and (3) a modified text-editing task. Estimates of the productivity effect on time to completion varied from a minimum of 2.5% up to 28.7% with 2 D of cylinder miscorrection. Assuming a conservative estimate of an overall 2.5% increase in productivity with appropriate astigmatic refractive correction, our data suggest a favorable cost-benefit ratio of at least 2.3 for the visual correction of an employee (total cost $268) with a salary of $25,000 per year. We conclude that astigmatic refractive error affected both productivity and visual comfort under the conditions of this experiment. These data also suggest a favorable cost-benefit ratio for employers who provide computer-specific eyewear to their employees.
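The reported cost-benefit figure can be reproduced from the numbers given in the abstract (a consistency check only, not the authors' full analysis):

```python
salary = 25_000            # annual salary in dollars
productivity_gain = 0.025  # conservative 2.5% productivity increase
cost = 268                 # total cost of the visual correction in dollars

annual_benefit = salary * productivity_gain  # 625 dollars per year
cost_benefit_ratio = annual_benefit / cost

print(round(cost_benefit_ratio, 1))  # → 2.3, matching the reported ratio
```

Note that at the upper end of the measured range (28.7%), the same arithmetic yields a far larger ratio, which is why 2.3 is described as a conservative lower bound.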
Frontal and parietal theta burst TMS impairs working memory for visual-spatial conjunctions
Morgan, Helen M.; Jackson, Margaret C.; van Koningsbruggen, Martijn G.; Shapiro, Kimron L.; Linden, David E.J.
2013-01-01
In tasks that selectively probe visual or spatial working memory (WM) frontal and posterior cortical areas show a segregation, with dorsal areas preferentially involved in spatial (e.g. location) WM and ventral areas in visual (e.g. object identity) WM. In a previous fMRI study [1], we showed that right parietal cortex (PC) was more active during WM for orientation, whereas left inferior frontal gyrus (IFG) was more active during colour WM. During WM for colour-orientation conjunctions, activity in these areas was intermediate to the level of activity for the single task preferred and non-preferred information. To examine whether these specialised areas play a critical role in coordinating visual and spatial WM to perform a conjunction task, we used theta burst transcranial magnetic stimulation (TMS) to induce a functional deficit. Compared to sham stimulation, TMS to right PC or left IFG selectively impaired WM for conjunctions but not single features. This is consistent with findings from visual search paradigms, in which frontal and parietal TMS selectively affects search for conjunctions compared to single features, and with combined TMS and functional imaging work suggesting that parietal and frontal regions are functionally coupled in tasks requiring integration of visual and spatial information. Our results thus elucidate mechanisms by which the brain coordinates spatially segregated processing streams and have implications beyond the field of working memory. PMID:22483548
Bouncing Ball with a Uniformly Varying Velocity in a Metronome Synchronization Task.
Huang, Yingyu; Gu, Li; Yang, Junkai; Wu, Xiang
2017-09-21
Sensorimotor synchronization (SMS), a fundamental human ability to coordinate movements with external rhythms, has long been thought to be modality specific. In the canonical metronome synchronization task that requires tapping a finger along with an isochronous sequence, a well-established finding is that synchronization is much more stable to an auditory sequence consisting of auditory tones than to a visual sequence consisting of visual flashes. However, recent studies have shown that periodically moving visual stimuli can substantially improve synchronization compared with visual flashes. In particular, synchronization to a visual bouncing ball with a uniformly varying velocity was found to be no less stable than synchronization to auditory tones. The current protocol describes the application of the bouncing ball with a uniformly varying velocity in a metronome synchronization task. The usage of the bouncing ball in sequences with different inter-onset intervals (IOI) is included. The representative results illustrate synchronization performance for the bouncing ball, as compared with the performances for auditory tones and visual flashes. Given its comparable synchronization performance to that of auditory tones, the bouncing ball is of particular importance for addressing the current research topic of whether modality-specific mechanisms underlie SMS.
Pleasant music improves visual attention in patients with unilateral neglect after stroke.
Chen, Mei-Ching; Tsai, Pei-Luen; Huang, Yu-Ting; Lin, Keh-Chung
2013-01-01
To investigate whether listening to pleasant music improves visual attention to and awareness of contralesional stimuli in patients with unilateral neglect after stroke. A within-subject design was used with 19 participants with unilateral neglect following a right hemisphere stroke. Participants were tested in three conditions (pleasant music, unpleasant music and white noise) within 1 week. All musical pieces were chosen by the participants. In each condition, participants were asked to complete three sub-tests of the Behavioural Inattention Test (the Star Cancellation Test, the Line Bisection Test and the Picture Scanning test) and a visual exploration task with everyday scenes. Eye movements in the visual exploration task were recorded simultaneously. Mood and arousal induced by different auditory stimuli were assessed using visual analogue scales, heart rate and galvanic skin response. Compared with unpleasant music and white noise, participants rated their moods as more positive and arousal as higher with pleasant music, but also showed significant improvement on all tasks and eye movement data, except the Line Bisection Test. The findings suggest that pleasant music can improve visual attention in patients with unilateral neglect after stroke. Additional research using randomized controlled trials is required to validate these findings.
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2013-02-01
A common search paradigm requires observers to search for a target among undivided spatial arrays of many items. Yet our visual environment is populated with items that are typically arranged within smaller (subdivided) spatial areas outlined by dividers (e.g., frames). It remains unclear how dividers impact visual search performance. In this study, we manipulated the presence and absence of frames and the number of frames subdividing search displays. Observers searched for a target O among Cs, a typically inefficient search task, and for a target C among Os, a typically efficient search. The results indicated that the presence of divider frames in a search display initially interferes with visual search tasks when targets are quickly detected (i.e., efficient search), leading to early interference; conversely, frames later facilitate visual search in tasks in which targets take longer to detect (i.e., inefficient search), leading to late facilitation. Such interference and facilitation appear only for conditions with a specific number of frames. Relative to previous studies of grouping (due to item proximity or similarity), these findings suggest that frame enclosures of multiple items may induce a grouping effect that influences search performance.
“Global” visual training and extent of transfer in amblyopic macaque monkeys
Kiorpes, Lynne; Mangal, Paul
2015-01-01
Perceptual learning is gaining acceptance as a potential treatment for amblyopia in adults and children beyond the critical period. Many perceptual learning paradigms result in very specific improvement that does not generalize beyond the training stimulus, closely related stimuli, or visual field location. To be of use in amblyopia, a less specific effect is needed. To address this problem, we designed a more general training paradigm intended to effect improvement in visual sensitivity across tasks and domains. We used a “global” visual stimulus, random dot motion direction discrimination with 6 training conditions, and tested for posttraining improvement on a motion detection task and 3 spatial domain tasks (contrast sensitivity, Vernier acuity, Glass pattern detection). Four amblyopic macaques practiced the motion discrimination with their amblyopic eye for at least 20,000 trials. All showed improvement, defined as a change of at least a factor of 2, on the trained task. In addition, all animals showed improvements in sensitivity on at least some of the transfer test conditions, mainly the motion detection task; transfer to the spatial domain was inconsistent but best at fine spatial scales. However, the improvement on the transfer tasks was largely not retained at long-term follow-up. Our generalized training approach is promising for amblyopia treatment, but sustaining improved performance may require additional intervention. PMID:26505868
Karimi, D; Mondor, T A; Mann, D D
2008-01-01
The operation of agricultural vehicles is a multitask activity that requires proper distribution of attentional resources. Human factors theories suggest that proper utilization of the operator's sensory capacities under such conditions can improve the operator's performance and reduce the operator's workload. Using a tractor driving simulator, this study investigated whether auditory cues can be used to improve performance of the operator of an agricultural vehicle. Steering of a vehicle was simulated in visual mode (where driving error was shown to the subject using a lightbar) and in auditory mode (where a pair of speakers were used to convey the driving error direction and/or magnitude). A secondary task was also introduced in order to simulate the monitoring of an attached machine. This task included monitoring of two identical displays, which were placed behind the simulator, and responding to them, when needed, using a joystick. This task was also implemented in auditory mode (in which a beep signaled the subject to push the proper button when a response was needed) and in visual mode (in which there was no beep and visual monitoring of the displays was necessary). Two levels of difficulty of the monitoring task were used. Deviation of the simulated vehicle from a desired straight line was used as the measure of performance in the steering task, and reaction time to the displays was used as the measure of performance in the monitoring task. Results of the experiments showed that steering performance was significantly better when steering was a visual task (driving errors were 40% to 60% of the driving errors in auditory mode), although subjective evaluations showed that auditory steering could be easier, depending on the implementation. Performance in the monitoring task was significantly better for the auditory implementation (reaction time was approximately 6 times shorter), and this result was strongly supported by subjective ratings. The majority of the subjects preferred the combination of visual mode for the steering task and auditory mode for the monitoring task.
Are children with low vision adapted to the visual environment in classrooms of mainstream schools?
Negiloni, Kalpa; Ramani, Krishna Kumar; Jeevitha, R; Kalva, Jayashree; Sudhir, Rachapalle Reddi
2018-01-01
Purpose: The study aimed to evaluate the classroom environment of children with low vision and provide recommendations to reduce visual stress, with a focus on mainstream schooling. Methods: The medical records of 110 children (5–17 years) seen in the low vision clinic during a 1-year period (2015) at a tertiary care center in south India were extracted. The visual function levels of the children were compared to the details of their classroom environment. The study evaluated and recommended the chalkboard visual task size and viewing distance required for children with mild, moderate, and severe visual impairment (VI). Results: The major causes of low vision based on the site of abnormality and etiology were retinal (80%) and hereditary (67%) conditions, respectively, in children with mild (n = 18), moderate (n = 72), and severe (n = 20) VI. Many of the children (72%) had difficulty viewing the chalkboard; common strategies for better visibility included copying from friends (47%) and moving closer to the chalkboard (42%). To view the chalkboard with reduced visual stress, a child with mild VI can be seated at a maximum distance of 4.3 m from the chalkboard, with the minimum size of the visual task (height of lowercase letter writing on the chalkboard) recommended to be 3 cm. For the 3/60–6/60 range, with a visual task size of 4 cm, the maximum viewing distance is recommended to be 85 cm to 1.7 m. Conclusion: Simple modifications of visual task size and seating arrangements can aid children with low vision by improving chalkboard visibility and reducing visual stress in mainstream schools. PMID:29380777
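The size-distance recommendations above follow from simple viewing geometry. As a rough check (my computation, not the authors' clinical derivation), the visual angle subtended by the recommended letter heights can be computed directly:

```python
import math

def visual_angle_arcmin(letter_height_m, viewing_distance_m):
    # Visual angle subtended by a letter of the given height viewed
    # from the given distance, in minutes of arc.
    return math.degrees(math.atan(letter_height_m / viewing_distance_m)) * 60

# Mild VI recommendation: a 3 cm letter viewed from at most 4.3 m
mild = visual_angle_arcmin(0.03, 4.3)    # ≈ 24 arcmin

# 3/60-6/60 range: a 4 cm letter viewed from the farthest seat, 1.7 m
severe = visual_angle_arcmin(0.04, 1.7)  # ≈ 81 arcmin

print(round(mild), round(severe))
```

For context, a standard 6/6 Snellen letter subtends about 5 arcmin, so both recommendations build in a substantial acuity reserve; the exact reserve factor the authors used is not stated in the abstract.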
The levels of perceptual processing and the neural correlates of increasing subjective visibility.
Binder, Marek; Gociewicz, Krzysztof; Windey, Bert; Koculak, Marcin; Finc, Karolina; Nikadon, Jan; Derda, Monika; Cleeremans, Axel
2017-10-01
According to the levels-of-processing hypothesis, transitions from unconscious to conscious perception may depend on stimulus processing level, with more gradual changes for low-level stimuli and more dichotomous changes for high-level stimuli. In an event-related fMRI study we explored this hypothesis using a visual backward masking procedure. Task requirements manipulated level of processing. Participants reported the magnitude of the target digit in the high-level task, its color in the low-level task, and rated subjective visibility of stimuli using the Perceptual Awareness Scale. Intermediate stimulus visibility was reported more frequently in the low-level task, confirming prior behavioral results. Visible targets recruited insulo-fronto-parietal regions in both tasks. Task effects were observed in visual areas, with higher activity in the low-level task across all visibility levels. Thus, the influence of level of processing on conscious perception may be mediated by attentional modulation of activity in regions representing features of consciously experienced stimuli. Copyright © 2017 Elsevier Inc. All rights reserved.
Auditory short-term memory activation during score reading.
Simoens, Veerle L; Tervaniemi, Mari
2013-01-01
Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as it is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion according to which during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.
Worden, Timothy A; Mendes, Matthew; Singh, Pratham; Vallis, Lori Ann
2016-10-01
Successful planning and execution of motor strategies while concurrently performing a cognitive task has been previously examined, but unfortunately the varied and numerous cognitive tasks studied have limited our fundamental understanding of how the central nervous system successfully integrates and executes these tasks simultaneously. To gain a better understanding of these mechanisms we used a set of cognitive tasks requiring similar central executive function processes and response outputs but requiring different perceptual mechanisms to perform the motor task. Thirteen healthy young adults (20.6 ± 1.6 years old) were instrumented with kinematic markers (60 Hz) and completed 5 practice trials, 10 single-task obstacle walking trials, and two 40-trial experimental blocks. Each block contained 20 seated (single-task) trials followed by 20 combined cognitive and obstacle-crossing trials (obstacle height 30% of lower leg length; dual-task). Blocks were randomly presented and included either an auditory Stroop task (AST; central interference only) or a visual Stroop task (VST; combined central and structural interference). Higher accuracy rates and shorter response times were observed for the VST versus AST single-task trials (p<0.05). Conversely, for the obstacle stepping performance, larger dual-task costs were observed for the VST as compared to the AST for clearance measures (the VST induced larger clearance values for both the leading and trailing feet), indicating that VST tasks caused greater interference for obstacle crossing (p<0.05). These results supported the hypothesis that structural interference has a larger effect on motor performance in a dual-task situation compared to cognitive tasks that pose interference at only the central processing stage. Copyright © 2016 Elsevier B.V. All rights reserved.
Recalibration of the Multisensory Temporal Window of Integration Results from Changing Task Demands
Mégevand, Pierre; Molholm, Sophie; Nayak, Ashabari; Foxe, John J.
2013-01-01
The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands. PMID:23951203
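The race-model violation mentioned above refers to Miller's inequality: under parallel independent channels, the cumulative probability of a redundant-target response by time t cannot exceed the sum of the unisensory cumulative probabilities. A minimal sketch, using simple empirical CDFs on synthetic data rather than the authors' exact procedure:

```python
def ecdf(rts, t):
    # Empirical cumulative probability of a response occurring by time t.
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violated(rt_audio, rt_visual, rt_av, times):
    # Miller's inequality: P(RT<=t | AV) <= P(RT<=t | A) + P(RT<=t | V).
    # A violation at any probed time indicates multisensory facilitation
    # beyond what independent parallel channels can produce.
    return any(
        ecdf(rt_av, t) > ecdf(rt_audio, t) + ecdf(rt_visual, t)
        for t in times
    )

# Synthetic RTs (ms): redundant-target responses much faster than either
# unisensory condition, so the inequality is violated at early times.
rt_a = [300, 320, 340, 360, 380]
rt_v = [310, 330, 350, 370, 390]
rt_av = [240, 250, 260, 270, 280]
print(race_model_violated(rt_a, rt_v, rt_av, times=range(200, 400, 10)))  # True
```

In practice the inequality is usually evaluated at fixed quantiles of the RT distributions rather than a fixed time grid, but the logic of the test is the same.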
The Onset and Time Course of Semantic Priming during Rapid Recognition of Visual Words
Hoedemaker, Renske S.; Gordon, Peter C.
2016-01-01
In two experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (Ocular Lexical Decision Task), participants performed a lexical decision task using eye-movement responses on a sequence of four words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a meta-linguistic judgment. For both tasks, survival analyses showed that the earliest-observable effect (Divergence Point or DP) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective rather than a prospective priming mechanism and are consistent with compound-cue models of semantic priming. PMID:28230394
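Illustrative only (the record reports ex-Gaussian distribution fits without giving the estimator): a moment-based ex-Gaussian fit is a common lightweight alternative to maximum likelihood for summarizing RT distributions by a Gaussian component (mu, sigma) plus an exponential tail (tau).

```python
import numpy as np

def fit_ex_gaussian(rts):
    """Moment-based ex-Gaussian fit: tau = s * (skew / 2)^(1/3),
    mu = mean - tau, sigma^2 = s^2 * (1 - (skew / 2)^(2/3))."""
    rts = np.asarray(rts, dtype=float)
    m, s = rts.mean(), rts.std(ddof=1)
    skew = np.mean(((rts - m) / s) ** 3)
    half_skew = max(skew, 1e-9) / 2.0          # guard against non-positive skew
    tau = s * half_skew ** (1.0 / 3.0)
    sigma2 = max(s ** 2 * (1.0 - half_skew ** (2.0 / 3.0)), 0.0)
    return {"mu": m - tau, "sigma": np.sqrt(sigma2), "tau": tau}
```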
High-level, but not low-level, motion perception is impaired in patients with schizophrenia.
Kandil, Farid I; Pedersen, Anya; Wehnes, Jana; Ohrmann, Patricia
2013-01-01
Smooth pursuit eye movements are compromised in patients with schizophrenia and their first-degree relatives. Although research has demonstrated that the motor components of smooth pursuit eye movements are intact, motion perception has been shown to be impaired. In particular, studies have consistently revealed deficits in performance on tasks specific to the high-order motion area V5 (middle temporal area, MT) in patients with schizophrenia. In contrast, data from low-level motion detectors in the primary visual cortex (V1) have been inconsistent. To differentiate between low-level and high-level visual motion processing, we administered a temporal-order judgment task for motion events and a motion-defined figure-ground segregation task to patients with schizophrenia and to healthy controls. Successful judgments in both tasks rely on the same low-level motion detectors in V1; however, the first task is further processed in the higher-order motion area MT in the magnocellular (dorsal) pathway, whereas the second task requires subsequent computations in the parvocellular (ventral) pathway in visual area V4 and the inferotemporal cortex (IT). These latter structures are presumed to be intact in schizophrenia. Patients with schizophrenia showed significantly impaired temporal resolution on the motion-based temporal-order judgment task but only mild impairment on the motion-based segregation task. These results imply that low-level motion detection in V1 is not, or is only slightly, compromised; furthermore, our data restrict the locus of the well-known deficit in motion detection to areas beyond the primary visual cortex.
Attentional demands of movement observation as tested by a dual task approach.
Saucedo Marquez, Cinthia M; Ceux, Tanja; Wenderoth, Nicole
2011-01-01
Movement observation (MO) has been shown to activate the motor cortex of the observer, as indicated by an increase of corticomotor excitability for muscles involved in the observed actions. Moreover, behavioral work has strongly suggested that this process occurs in a near-automatic manner. Here we further tested this proposal by applying transcranial magnetic stimulation (TMS) while subjects observed how an actor lifted objects of different weights as a single or a dual task. The secondary task was either an auditory discrimination task (Experiment 1) or a visual discrimination task (Experiment 2). In Experiment 1, we found that corticomotor excitability reflected the force requirements indicated in the observed movies (i.e., higher responses when the actor had to apply higher forces). Interestingly, this effect was found irrespective of whether MO was performed as a single or a dual task. By contrast, no such systematic modulations of corticomotor excitability were observed in Experiment 2, when visual distracters were present. We conclude that interference effects may arise when MO is performed while competing visual stimuli are present. However, when a secondary task is situated in a different modality, neural responses are in line with the notion that the observer's motor system responds in a near-automatic manner. This suggests that MO is a task with very low cognitive demands, which might make it a valuable supplement for rehabilitation training, particularly in the acute phase after the incident or in patients suffering from attention deficits. However, it is important to keep in mind that visual distracters may interfere with the neural response in M1.
Coding Local and Global Binary Visual Features Extracted From Video Sequences.
Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2015-11-01
Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual-words model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth-limited scenarios.
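As an aside, the bag-of-visual-words aggregation step referred to in this record can be sketched for binary descriptors, where word assignment naturally uses Hamming rather than Euclidean distance. A toy sketch, not the paper's coding scheme:

```python
import numpy as np

def bovw_histogram(descriptors, vocabulary):
    """Aggregate binary local descriptors (rows of 0/1 bits) into a global
    bag-of-visual-words feature: assign each descriptor to the nearest
    vocabulary word under Hamming distance, then L1-normalize the counts."""
    # Elementwise mismatch count: (N, 1, D) vs (1, V, D) -> (N, V) distances.
    dists = np.count_nonzero(descriptors[:, None, :] != vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()
```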
Visual and motion cueing in helicopter simulation
NASA Technical Reports Server (NTRS)
Bray, R. S.
1985-01-01
Early experience in fixed-cockpit simulators with limited fields of view demonstrated the basic difficulties of simulating helicopter flight at the level of subjective fidelity required for confident evaluation of vehicle characteristics. More recent programs, utilizing large-amplitude cockpit motion and a multiwindow visual-simulation system, have received a much higher degree of pilot acceptance. However, none of these simulations has presented critical visual-flight tasks that pilots accept as the full equivalent of flight. In this paper, the visual cues presented in the simulator are compared with those of flight in an attempt to identify deficiencies that contribute significantly to these assessments. For the low-amplitude maneuvering tasks normally associated with the hover mode, the unique motion capabilities of the Vertical Motion Simulator (VMS) at Ames Research Center permit a nearly full representation of vehicle motion. Especially appreciated in these tasks are the vertical-acceleration responses to collective control. For larger-amplitude maneuvering, motion fidelity must suffer diminution through direct attenuation, through high-pass ("washout") filtering of the computed cockpit accelerations, or both. Experiments were conducted in an attempt to determine the effects of these distortions on pilot performance of height-control tasks.
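The washout high-pass filtering of cockpit accelerations mentioned above can be sketched with a first-order filter. This is a deliberately simplified illustration; real motion-cueing algorithms are multi-axis and higher order, and the parameter values here are hypothetical:

```python
import numpy as np

def washout_highpass(accel, dt, tau=2.0, gain=0.5):
    """Classical washout cue: attenuate (gain < 1) and high-pass filter the
    computed cockpit acceleration so that sustained inputs decay and the
    motion base drifts back toward neutral. Discretized first-order high-pass."""
    y = np.zeros_like(accel, dtype=float)
    alpha = tau / (tau + dt)
    for k in range(1, len(accel)):
        y[k] = alpha * (y[k - 1] + gain * (accel[k] - accel[k - 1]))
    return y
```

A step in commanded acceleration produces an initial attenuated onset cue that then washes out exponentially with time constant tau.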
Farzandipour, Mehrdad; Meidani, Zahra; Riazi, Hossein; Sadeqi Jabali, Monireh
2018-09-01
There are various approaches to evaluating the usability of electronic medical record (EMR) systems, and user perspectives are an integral part of such evaluation. Usability evaluations contribute efficiently and effectively to user-centered design, support user tasks, and increase user satisfaction. This study determined the main usability requirements for EMRs by means of an end-user survey. A mixed-method strategy was conducted in three phases. A qualitative approach was employed to collect and formulate EMR usability requirements using the focus group method and the modified Delphi technique. The classic Delphi technique was then used to evaluate the proposed requirements among 380 end-users in Iran. The final list of EMR usability requirements was verified and comprised 163 requirements divided into nine groups. The highest rates of end-user agreement related to EMR visual clarity (3.65 ± 0.61), fault tolerance (3.58 ± 0.56), and suitability for learning (3.55 ± 0.54); the lowest end-user agreement was for auditory presentation (3.18 ± 0.69). This suggests that users' priorities in determining EMR usability, and their understanding of the importance of individual tasks and context characteristics, differ.
Towards a visual modeling approach to designing microelectromechanical system transducers
NASA Astrophysics Data System (ADS)
Dewey, Allen; Srinivasan, Vijay; Icoz, Evrim
1999-12-01
In this paper, we address initial design capture and system conceptualization of microelectromechanical system transducers based on visual modeling and design. Visual modeling frames the task of generating hardware description language (analog and digital) component models in a manner similar to the task of generating software programming language applications. A structured topological design strategy is employed, whereby microelectromechanical foundry cell libraries are utilized to facilitate the design process of exploring candidate cells (topologies), varying key aspects of the transduction for each topology, and determining which topology best satisfies design requirements. Coupled-energy microelectromechanical system characterizations at a circuit level of abstraction are presented that are based on branch constitutive relations and an overall system of simultaneous differential and algebraic equations. The resulting design methodology is called visual integrated-microelectromechanical VHDL-AMS interactive design (VHDL-AMS is the analog and mixed-signal extension of the VHDL hardware description language).
The Task-Relevant Attribute Representation Can Mediate the Simon Effect
Chen, Antao
2014-01-01
Researchers have previously suggested a working memory (WM) account of spatial codes, and on this basis the present study carried out three experiments to investigate how the task-relevant attribute representation (verbal or visual) in the typical Simon task affects the Simon effect. Experiment 1 compared the Simon effect between between- and within-category color conditions, which required subjects to discriminate between red and blue stimuli (presumed to be represented by verbal WM codes because the colors were easy and fast to name verbally) and between two similar green stimuli (presumed to be represented by visual WM codes because the colors were hard and time-consuming to name verbally), respectively. The results revealed a reliable Simon effect only in the between-category condition. Experiment 2 assessed the Simon effect by requiring subjects to discriminate between two different isosceles trapezoids (within-category shapes) and to discriminate an isosceles trapezoid from a rectangle (between-category shapes); the results replicated and extended the findings of Experiment 1. In Experiment 3, subjects performed both tasks from Experiment 1: in Experiment 3A, the between-category task preceded the within-category task, whereas in Experiment 3B the order was reversed. The results showed a reliable Simon effect when subjects represented the task-relevant stimulus attributes with verbal WM codes. In addition, the response time (RT) distribution analysis for both the between- and within-category conditions of Experiments 3A and 3B showed that the Simon effect decreased as RTs lengthened. Altogether, although the present results are consistent with the temporal coding account, we propose that the Simon effect also depends on the verbal WM representation of the task-relevant stimulus attribute. PMID:24618692
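The RT-distribution analysis reported in this record (a Simon effect that shrinks as RTs lengthen) is typically visualized with Vincentized delta plots; a minimal sketch under that assumption, with hypothetical inputs:

```python
import numpy as np

def delta_plot(rt_congruent, rt_incongruent, n_bins=5):
    """Vincentized delta plot: compute matching RT quantiles per condition and
    return the congruency effect (incongruent - congruent) per bin, paired
    with each bin's mean RT."""
    q = (np.arange(n_bins) + 0.5) / n_bins     # bin-center quantiles
    qc = np.quantile(rt_congruent, q)
    qi = np.quantile(rt_incongruent, q)
    return (qc + qi) / 2.0, qi - qc
```

A negative-going slope of the delta values across bins corresponds to the "decreased Simon effect with lengthened RTs" pattern described above.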
Zahabi, Maryam; Zhang, Wenjuan; Pankok, Carl; Lau, Mei Ying; Shirley, James; Kaber, David
2017-11-01
Many occupations require both physical exertion and cognitive task performance. Knowledge of any interaction between physical demands and the modality of cognitive task information presentation can provide a basis for optimising performance. This study examined the effect of physical exertion and modality of information presentation on pattern recognition and navigation-related information processing. Results indicated that males of equivalent high fitness, between the ages of 18 and 34, rely more on visual cues than on auditory or haptic cues for pattern recognition when exertion level is high. We found that navigation response time was shorter under low and medium exertion levels than under high-intensity exertion. Navigation accuracy was lower under high-level exertion than under medium and low levels. In general, findings indicated that use of the haptic modality for cognitive task cueing decreased the accuracy of pattern recognition responses. Practitioner Summary: An examination was conducted of the effect of physical exertion and information presentation modality on pattern recognition and navigation. In occupations requiring information presentation to workers who are simultaneously performing a physical task, the visual modality appears most effective under high-level exertion, while haptic cueing degrades performance.
NASA Technical Reports Server (NTRS)
Baron, S.; Lancraft, R.; Zacharias, G.
1980-01-01
The optimal control model (OCM) of the human operator is used to predict the effect of simulator characteristics on pilot performance and workload. The piloting task studied is helicopter hover. Among the simulator characteristics considered were (computer generated) visual display resolution, field of view and time delay.
Enhanced Pure-Tone Pitch Discrimination among Persons with Autism but not Asperger Syndrome
ERIC Educational Resources Information Center
Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A.; Mottron, Laurent
2010-01-01
Persons with Autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis.…
Heinz, Andrew J; Johnson, Jeffrey S
2017-01-01
Studies exploring the role of neural oscillations in cognition have revealed sustained increases in alpha-band power (ABP) during the delay period of verbal and visual working memory (VWM) tasks. There have been various proposals regarding the functional significance of such increases, including the inhibition of task-irrelevant cortical areas as well as the active retention of information in VWM. The present study examines the role of delay-period ABP in mediating the effects of interference arising from on-going visual processing during a concurrent VWM task. Specifically, we reasoned that, if set-size dependent increases in ABP represent the gating out of on-going task-irrelevant visual inputs, they should be predictive with respect to some modulation in visual evoked potentials resulting from a task-irrelevant delay period probe stimulus. In order to investigate this possibility, we recorded the electroencephalogram while subjects performed a change detection task requiring the retention of two or four novel shapes. On a portion of trials, a novel, task-irrelevant bilateral checkerboard probe was presented mid-way through the delay. Analyses focused on examining correlations between set-size dependent increases in ABP and changes in the magnitude of the P1, N1 and P3a components of the probe-evoked response and how such increases might be related to behavior. Results revealed that increased delay-period ABP was associated with changes in the amplitude of the N1 and P3a event-related potential (ERP) components, and with load-dependent changes in capacity when the probe was presented during the delay. We conclude that load-dependent increases in ABP likely play a role in supporting short-term retention by gating task-irrelevant sensory inputs and suppressing potential sources of disruptive interference.
PMID:28555099
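The alpha-band power (ABP) measure central to this record can be approximated from a single-channel epoch with a windowed periodogram. This is an illustrative estimator only; the study's actual time-frequency method is not specified in the abstract:

```python
import numpy as np

def alpha_band_power(epoch, fs, band=(8.0, 12.0)):
    """Mean squared spectral amplitude of a 1-D EEG epoch within the alpha
    band (8-12 Hz), using a Hann-windowed FFT periodogram."""
    windowed = epoch * np.hanning(len(epoch))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()
```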
Janczyk, Markus; Berryhill, Marian E
2014-04-01
The retro-cue effect (RCE) describes superior working memory performance for validly cued stimulus locations long after encoding has ended. Importantly, this happens with delays beyond the range of iconic memory. In general, the RCE is a stable phenomenon that emerges under varied stimulus configurations and timing parameters. We investigated its susceptibility to dual-task interference to determine the attentional requirements at the time point of cue onset and encoding. In Experiment 1, we compared single- with dual-task conditions. In Experiment 2, we borrowed from the psychological refractory period paradigm and compared conditions with high and low (dual-) task overlap. The secondary task was always binary tone discrimination requiring a manual response. Across both experiments, an RCE was found, but it was diminished in magnitude in the critical dual-task conditions. A previous study did not find evidence that sustained attention is required in the interval between cue offset and test. Our results apparently contradict these findings and point to a critical time period around cue onset and briefly thereafter during which attention is required.
PMID:24452383
Timing of saccadic eye movements during visual search for multiple targets
Wu, Chia-Chien; Kowler, Eileen
2013-01-01
Visual search requires sequences of saccades. Many studies have focused on spatial aspects of saccadic decisions, while relatively few (e.g., Hooge & Erkelens, 1999) consider timing. We studied saccadic timing during search for targets (thin circles containing tilted lines) located among nontargets (thicker circles). Tasks required either (a) estimating the mean tilt of the lines, or (b) looking at targets without a concurrent psychophysical task. The visual similarity of targets and nontargets affected both the probability of hitting a target and the saccade rate in both tasks. Saccadic timing also depended on immediate conditions, specifically, (a) the type of currently fixated location (dwell time was longer on targets than nontargets), (b) the type of goal (dwell time was shorter prior to saccades that hit targets), and (c) the ordinal position of the saccade in the sequence. The results show that timing decisions take into account the difficulty of finding targets, as well as the cost of delays. Timing strategies may be a compromise between the attempt to find and locate targets, or other suitable landing locations, using eccentric vision (at the cost of increased dwell times) versus a strategy of exploring less selectively at a rapid rate. PMID:24049045
Brébion, Gildas; Bressan, Rodrigo A; Pilowsky, Lyn S; David, Anthony S
2011-05-01
Previous work has suggested that decrements in both processing speed and working memory span play a role in the memory impairment observed in patients with schizophrenia. We undertook a study to examine the effects of these two factors simultaneously. A sample of 49 patients with schizophrenia and 43 healthy controls underwent a battery of verbal and visual memory tasks. Superficial and deep encoding memory measures were tallied. We conducted regression analyses on the various memory measures, using processing speed and working memory span as independent variables. In the patient group, processing speed was a significant predictor of superficial and deep memory measures in verbal and visual memory. Working memory span was an additional significant predictor of the deep memory measures only. Regression analyses involving all participants revealed that the effect of diagnosis on all the deep encoding memory measures was reduced to non-significance when processing speed was entered in the regression. Decreased processing speed is involved in the verbal and visual memory deficits of patients, whether the task requires superficial or deep encoding. Working memory is involved only insofar as the task requires a certain amount of effort.
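The regression analyses described here (memory scores regressed on processing speed and working-memory span) amount to ordinary least squares with multiple predictors. A bare-bones sketch with illustrative variable names, not the study's data:

```python
import numpy as np

def ols_coefficients(y, *predictors):
    """Ordinary least squares with an intercept: returns the fitted
    coefficients [intercept, b1, b2, ...] for y ~ predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```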
Implications of differences of echoic and iconic memory for the design of multimodal displays
NASA Astrophysics Data System (ADS)
Glaser, Daniel Shields
It has been well documented that dual-task performance is more accurate when each task is based on a different sensory modality. It is also well documented that the memory for each sense has an unequal duration, particularly visual (iconic) and auditory (echoic) sensory memory. In this dissertation I address whether differences in sensory memory (e.g., iconic vs. echoic) duration have implications for the design of a multimodal display. Since echoic memory persists for seconds, in contrast to iconic memory, which persists only for milliseconds, one of my hypotheses was that in a visual-auditory dual-task condition, performance will be better if the visual task is completed before the auditory task than vice versa. In Experiment 1, I investigated whether the ability to recall multimodal stimuli is affected by recall order, with each mode being responded to separately. In Experiment 2, I investigated the effects of stimulus order and recall order on the ability to recall information from a multimodal presentation. In Experiment 3, I investigated the effect of presentation order using a more realistic task. In Experiment 4, I investigated whether manipulating the presentation order of stimuli of different modalities improves humans' ability to combine the information from the two modalities in order to make decisions based on pre-learned rules. As hypothesized, accuracy was greater when visual stimuli were responded to first and auditory stimuli second. Also as hypothesized, performance was improved by not presenting both sequences at the same time, limiting the perceptual load. Contrary to my expectations, overall performance was better when a visual sequence was presented before the audio sequence. Though presenting a visual sequence prior to an auditory sequence lengthens the visual retention interval, it also provides time for visual information to be recoded to a more robust form without disruption.
Experiment 4 demonstrated that decision making requiring the integration of visual and auditory information is enhanced by reducing workload and promoting a strategic use of echoic memory. A framework for predicting Experiment 1-4 results is proposed and evaluated.
Event-Related fMRI of Category Learning: Differences in Classification and Feedback Networks
ERIC Educational Resources Information Center
Little, Deborah M.; Shin, Silvia S.; Sisco, Shannon M.; Thulborn, Keith R.
2006-01-01
Eighteen healthy young adults underwent event-related (ER) functional magnetic resonance imaging (fMRI) of the brain while performing a visual category learning task. The specific category learning task required subjects to extract the rules that guide classification of quasi-random patterns of dots into categories. Following each classification…
Effects of Hearing Status and Sign Language Use on Working Memory
ERIC Educational Resources Information Center
Marschark, Marc; Sarchet, Thomastine; Trani, Alexandra
2016-01-01
Deaf individuals have been found to score lower than hearing individuals across a variety of memory tasks involving both verbal and nonverbal stimuli, particularly those requiring retention of serial order. Deaf individuals who are native signers, meanwhile, have been found to score higher on visual-spatial memory tasks than on verbal-sequential…
Sex Discrimination and Cerebral Bias: Implications for the Reading Curriculum.
ERIC Educational Resources Information Center
Keenan, Donna; Smith, Michael
1983-01-01
Reviews research supporting the concept that girls usually outperform boys on tasks requiring verbal skills and that boys outperform girls on tasks using visual and spatial skills. Offers an explanation for this situation based on left brain/right brain research. Concludes that the curriculum in American schools is clearly left-brain biased. (FL)
Visual search deficits in amblyopia.
Tsirlin, Inna; Colpa, Linda; Goltz, Herbert C; Wong, Agnes M F
2018-04-01
Amblyopia is a neurodevelopmental disorder defined as a reduction in visual acuity that cannot be corrected by optical means. It has been associated with low-level visual deficits. However, research has demonstrated a link between amblyopia and visual attention deficits in counting, tracking, and identifying objects. Visual search is a useful tool for assessing visual attention but has not been well studied in amblyopia. Here, we assessed the extent of visual search deficits in amblyopia using feature and conjunction search tasks. We compared the performance of participants with amblyopia (n = 10) to that of controls (n = 12) on both feature and conjunction search tasks using Gabor patch stimuli, varying spatial bandwidth and orientation. To account for the low-level deficits inherent in amblyopia, we measured individual contrast and crowding thresholds and monitored eye movements. The display elements were then presented at suprathreshold levels to ensure that visibility was equalized across groups. There was no performance difference between groups on feature search, indicating that our experimental design successfully controlled for low-level amblyopic deficits. In contrast, during conjunction search, median reaction times and reaction time slopes were significantly larger in participants with amblyopia than in controls. Amblyopia differentially affects performance on conjunction visual search, a more difficult task that requires feature binding and possibly the involvement of higher-level attention processes. Deficits in visual search may affect day-to-day functioning in people with amblyopia.
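The "reaction time slopes" reported for conjunction search are the per-item cost of adding display elements, obtained by regressing RT on set size. A minimal sketch with hypothetical numbers:

```python
import numpy as np

def search_slope(set_sizes, median_rts):
    """Search efficiency as the slope (ms per item) of a straight line fit to
    median RT versus display set size; steeper slopes suggest a more serial,
    attention-demanding search."""
    slope, intercept = np.polyfit(set_sizes, median_rts, 1)
    return slope, intercept
```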
Influence of Coactors on Saccadic and Manual Responses
Niehorster, Diederick C.; Jarodzka, Halszka; Holmqvist, Kenneth
2017-01-01
Two experiments were conducted to investigate the effects of coaction on saccadic and manual responses. Participants performed the experiments either in a solitary condition or in a group of coactors who performed the same tasks at the same time. In Experiment 1, participants completed a pro- and antisaccade task where they were required to make saccades towards (prosaccades) or away (antisaccades) from a peripheral visual stimulus. In Experiment 2, participants performed a visual discrimination task that required both making a saccade towards a peripheral stimulus and making a manual response in reaction to the stimulus’s orientation. The results showed that performance of stimulus-driven responses was independent of the social context, while volitionally controlled responses were delayed by the presence of coactors. These findings are in line with studies assessing the effect of attentional load on saccadic control during dual-task paradigms. In particular, antisaccades – but not prosaccades – were influenced by the type of social context. Additionally, the number of coactors present in the group had a moderating effect on both saccadic and manual responses. The results support an attentional view of social influences. PMID:28321288
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean; Burtner, Edwin R.; Cook, Kristin A.
This course will introduce the field of Visual Analytics to HCI researchers and practitioners, highlighting the contributions they can make to this field. Topics will include a definition of visual analytics along with examples of current systems, types of tasks and end users, issues in defining user requirements, design of visualizations and interactions, guidelines and heuristics, the current state of user-centered evaluations, and metrics for evaluation. We encourage designers, HCI researchers, and HCI practitioners to attend to learn how their skills can contribute to advancing the state of the art of visual analytics.
The contributions of visual and central attention to visual working memory.
Souza, Alessandra S; Oberauer, Klaus
2017-10-01
We investigated the role of two kinds of attention, visual and central, for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention, visual or central, was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention: we combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: Whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.
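The dual-task cost compared across the two distractor tasks is, in essence, performance under dual-task conditions expressed relative to a single-task baseline. A minimal sketch with hypothetical numbers (the error values below are invented, not the study's data):

```python
# Proportional dual-task cost: how much performance declines from a
# single-task baseline when a distractor task is added. Numbers are
# illustrative only.

def dual_task_cost(single, dual):
    """Proportional performance decline from single- to dual-task."""
    return (dual - single) / single

# Hypothetical recall error (degrees on a colour wheel): the central-
# attention distractor produces a large cost, the visual one almost none.
central_cost = dual_task_cost(single=20.0, dual=32.0)   # 0.60
visual_cost = dual_task_cost(single=20.0, dual=21.0)    # 0.05
```

A pattern like this (large central cost, negligible visual cost) for WM recall, reversed for MOT, is the kind of double dissociation the abstract describes.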
Visual function and fitness to drive.
Kotecha, Aachal; Spratt, Alexander; Viswanathan, Ananth
2008-01-01
Driving is recognized to be a visually intensive task and accordingly there is a legal minimum standard of vision required for all motorists. The purpose of this paper is to review the current United Kingdom (UK) visual requirements for driving and discuss the evidence base behind these legal rules. The role of newer, alternative tests of visual function that may be better indicators of driving safety will also be considered. Finally, the implications of ageing on driving ability are discussed. A search of Medline and PubMed databases was performed using the following keywords: driving, vision, visual function, fitness to drive and ageing. In addition, papers from the Department of Transport website and UK Royal College of Ophthalmologists guidelines were studied. Current UK visual standards for driving are based upon historical concepts, but recent advances in technology have brought about more sophisticated methods for assessing the status of the binocular visual field and examining visual attention. These tests appear to be better predictors of driving performance. Further work is required to establish whether these newer tests should be incorporated in the current UK visual standards when examining an individual's fitness to drive.
Interference effects of vocalization on dual task performance
NASA Astrophysics Data System (ADS)
Owens, J. M.; Goodman, L. S.; Pianka, M. J.
1984-09-01
Voice command and control systems have been proposed as a potential means of off-loading the typically overburdened visual information processing system. However, prior to introducing novel human-machine interfacing technologies in high workload environments, consideration must be given to the integration of the new technologies within existing task structures to ensure that no new sources of workload or interference are systematically introduced. This study examined the use of voice interactive systems technology in the joint performance of two cognitive information processing tasks requiring continuous memory and choice reaction wherein a basis for intertask interference might be expected. Stimuli for the continuous memory task were presented aurally and either voice or keyboard responding was required in the choice reaction task. Performance was significantly degraded in each task when voice responding was required in the choice reaction time task. Performance degradation was evident in higher error scores for both the choice reaction and continuous memory tasks. Performance decrements observed under conditions of high intertask stimulus similarity were not statistically significant. The results signal the need to consider further the task requirements for verbal short-term memory when applying speech technology in multitask environments.
Baugh, Lee A; Lawrence, Jane M; Marotta, Jonathan J
2011-10-01
Previous literature has reported a wide range of anatomical correlates when participants are required to perform a visuomotor adaptation task. However, traditional adaptation tasks suffer a number of inherent limitations that may, in part, give rise to this variability. For instance, the sparse visual environment does not map well onto conditions in which a visuomotor transformation would normally be required in everyday life. To further clarify these neural underpinnings, functional magnetic resonance imaging (fMRI) was performed on 17 (6M, age range 20-45 years old; mean age=26) naive participants performing a viewing window task in which a visuomotor transformation was created by varying the relationship between the participant's movement and the resultant movement of the viewing window. The viewing window task more naturally replicates scenarios in which haptic and visual information would be combined to achieve a higher-level goal. Even though activity related to visuomotor adaptation was found within previously reported regions of the parietal lobes, frontal lobes, and occipital lobes, novel activation patterns were observed within the claustrum - a region well-established as a multi-modal convergence zone. These results confirm the diversity in the number and location of neurological systems recruited to perform a required visuomotor adaptation, and provide the first evidence of participation of the claustrum in overcoming a visuomotor transformation. Copyright © 2011 Elsevier B.V. All rights reserved.
Künstler, E C S; Finke, K; Günther, A; Klingner, C; Witte, O; Bublak, P
2018-01-01
Dual tasking, or the simultaneous execution of two continuous tasks, is frequently associated with a performance decline that can be explained within a capacity sharing framework. In this study, we assessed the effects of a concurrent motor task on the efficiency of visual information uptake based on the 'theory of visual attention' (TVA). TVA provides parameter estimates reflecting distinct components of visual processing capacity: perceptual threshold, visual processing speed, and visual short-term memory (VSTM) storage capacity. Moreover, goodness-of-fit values and bootstrapping estimates were derived to test whether the TVA-model is validly applicable also under dual task conditions, and whether the robustness of parameter estimates is comparable in single- and dual-task conditions. Twenty-four middle-aged to older subjects performed a continuous tapping task, and a visual processing task (whole report of briefly presented letter arrays) under both single- and dual-task conditions. Results suggest a decline of both visual processing capacity and VSTM storage capacity under dual-task conditions, while the perceptual threshold remained unaffected by a concurrent motor task. In addition, goodness-of-fit values and bootstrapping estimates support the notion that participants processed the visual task in a qualitatively comparable, although quantitatively less efficient way under dual-task conditions. The results support a capacity sharing account of motor-cognitive dual tasking and suggest that even performing a relatively simple motor task relies on central attentional capacity that is necessary for efficient visual information uptake.
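TVA models whole report as a parallel exponential race limited by VSTM capacity. The sketch below simulates that idea under simplifying assumptions (equal attentional weights, a fixed perceptual threshold t0, illustrative parameter values); it is a toy forward model, not the fitting procedure used in the study:

```python
import random

# Toy simulation of TVA whole report: n letters race in parallel at total
# processing rate C (items/s); only letters finishing within the effective
# exposure (duration - threshold t0) can enter VSTM, which holds at most
# K items. Parameter values are illustrative only.

def expected_score(n_letters, duration, C=40.0, t0=0.015, K=3.5,
                   n_sims=5000, seed=1):
    """Mean number of letters reported per trial, by Monte Carlo."""
    rng = random.Random(seed)
    t_eff = max(duration - t0, 0.0)   # time usable for encoding
    rate = C / n_letters              # per-letter rate (equal weights)
    total = 0
    for _ in range(n_sims):
        finish = sorted(rng.expovariate(rate) for _ in range(n_letters))
        # fractional capacity K: round probabilistically on each trial
        k = int(K) + (1 if rng.random() < K - int(K) else 0)
        total += sum(1 for t in finish[:k] if t < t_eff)
    return total / n_sims
```

In such a model, longer exposures yield more reported letters, saturating near K; a dual-task decline in C or K would lower the whole curve, while an unchanged t0 leaves its onset in place, matching the pattern the abstract reports.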
Methodology development for evaluation of selective-fidelity rotorcraft simulation
NASA Technical Reports Server (NTRS)
Lewis, William D.; Schrage, D. P.; Prasad, J. V. R.; Wolfe, Daniel
1992-01-01
This paper addressed the initial step toward the goal of establishing performance and handling qualities acceptance criteria for realtime rotorcraft simulators through a planned research effort to quantify the system capabilities of 'selective fidelity' simulators. Within this framework the simulator is then classified based on the required task. The simulator is evaluated by separating the various subsystems (visual, motion, etc.) and applying corresponding fidelity constants based on the specific task. This methodology not only provides an assessment technique, but also provides a technique to determine the required levels of subsystem fidelity for a specific task.
Memory-based attention capture when multiple items are maintained in visual working memory.
Hollingworth, Andrew; Beck, Valerie M
2016-07-01
Efficient visual search requires that attention is guided strategically to relevant objects, and most theories of visual search implement this function by means of a target template maintained in visual working memory (VWM). However, there is currently debate over the architecture of VWM-based attentional guidance. We contrasted a single-item-template hypothesis with a multiple-item-template hypothesis, which differ in their claims about structural limits on the interaction between VWM representations and perceptual selection. Recent evidence from van Moorselaar, Theeuwes, and Olivers (2014) indicated that memory-based capture during search, an index of VWM guidance, is not observed when memory set size is increased beyond a single item, suggesting that multiple items in VWM do not guide attention. In the present study, we maximized the overlap between multiple colors held in VWM and the colors of distractors in a search array. Reliable capture was observed when 2 colors were held in VWM and both colors were present as distractors, using both the original van Moorselaar et al. singleton-shape search task and a search task that required focal attention to array elements (gap location in outline square stimuli). In the latter task, memory-based capture was consistent with the simultaneous guidance of attention by multiple VWM representations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Bertone, Armando; Mottron, Laurent; Jelenic, Patricia; Faubert, Jocelyn
2005-10-01
Visuo-perceptual processing in autism is characterized by intact or enhanced performance on static spatial tasks and inferior performance on dynamic tasks, suggesting a deficit of dorsal visual stream processing in autism. However, previous findings by Bertone et al. indicate that neuro-integrative mechanisms used to detect complex motion, rather than motion perception per se, may be impaired in autism. We present here the first demonstration of concurrent enhanced and decreased performance in autism on the same visuo-spatial static task, wherein the only factor dichotomizing performance was the neural complexity required to discriminate grating orientation. The ability of persons with autism was found to be superior for identifying the orientation of simple, luminance-defined (or first-order) gratings but inferior for complex, texture-defined (or second-order) gratings. Using a flicker contrast sensitivity task, we demonstrated that this finding is probably not due to abnormal information processing at a sub-cortical level (magnocellular and parvocellular functioning). Together, these findings are interpreted as a clear indication of altered low-level perceptual information processing in autism, and confirm that the deficits and assets observed in autistic visual perception are contingent on the complexity of the neural network required to process a given type of visual stimulus. We suggest that atypical neural connectivity, resulting in enhanced lateral inhibition, may account for both enhanced and decreased low-level information processing in autism.
Kornrumpf, Benthe; Sommer, Werner
2015-09-01
Due to capacity limitation, visual attention must be focused on a limited region of the visual field. Nevertheless, it is assumed that the size of that region may vary with task demands. We aimed to obtain direct evidence for the modulation of visuospatial attention as a function of foveal and parafoveal task load. Participants were required to fixate the center word of word triplets. In separate task blocks, either just the fixated word or both the fixated and the parafoveal word to the right should be semantically classified. The spatiotemporal distribution of attention was assessed with task-irrelevant probes flashed briefly at center or parafoveal positions, during or in between word presentation trials. The N1 component of the ERP elicited by intertrial probes at possible target positions increased with task demands within a block. These results suggest the recruitment of additional attentional resources rather than a redistribution of a fixed resource pool, which persists across trials. © 2015 Society for Psychophysiological Research.
It's about time: revisiting temporal processing deficits in dyslexia.
Casini, Laurence; Pech-Georgel, Catherine; Ziegler, Johannes C
2018-03-01
Temporal processing in French children with dyslexia was evaluated in three tasks: a word identification task requiring implicit temporal processing, and two explicit temporal bisection tasks, one in the auditory and one in the visual modality. Normally developing children matched on chronological age and reading level served as a control group. Children with dyslexia exhibited robust deficits in temporal tasks whether they were explicit or implicit and whether they involved the auditory or the visual modality. First, they presented larger perceptual variability when performing temporal tasks, whereas they showed no such difficulties when performing the same task on a non-temporal dimension (intensity). This dissociation suggests that their difficulties were specific to temporal processing and could not be attributed to lapses of attention, reduced alertness, faulty anchoring, or overall noisy processing. In the framework of cognitive models of time perception, these data point to a dysfunction of the 'internal clock' of dyslexic children. These results are broadly compatible with the recent temporal sampling theory of dyslexia. © 2017 John Wiley & Sons Ltd.
Pomplun, M; Reingold, E M; Shen, J
2001-09-01
In three experiments, participants' visual span was measured in a comparative visual search task in which they had to detect a local match or mismatch between two displays presented side by side. Experiment 1 manipulated the difficulty of the comparative visual search task by contrasting a mismatch detection task with a substantially more difficult match detection task. In Experiment 2, participants were tested in a single-task condition involving only the visual task and a dual-task condition in which they concurrently performed an auditory task. Finally, in Experiment 3, participants performed two dual-task conditions, which differed in the difficulty of the concurrent auditory task. Both the comparative search task difficulty (Experiment 1) and the divided attention manipulation (Experiments 2 and 3) produced strong effects on visual span size.
Effects of VR system fidelity on analyzing isosurface visualization of volume datasets.
Laha, Bireswar; Bowman, Doug A; Socha, John J
2014-04-01
Volume visualization is an important technique for analyzing datasets from a variety of different scientific domains. Volume data analysis is inherently difficult because volumes are three-dimensional, dense, and unfamiliar, requiring scientists to precisely control the viewpoint and to make precise spatial judgments. Researchers have proposed that more immersive (higher fidelity) VR systems might improve task performance with volume datasets, and significant results tied to different components of display fidelity have been reported. However, more information is needed to generalize these results to different task types, domains, and rendering styles. We visualized isosurfaces extracted from synchrotron microscopic computed tomography (SR-μCT) scans of beetles, in a CAVE-like display. We ran a controlled experiment evaluating the effects of three components of system fidelity (field of regard, stereoscopy, and head tracking) on a variety of abstract task categories that are applicable to various scientific domains, and also compared our results with those from our prior experiment using 3D texture-based rendering. We report many significant findings. For example, for search and spatial judgment tasks with isosurface visualization, a stereoscopic display provides better performance, but for tasks with 3D texture-based rendering, displays with higher field of regard were more effective, independent of the levels of the other display components. We also found that systems with high field of regard and head tracking improve performance in spatial judgment tasks. Our results extend existing knowledge and produce new guidelines for designing VR systems to improve the effectiveness of volume data analysis.
Functional size of human visual area V1: a neural correlate of top-down attention.
Verghese, Ashika; Kolbe, Scott C; Anderson, Andrew J; Egan, Gary F; Vidyasagar, Trichur R
2014-06-01
Heavy demands are placed on the brain's attentional capacity when selecting a target item in a cluttered visual scene, or when reading. It is widely accepted that such attentional selection is mediated by top-down signals from higher cortical areas to early visual areas such as the primary visual cortex (V1). Further, it has also been reported that there is considerable variation in the surface area of V1. This variation may impact on either the number or specificity of attentional feedback signals and, thereby, the efficiency of attentional mechanisms. In this study, we investigated whether individual differences between humans performing attention-demanding tasks can be related to the functional area of V1. We found that those with a larger representation in V1 of the central 12° of the visual field as measured using BOLD signals from fMRI were able to perform a serial search task at a faster rate. In line with recent suggestions of the vital role of visuo-spatial attention in reading, the speed of reading showed a strong positive correlation with the speed of visual search, although it showed little correlation with the size of V1. The results support the idea that the functional size of the primary visual cortex is an important determinant of the efficiency of selective spatial attention for simple tasks, and that the attentional processing required for complex tasks like reading is to a large extent determined by other brain areas and inter-areal connections. Copyright © 2014 Elsevier Inc. All rights reserved.
Advanced automation for in-space vehicle processing
NASA Technical Reports Server (NTRS)
Sklar, Michael; Wegerif, D.
1990-01-01
The primary objective of this 3-year planned study is to assure that the fully evolved Space Station Freedom (SSF) can support automated processing of exploratory mission vehicles. Current study assessments show that required extravehicular activity (EVA) and to some extent intravehicular activity (IVA) manpower requirements for required processing tasks far exceed the available manpower. Furthermore, many processing tasks are either hazardous operations or they exceed EVA capability. Thus, automation is essential for SSF transportation node functionality. Here, advanced automation represents the replacement of human performed tasks beyond the planned baseline automated tasks. Both physical tasks such as manipulation, assembly and actuation, and cognitive tasks such as visual inspection, monitoring and diagnosis, and task planning are considered. During this first year of activity both the Phobos/Gateway Mars Expedition and Lunar Evolution missions proposed by the Office of Exploration have been evaluated. A methodology for choosing optimal tasks to be automated has been developed. Processing tasks for both missions have been ranked on the basis of automation potential. The underlying concept in evaluating and describing processing tasks has been the use of a common set of 'Primitive' task descriptions. Primitive or standard tasks have been developed both for manual or crew processing and automated machine processing.
Alvarez, George A; Gill, Jonathan; Cavanagh, Patrick
2012-01-01
Previous studies have shown independent attentional selection of targets in the left and right visual hemifields during attentional tracking (Alvarez & Cavanagh, 2005) but not during a visual search (Luck, Hillyard, Mangun, & Gazzaniga, 1989). Here we tested whether multifocal spatial attention is the critical process that operates independently in the two hemifields. It is explicitly required in tracking (attend to a subset of object locations, suppress the others) but not in the standard visual search task (where all items are potential targets). We used a modified visual search task in which observers searched for a target within a subset of display items, where the subset was selected based on location (Experiments 1 and 3A) or based on a salient feature difference (Experiments 2 and 3B). The results show hemifield independence in this subset visual search task with location-based selection but not with feature-based selection; this effect cannot be explained by general difficulty (Experiment 4). Combined, these findings suggest that hemifield independence is a signature of multifocal spatial attention and highlight the need for cognitive and neural theories of attention to account for anatomical constraints on selection mechanisms. PMID:22637710
Poon, Cynthia; Chin-Cottongim, Lisa G.; Coombes, Stephen A.; Corcos, Daniel M.
2012-01-01
It is well established that the prefrontal cortex is involved during memory-guided tasks whereas visually guided tasks are controlled in part by a frontal-parietal network. However, the nature of the transition from visually guided to memory-guided force control is not as well established. As such, this study examines the spatiotemporal pattern of brain activity that occurs during the transition from visually guided to memory-guided force control. We measured 128-channel scalp electroencephalography (EEG) in healthy individuals while they performed a grip force task. After visual feedback was removed, the first significant change in event-related activity occurred in the left central region by 300 ms, followed by changes in prefrontal cortex by 400 ms. Low-resolution electromagnetic tomography (LORETA) was used to localize the strongest activity to the left ventral premotor cortex and ventral prefrontal cortex. A second experiment altered visual feedback gain but did not require memory. In contrast to memory-guided force control, altering visual feedback gain did not lead to early changes in the left central and midline prefrontal regions. Decreasing the spatial amplitude of visual feedback did lead to changes in the midline central region by 300 ms, followed by changes in occipital activity by 400 ms. The findings show that subjects rely on sensorimotor memory processes involving left ventral premotor cortex and ventral prefrontal cortex after the immediate transition from visually guided to memory-guided force control. PMID:22696535
Microcontroller based fibre-optic visual presentation system for multisensory neuroimaging.
Kurniawan, Veldri; Klemen, Jane; Chambers, Christopher D
2011-10-30
Presenting visual stimuli in physical 3D space during fMRI experiments carries significant technical challenges. Certain types of multisensory visuotactile experiments and visuomotor tasks require presentation of visual stimuli in peripersonal space, which cannot be accommodated by ordinary projection screens or binocular goggles. However, light points produced by a group of LEDs can be transmitted through fibre-optic cables and positioned anywhere inside the MRI scanner. Here we describe the design and implementation of a microcontroller-based programmable digital device for controlling fibre-optically transmitted LED lights from a PC. The main feature of this device is the ability to independently control the colour, brightness, and timing of each LED. Moreover, the device was designed in a modular and extensible way, which enables easy adaptation for various experimental paradigms. The device was tested and validated in three fMRI experiments involving basic visual perception, a simple colour discrimination task, and a blocked multisensory visuo-tactile task. The results revealed significant lateralized activation in occipital cortex of all participants, a reliable response in ventral occipital areas to colour stimuli elicited by the device, and strong activations in multisensory brain regions in the multisensory task. Overall, these findings confirm the suitability of this device for presenting complex fibre-optic visual and cross-modal stimuli inside the scanner. Copyright © 2011 Elsevier B.V. All rights reserved.
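The abstract does not specify the device's firmware protocol. Purely to illustrate what independent per-LED control of colour, brightness, and timing could look like from the PC side, here is a hypothetical command encoding; the field sizes, framing bytes, and checksum scheme are all invented for this sketch:

```python
# Hypothetical PC-side encoding of a "set LED" command. The real device's
# protocol is not described in the abstract; this packet layout (STX,
# 7-byte payload, modulo-256 checksum, ETX) is invented for illustration.

def encode_led_command(led_id, r, g, b, brightness, onset_ms):
    """Pack one set-LED command into a framed byte string."""
    for v in (led_id, r, g, b, brightness):
        if not 0 <= v <= 255:
            raise ValueError("8-bit fields must be 0-255")
    payload = bytes([led_id, r, g, b, brightness]) + onset_ms.to_bytes(2, "big")
    checksum = sum(payload) % 256
    return b"\x02" + payload + bytes([checksum]) + b"\x03"  # STX ... ETX

# Turn LED 5 red at half brightness, 250 ms after trial onset.
packet = encode_led_command(led_id=5, r=255, g=0, b=0,
                            brightness=128, onset_ms=250)
```

Sending one such packet per LED over a serial link would give the independent colour, brightness, and timing control the device is described as providing.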
Similarity relations in visual search predict rapid visual categorization
Mohan, Krithika; Arun, S. P.
2012-01-01
How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947
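The core claim, that categorization time is predictable from an item's similarity to members within and outside its category, can be sketched with a toy linear read-out. The similarity values, the `base_rt`/`gain` parameters, and the linear form below are all illustrative, not the study's fitted model:

```python
# Toy model: categorization is slower when an item resembles the opposite
# category about as much as its own (low discriminability). All values
# are hypothetical.

def categorization_rt(sim_within, sim_outside, base_rt=450.0, gain=300.0):
    """Predicted RT (ms) from mean within- vs. outside-category similarity."""
    discriminability = (sum(sim_within) / len(sim_within)
                        - sum(sim_outside) / len(sim_outside))
    return base_rt + gain * (1.0 - discriminability)

# A typical animal: similar to other animals, unlike vehicles -> fast.
typical = categorization_rt(sim_within=[0.9, 0.8, 0.85], sim_outside=[0.1, 0.2])
# An atypical animal resembling a vehicle -> slower, as classically observed.
atypical = categorization_rt(sim_within=[0.5, 0.4], sim_outside=[0.4, 0.5])
```

Such a read-out reproduces the typicality effect mentioned in the abstract: atypical items, whose within- and outside-category similarities are closer, yield longer predicted categorization times.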
Visual detail about the body modulates tactile localisation biases.
Margolis, Aaron N; Longo, Matthew R
2015-02-01
The localisation of tactile stimuli requires the integration of visual and somatosensory inputs within an internal representation of the body surface and is prone to consistent bias. Joints may play a role in segmenting such internal body representations, and may therefore influence tactile localisation biases, although the nature of this influence remains unclear. Here, we investigate the relationship between conceptual knowledge of joint locations and tactile localisation biases on the hand. In one task, participants localised tactile stimuli applied to the dorsum of their hand. A distal localisation bias was observed in all participants, consistent with previous results. We also manipulated the availability of visual information during this task, to determine whether the absence of this information could account for the distal bias observed here and by Mancini et al. (Neuropsychologia 49:1194-1201, 2011). The observed distal bias increased in magnitude when visual information was restricted, without a corresponding decrease in precision. In a separate task, the same participants indicated, from memory, knuckle locations on a silhouette image of their hand. Analogous distal biases were also seen in the knuckle localisation task. The accuracy of conceptual joint knowledge was not correlated with tactile localisation bias magnitude, although a similarity in observed bias direction suggests that both tasks may rely on a common, higher-order body representation. These results also suggest that distortions of conceptual body representation may be more common in healthy individuals than previously thought.
Naber, Marnix; Vedder, Anneke; Brown, Stephen B R E; Nieuwenhuis, Sander
2016-01-01
The Stroop task is a popular neuropsychological test that measures executive control. Strong Stroop interference is commonly interpreted in neuropsychology as a diagnostic marker of impairment in executive control, possibly reflecting executive dysfunction. However, popular models of the Stroop task indicate that several other aspects of color and word processing may also account for individual differences in the Stroop task, independent of executive control. Here we use new approaches to investigate the degree to which individual differences in Stroop interference correlate with the relative processing speed of word and color stimuli, and the lateral inhibition between visual stimuli. We conducted an electrophysiological and behavioral experiment to measure (1) how quickly an individual's brain processes words and colors presented in isolation (P3 latency), and (2) the strength of an individual's lateral inhibition between visual representations with a visual illusion. Both measures explained at least 40% of the variance in Stroop interference across individuals. As these measures were obtained in contexts not requiring any executive control, we conclude that the Stroop effect also measures an individual's pre-set way of processing visual features such as words and colors. This study highlights the important contributions of stimulus processing speed and lateral inhibition to individual differences in Stroop interference, and challenges the general view that the Stroop task primarily assesses executive control.
Srivastava, Nishant R; Troyk, Philip R; Dagnelie, Gislin
2014-01-01
In order to assess visual performance using a future cortical prosthesis device, the ability of normally sighted and low vision subjects to adapt to a dotted ‘phosphene’ image was studied. Similar studies have been conducted in the past and adaptation to phosphene maps has been shown, but the phosphene maps used have been square or hexagonal in pattern. The phosphene map implemented for this testing is what is expected from a cortical implantation of arrays of intracortical electrodes generating multiple phosphenes. The dotted image created depends upon the surgical location of the electrodes chosen for implantation and the expected cortical response. The subjects under test were required to perform tasks requiring visual inspection, eye–hand coordination and way finding. The subjects did not have any tactile feedback, and the visual information provided consisted of live dotted images captured by a camera on a head-mounted low vision enhancing system and processed through a filter generating images similar to those we expect blind persons to perceive. The images were locked to the subject’s gaze by means of video-based pupil tracking. In the detection and visual inspection task, the subject scanned a modified checkerboard and counted the number of square white fields; in the eye–hand coordination task, the subject placed black checkers on the white fields of the checkerboard; and in the way-finding task, the subjects maneuvered themselves through a virtual maze using a game controller. The accuracy and the time to complete each task were used as the measured outcomes. As per the surgical studies by this research group, it might be possible to implant up to 650 electrodes; hence, 650 dots were used to create images, and performance was studied under 0% dropout (650 dots), 25% dropout (488 dots) and 50% dropout (325 dots) conditions.
It was observed that all the subjects under test were able to learn the given tasks and showed improvement in performance with practice even with a dropout condition of 50% (325 dots). Hence, if a cortical prosthesis is implanted in human subjects, they might be able to perform similar tasks and with practice should be able to adapt to dotted images even with a low resolution of 325 dots of phosphene. PMID:19458397
Dye, Matthew W. G.; Hauser, Peter C.; Bavelier, Daphne
2009-01-01
Background Early deafness leads to enhanced attention in the visual periphery. Yet, whether this enhancement confers advantages in everyday life remains unknown, as deaf individuals have been shown to be more distracted by irrelevant information in the periphery than their hearing peers. Here, we show that, in a complex attentional task, a performance advantage results for deaf individuals. Methodology/Principal Findings We employed the Useful Field of View (UFOV) which requires central target identification concurrent with peripheral target localization in the presence of distractors – a divided, selective attention task. First, the comparison of deaf and hearing adults with or without sign language skills establishes that deafness and not sign language use drives UFOV enhancement. Second, UFOV performance was enhanced in deaf children, but only after 11 years of age. Conclusions/Significance This work demonstrates that, following early auditory deprivation, visual attention resources toward the periphery slowly get augmented to eventually result in a clear behavioral advantage by pre-adolescence on a selective visual attention task. PMID:19462009
Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka
2016-08-04
Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution.
Active sensing in the categorization of visual patterns
Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M
2016-01-01
Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546
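The "optimal Bayesian active sensor" idea above, choosing where to look next so as to maximize expected information about the category, can be illustrated with a toy sketch. The stimulus model, site likelihoods, and category names below are invented for illustration; the paper's actual algorithm operates over rich pattern statistics:

```python
import math

def entropy(p):
    # Shannon entropy (bits) of a discrete distribution.
    return -sum(q * math.log2(q) for q in p if q > 0)

# Toy world: two pattern categories, each giving the probability
# that a probed location is "white" (1) at each of 4 sites.
LIK = {
    "stripes": [0.9, 0.1, 0.9, 0.1],
    "patches": [0.9, 0.9, 0.1, 0.1],
}

def posterior(prior, observations):
    # Bayesian update over categories given {site: 0/1} observations.
    post = dict(prior)
    for site, obs in observations.items():
        for c in post:
            p1 = LIK[c][site]
            post[c] *= p1 if obs == 1 else (1 - p1)
    z = sum(post.values())
    return {c: v / z for c, v in post.items()}

def expected_posterior_entropy(prior, observations, site):
    # Average the entropy of the updated posterior over the two
    # possible outcomes at `site`, weighted by their predictive probability.
    post = posterior(prior, observations)
    total = 0.0
    for obs in (0, 1):
        p_obs = sum(post[c] * (LIK[c][site] if obs == 1 else 1 - LIK[c][site])
                    for c in post)
        if p_obs == 0:
            continue
        new_obs = dict(observations)
        new_obs[site] = obs
        total += p_obs * entropy(list(posterior(prior, new_obs).values()))
    return total

def best_site(prior, observations, sites):
    # The active sensor fixates the site that minimizes expected
    # posterior entropy, i.e. maximizes expected information gain.
    return min(sites, key=lambda s: expected_posterior_entropy(prior, observations, s))
```

With a uniform prior, sites 0 and 3 are uninformative (both categories predict the same outcome there), so the sensor picks one of the diagnostic sites 1 or 2.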
Visual-Attentional Span and Lexical Decision in Skilled Adult Readers
ERIC Educational Resources Information Center
Holmes, Virginia M.; Dawson, Georgia
2014-01-01
The goal of the study was to examine the association between visual-attentional span and lexical decision in skilled adult readers. In the span tasks, an array of letters was presented briefly and recognition or production of a single cued letter (partial span) or production of all letters (whole span) was required. Independently of letter…
USDA-ARS?s Scientific Manuscript database
The 5,000 arthropod genomes initiative (i5k) has tasked itself with coordinating the sequencing of 5,000 insect or related arthropod genomes. The resulting influx of data, mostly from small research groups or communities with little bioinformatics experience, will require visualization, disseminatio...
ERIC Educational Resources Information Center
Stenneken, Prisca; Egetemeir, Johanna; Schulte-Korne, Gerd; Muller, Hermann J.; Schneider, Werner X.; Finke, Kathrin
2011-01-01
The cognitive causes as well as the neurological and genetic basis of developmental dyslexia, a complex disorder of written language acquisition, are intensely discussed with regard to multiple-deficit models. Accumulating evidence has revealed dyslexics' impairments in a variety of tasks requiring visual attention. The heterogeneity of these…
Takahama, Sachiko; Saiki, Jun
2014-01-01
Information on an object's features bound to its location is very important for maintaining object representations in visual working memory. Interactions with dynamic multi-dimensional objects in an external environment require complex cognitive control, including the selective maintenance of feature-location binding. Here, we used event-related functional magnetic resonance imaging to investigate brain activity and functional connectivity related to the maintenance of complex feature-location binding. Participants were required to detect task-relevant changes in feature-location binding between objects defined by color, orientation, and location. We compared a complex binding task requiring complex feature-location binding (color-orientation-location) with a simple binding task in which simple feature-location binding, such as color-location, was task-relevant and the other feature was task-irrelevant. Univariate analyses showed that the dorsolateral prefrontal cortex (DLPFC), hippocampus, and frontoparietal network were activated during the maintenance of complex feature-location binding. Functional connectivity analyses indicated cooperation between the inferior precentral sulcus (infPreCS), DLPFC, and hippocampus during the maintenance of complex feature-location binding. In contrast, the connectivity for the spatial updating of simple feature-location binding determined by reanalyzing the data from Takahama et al. (2010) demonstrated that the superior parietal lobule (SPL) cooperated with the DLPFC and hippocampus. These results suggest that the connectivity for complex feature-location binding does not simply reflect general memory load and that the DLPFC and hippocampus flexibly modulate the dorsal frontoparietal network, depending on the task requirements, with the infPreCS involved in the maintenance of complex feature-location binding and the SPL involved in the spatial updating of simple feature-location binding. PMID:24917833
Decision Making in Concurrent Multitasking: Do People Adapt to Task Interference?
Nijboer, Menno; Taatgen, Niels A.; Brands, Annelies; Borst, Jelmer P.; van Rijn, Hedderik
2013-01-01
While multitasking has received a great deal of attention from researchers, we still know little about how well people adapt their behavior to multitasking demands. In three experiments, participants were presented with a multicolumn subtraction task, which required working memory in half of the trials. This primary task had to be combined with a secondary task requiring either working memory or visual attention, resulting in different types of interference. Before each trial, participants were asked to choose which secondary task they wanted to perform concurrently with the primary task. We predicted that if people seek to maximize performance or minimize effort required to perform the dual task, they choose task combinations that minimize interference. While performance data showed that the predicted optimal task combinations indeed resulted in minimal interference between tasks, the preferential choice data showed that a third of participants did not show any adaptation, and for the remainder it took a considerable number of trials before the optimal task combinations were chosen consistently. On the basis of these results we argue that, while in principle people are able to adapt their behavior according to multitasking demands, selection of the most efficient combination of strategies is not an automatic process. PMID:24244527
ERIC Educational Resources Information Center
Fiori, Marina; Antonakis, John
2012-01-01
We examined how general intelligence, personality, and emotional intelligence--measured as an ability using the MSCEIT--predicted performance on a selective-attention task requiring participants to ignore distracting emotion information. We used a visual prime in which participants saw a pair of faces depicting emotions; their task was to focus on…
ERIC Educational Resources Information Center
Meier, Matt E.; Smeekens, Bridget A.; Silvia, Paul J.; Kwapil, Thomas R.; Kane, Michael J.
2018-01-01
The association between working memory capacity (WMC) and the antisaccade task, which requires subjects to move their eyes and attention away from a strong visual cue, supports the claim that WMC is partially an attentional construct (Kane, Bleckley, Conway, & Engle, 2001; Unsworth, Schrock, & Engle, 2004). Specifically, the…
Fitts' Law in the Control of Isometric Grip Force With Naturalistic Targets.
Thumser, Zachary C; Slifkin, Andrew B; Beckler, Dylan T; Marasco, Paul D
2018-01-01
Fitts' law models the relationship between amplitude, precision, and speed of rapid movements. It is widely used to quantify performance in pointing tasks, study human-computer interaction, and generally to understand perceptual-motor information processes, including research to model performance in isometric force production tasks. Applying Fitts' law to an isometric grip force task would allow for quantifying grasp performance in rehabilitative medicine and may aid research on prosthetic control and design. We examined whether Fitts' law would hold when participants attempted to accurately produce their intended force output while grasping a manipulandum when presented with images of various everyday objects (we termed this the implicit task). Although our main interest was the implicit task, to benchmark it and establish validity, we examined performance against a more standard visual feedback condition via a digital force-feedback meter on a video monitor (explicit task). Next, we progressed from visual force feedback with force meter targets to the same targets without visual force feedback (operating largely on feedforward control with tactile feedback). This provided an opportunity to see if Fitts' law would hold without vision, and allowed us to progress toward the more naturalistic implicit task (which does not include visual feedback). Finally, we changed the nature of the targets from requiring explicit force values presented as arrows on a force-feedback meter (explicit targets) to the more naturalistic and intuitive target forces implied by images of objects (implicit targets). With visual force feedback, the relation between task difficulty and the time to produce the target grip force was predicted by Fitts' law (average r² = 0.82). Without vision, average grip force scaled accurately, although force variability was insensitive to the target presented.
In contrast, images of everyday objects generated more reliable grip forces without the visualized force meter. In sum, population means were well-described by Fitts' law for explicit targets with vision (r² = 0.96) and implicit targets (r² = 0.89), but not as well-described for explicit targets without vision (r² = 0.54). Implicit targets should provide a realistic see-object-squeeze-object test using Fitts' law to quantify the relative speed-accuracy relationship of any given grasper.
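The Fitts' law model discussed above can be sketched in a few lines, using the classic formulation MT = a + b·log2(2A/W), where A is movement (or force) amplitude and W is target width (tolerance). The constants and data below are hypothetical, not the study's:

```python
import math

def index_of_difficulty(amplitude, width):
    # Fitts' index of difficulty: ID = log2(2A / W), in bits.
    return math.log2(2.0 * amplitude / width)

def predicted_movement_time(a, b, amplitude, width):
    # Fitts' law: MT = a + b * ID, with empirically fitted a and b.
    return a + b * index_of_difficulty(amplitude, width)

def fit_fitts(ids, mts):
    # Ordinary least-squares fit of MT = a + b * ID from paired data.
    n = len(ids)
    mean_id = sum(ids) / n
    mean_mt = sum(mts) / n
    b = (sum((i - mean_id) * (m - mean_mt) for i, m in zip(ids, mts))
         / sum((i - mean_id) ** 2 for i in ids))
    a = mean_mt - b * mean_id
    return a, b
```

The r² of such a fit across difficulty levels is the statistic the abstract reports (e.g. 0.96 for explicit targets with vision).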
Eye movements and postural control in dyslexic children performing different visual tasks.
Razuk, Milena; Barela, José Angelo; Peyre, Hugo; Gerard, Christophe Loic; Bucci, Maria Pia
2018-01-01
The aim of this study was to examine eye movements and postural control performance among dyslexic children while reading a text and performing the Landolt reading task. Fifteen dyslexic and 15 non-dyslexic children were asked to stand upright while performing two experimental visual tasks: text reading and Landolt reading. In the text reading task, children were asked to silently read a text displayed on a monitor, while in the Landolt reading task, the letters in the text were replaced by closed circles and Landolt rings, and children were asked to scan each circle/ring in a reading-like fashion, from left to right, and to count the number of Landolt rings. Eye movements (Mobile T2®, SuriCog) and center of pressure excursions (Framiral®, Grasse, France) were recorded. The visual performance variables were total reading time, mean duration of fixation, number of pro- and retro-saccades, and amplitude of pro-saccades. The postural performance variable was the center of pressure area. The results showed that dyslexic children spent more time reading the text and had a longer duration of fixation than non-dyslexic children. However, no difference was observed between dyslexic and non-dyslexic children in the Landolt reading task. Dyslexic children performed a higher number of pro- and retro-saccades than non-dyslexic children in both text reading and Landolt reading tasks. Dyslexic children had smaller pro-saccade amplitude than non-dyslexic children in the text reading task. Finally, postural performance was poorer in dyslexic children than in non-dyslexic children. Reading difficulties in dyslexic children are related to the eye movement strategies required to scan and obtain lexical and semantic meaning. However, postural control performance, which was poor in dyslexic children, is not related to lexical and semantic reading requirements and might also not be related to differences in eye movement behavior.
Conflict resolved: On the role of spatial attention in reading and color naming tasks.
Robidoux, Serje; Besner, Derek
2015-12-01
The debate about whether or not visual word recognition requires spatial attention has been marked by a conflict: the results from different tasks yield different conclusions. Experiments in which the primary task is reading-based show no evidence that unattended words are processed, whereas when the primary task is color identification, supposedly unattended words do affect processing. However, the color stimuli used to date do not appear to demand as much spatial attention as explicit word reading tasks. We first identify a color stimulus that requires as much spatial attention to identify as does a word. We then demonstrate that when spatial attention is appropriately captured, distractor words in unattended locations do not affect color identification. We conclude that there is no word identification without spatial attention.
Pasqualotti, Léa; Baccino, Thierry
2014-01-01
Most studies of online advertisements have indicated that they have a negative impact on users' cognitive processes, especially when they include colorful or animated banners and when they are close to the text to be read. In the present study we assessed the effects of two advertisement features, distance from the text and animation, on visual strategies during a word-search task and a reading-for-comprehension task using Web-like pages. We hypothesized that the closer the advertisement was to the target text, the more cognitive processing difficulties it would cause. We also hypothesized that (1) animated banners would be more disruptive than static advertisements and (2) banners would have more effect on word-search performance than on reading-for-comprehension performance. We used an automatic classifier to assess variations in the use of Scanning and Reading visual strategies during task performance. The results showed that the effect of dynamic and static advertisements on visual strategies varies according to the task. Fixation duration indicated that the closest advertisements slowed down information processing, but there was no difference between the intermediate (40 pixel) and far (80 pixel) distance conditions. Our findings suggest that advertisements have a negative impact on users' performance mostly when a lot of cognitive resources are required, as in reading-for-comprehension.
Changes in otoacoustic emissions during selective auditory and visual attention
Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis
2015-01-01
Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing—the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2–3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater. PMID:25994703
Swallow, Khena M; Jiang, Yuhong V
2010-04-01
Recent work on event perception suggests that perceptual processing increases when events change. An important question is how such changes influence the way other information is processed, particularly during dual-task performance. In this study, participants monitored a long series of distractor items for an occasional target as they simultaneously encoded unrelated background scenes. The appearance of an occasional target could have two opposite effects on the secondary task: It could draw attention away from the second task, or, as a change in the ongoing event, it could improve secondary task performance. Results were consistent with the second possibility. Memory for scenes presented simultaneously with the targets was better than memory for scenes that preceded or followed the targets. This effect was observed when the primary detection task involved visual feature oddball detection, auditory oddball detection, and visual color-shape conjunction detection. It was eliminated when the detection task was omitted, and when it required an arbitrary response mapping. The appearance of occasional, task-relevant events appears to trigger a temporal orienting response that facilitates processing of concurrently attended information (Attentional Boost Effect). Copyright 2009 Elsevier B.V. All rights reserved.
Ocular dynamics and visual tracking performance after Q-switched laser exposure
NASA Astrophysics Data System (ADS)
Zwick, Harry; Stuck, Bruce E.; Lund, David J.; Nawim, Maqsood
2001-05-01
In previous investigations of Q-switched laser retinal exposure in awake, task-oriented non-human primates (NHPs), the threshold for retinal damage occurred well below the threshold for permanent visual function loss. The visual function measures used in these studies involved visual acuity and contrast sensitivity. In the present study, we examine the same relationship for Q-switched laser exposure using a visual performance task in which task dependency involves more parafoveal than foveal retina. NHPs were trained on a visual pursuit motor tracking performance task that required maintaining a small HeNe laser spot (0.3 degrees) centered in a slowly moving (0.5 deg/sec) annulus. When NHPs reliably produced visual target tracking efficiencies > 80%, single Q-switched laser exposures (7 nsec) were made coaxially with the line of sight of the moving target. An infrared camera imaged the pupil during exposure to obtain the pupillary response to the laser flash. Retinal images were obtained with a scanning laser ophthalmoscope 3 days post exposure under ketamine and Nembutal anesthesia. Q-switched visible laser exposures at twice the damage threshold produced small (about 50 μm) retinal lesions temporal to the fovea; deficits in NHP visual pursuit tracking were transient, demonstrating full recovery to baseline within a single tracking session. Post-exposure analysis of the pupillary response demonstrated that the exposure flash entered the pupil, followed by a 90 msec refractory period and then a 12% pupillary contraction within 1.5 sec of the onset of laser exposure. At 6 times the morphological damage threshold for 532 nm Q-switched exposure, longer-term losses in NHP pursuit tracking performance were observed. In summary, Q-switched laser exposure appears to have a higher threshold for permanent visual performance loss than the corresponding threshold for retinal injury.
Mechanisms of neural plasticity within the retina and at higher visual brain centers may mediate this recovery.
Individual differences in working memory capacity and workload capacity.
Yu, Ju-Chi; Chang, Ting-Yun; Yang, Cheng-Ta
2014-01-01
We investigated the relationship between working memory capacity (WMC) and workload capacity (WLC). Each participant performed an operation span (OSPAN) task to measure his/her WMC and three redundant-target detection tasks to measure his/her WLC. WLC was computed non-parametrically (Experiments 1 and 2) and parametrically (Experiment 2). Both levels of analyses showed that participants high in WMC had larger WLC than those low in WMC only when redundant information came from visual and auditory modalities, suggesting that high-WMC participants had superior processing capacity in dealing with redundant visual and auditory information. This difference was eliminated when multiple processes required processing for only a single working memory subsystem in a color-shape detection task and a double-dot detection task. These results highlighted the role of executive control in integrating and binding information from the two working memory subsystems for perceptual decision making.
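Workload capacity in redundant-target paradigms like the one above is commonly quantified with Townsend and Nozawa's capacity coefficient; the abstract does not spell out its formula, but a minimal sketch of the standard non-parametric version, evaluated at a single time point, is:

```python
import math

def capacity_coefficient(s_ab, s_a, s_b):
    # Townsend & Nozawa's capacity coefficient at time t:
    #   C(t) = log S_AB(t) / (log S_A(t) + log S_B(t))
    # where S(t) = P(RT > t) is the survivor function of the
    # redundant-target (AB) and single-target (A, B) conditions.
    # C(t) = 1: unlimited capacity (independent parallel race);
    # C(t) > 1: supercapacity; C(t) < 1: limited capacity.
    return math.log(s_ab) / (math.log(s_a) + math.log(s_b))
```

Under an independent parallel race, S_AB(t) = S_A(t) * S_B(t), so the coefficient equals exactly 1; values above 1 for the visual-auditory condition would correspond to the superior capacity the abstract attributes to high-WMC participants.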
Sarri, Margarita; Greenwood, Richard; Kalra, Lalit; Driver, Jon
2011-01-01
Previous research has shown that prism adaptation (PA) can ameliorate several symptoms of spatial neglect after right-hemisphere damage, but the mechanisms behind this remain unclear. Recently we reported that prisms may increase leftward awareness for neglect in a task using chimeric visual objects, despite apparently not affecting awareness in a task using chimeric emotional faces (Sarri et al., 2006). Here we explored potential reasons for this apparent discrepancy in outcome, by testing whether the lack of a prism effect on the chimeric face task could be explained by: i) the specific category of stimuli used (faces as opposed to objects); ii) the affective nature of the stimuli; and/or iii) the particular task implemented, with the chimeric face task requiring forced-choice judgements of lateral ‘preference’ between pairs of identical, but left/right mirror-reversed, chimeric faces (as opposed to identification for the chimeric object task). We replicated our previous pattern of no impact of prisms on the emotional chimeric face task in a new series of patients, while also finding no beneficial impact on another lateral ‘preference’ measure that used non-face, non-emotional stimuli, namely greyscale gradients. By contrast, we found the usual beneficial impact of PA on some conventional measures of neglect, and improvements for at least some patients in a different face task, requiring explicit discrimination of the chimeric or non-chimeric nature of face stimuli. The new findings indicate that prism therapy does not alter spatial biases in neglect as revealed by ‘lateral preference’ tasks that have no right or wrong answer (requiring forced-choice judgements on left/right mirror-reversed stimuli), regardless of whether these employ face or non-face stimuli.
But our data also show that prism therapy can beneficially modulate some aspects of visual awareness in spatial neglect not only for objects, but also for face stimuli, in some cases. PMID:20171612
NASA Technical Reports Server (NTRS)
Remington, Roger; Williams, Douglas
1986-01-01
Three single-target visual search tasks were used to evaluate a set of cathode-ray tube (CRT) symbols for a helicopter situation display. The search tasks were representative of the information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. Familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols, such as a nearby flashing dot or surrounding square, had a greater disruptive effect on the graphic symbols than did the numeric characters. The results suggest that a symbol set is, in some respects, like a list that must be learned. Factors that affect the time to identify items in a memory task, such as familiarity and visual discriminability, also affect the time to identify symbols. This analogy has broad implications for the design of symbol sets. An attempt was made to model information access with this class of display.
A top-down manner-based DCNN architecture for semantic image segmentation.
Qiao, Kai; Chen, Jian; Wang, Linyuan; Zeng, Lei; Yan, Bin
2017-01-01
Given their powerful feature representations for recognition, deep convolutional neural networks (DCNNs) have been driving rapid advances in high-level computer vision tasks. However, their performance in semantic image segmentation is still not satisfactory. Based on an analysis of visual mechanisms, we conclude that DCNNs operating in a purely bottom-up manner are not enough, because the semantic image segmentation task requires not only recognition but also visual attention capability. In this study, superpixels containing visual attention information are introduced in a top-down manner, and an extensible architecture is proposed to improve the segmentation results of current DCNN-based methods. We employ the current state-of-the-art fully convolutional network (FCN) and FCN with conditional random field (DeepLab-CRF) as baselines to validate our architecture. Experimental results on the PASCAL VOC segmentation task qualitatively show that coarse edges and erroneous segmentation results are clearly improved. We also quantitatively obtain about 2%-3% intersection over union (IOU) accuracy improvement on the PASCAL VOC 2011 and 2012 test sets.
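The intersection-over-union (IOU, or Jaccard index) metric used above for segmentation accuracy can be sketched as follows; representing masks as sets of foreground pixel coordinates is a simplification for illustration (real evaluations operate on label arrays, per class):

```python
def intersection_over_union(pred, target):
    # pred, target: sets of (row, col) coordinates labelled foreground.
    # IOU = |pred ∩ target| / |pred ∪ target|; 1.0 is a perfect match.
    inter = len(pred & target)
    union = len(pred | target)
    return inter / union if union else 1.0
```

A "2%-3% IOU improvement" as reported above means this ratio, averaged over classes and test images, rises by 0.02-0.03.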
Deployment of spatial attention towards locations in memory representations. An EEG study.
Leszczyński, Marcin; Wykowska, Agnieszka; Perez-Osorio, Jairo; Müller, Hermann J
2013-01-01
Recalling information from visual short-term memory (VSTM) involves the same neural mechanisms as attending to an actually perceived scene. In particular, retrieval from VSTM has been associated with orienting of visual attention towards a location within a spatially organized memory representation. However, an open question is whether spatial attention is also recruited during VSTM retrieval even when performing the task does not require access to the spatial coordinates of items in the memorized scene. The present study combined a visual search task with a modified, delayed central probe protocol, together with EEG analysis, to answer this question. We found a temporal contralateral negativity (TCN) elicited by a centrally presented go-signal that was spatially uninformative and featurally unrelated to the search target, informing participants only about which response key to press to indicate a prepared target-present vs. -absent decision. This lateralization during VSTM retrieval (TCN) provides strong evidence of a shift of attention towards the target location in the memory representation. The shift occurred even though the task required no spatial (or featural) information from the search display to be encoded, maintained, or retrieved to produce the correct response, and even though the go-signal itself specified no information about the location or defining feature of the target.
Latency in Visionic Systems: Test Methods and Requirements
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.
2005-01-01
A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and to provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays, or latencies, including those of the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated depending upon the piloting task, the role the visionics device plays in this task, and the characteristics of the visionics cockpit display device, including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.
Louridas, Marisa; Quinn, Lauren E; Grantcharov, Teodor P
2016-03-01
Emerging evidence suggests that despite dedicated practice, not all surgical trainees have the ability to reach technical competency in minimally invasive techniques. While selecting residents that have the ability to reach technical competence is important, evidence to guide the incorporation of technical ability into selection processes is limited. Therefore, the purpose of the present study was to evaluate whether background experiences and 2D-3D visual spatial test results are predictive of baseline laparoscopic skill for the novice surgical trainee. First-year residents were studied. Demographic data and background surgical and non-surgical experiences were obtained using a questionnaire. Visual spatial ability was evaluated using the PicSOr, cube comparison (CC) and card rotation (CR) tests. Technical skill was assessed using the camera navigation (LCN) task and laparoscopic circle cut (LCC) task. Resident performance on these technical tasks was compared and correlated with the questionnaire and visual spatial findings. Previous experience in observing laparoscopic procedures was associated with significantly better LCN performance, and experience in navigating the laparoscopic camera was associated with significantly better LCC task results. Residents who scored higher on the CC test demonstrated a more accurate LCN path length score (r_s(PL) = -0.36, p = 0.03) and angle path (r_s(AP) = -0.426, p = 0.01) score when completing the LCN task. No other significant correlations were found between the visual spatial tests (PicSOr, CC or CR) and LCC performance. While identifying selection tests for incoming surgical trainees that predict technical skill performance is appealing, the surrogate markers evaluated correlate with specific metrics of surgical performance related to a single task but do not appear to reliably predict technical performance of different laparoscopic tasks.
Predicting the acquisition of technical skills will require the development of a series of evidence-based tests that measure a number of innate abilities as well as their inherent interactions.
What Do Eye Gaze Metrics Tell Us about Motor Imagery?
Poiroux, Elodie; Cavaro-Ménard, Christine; Leruez, Stéphanie; Lemée, Jean Michel; Richard, Isabelle; Dinomais, Mickael
2015-01-01
Many of the brain structures involved in performing real movements also have increased activity during imagined movements or during motor observation, and this could be the neural substrate underlying the effects of motor imagery in motor learning or motor rehabilitation. In the absence of any objective physiological method of measurement, it is currently impossible to be sure that the patient is indeed performing the task as instructed. Eye gaze recording during a motor imagery task could be a possible way to "spy" on the activity an individual is really engaged in. The aim of the present study was to compare the pattern of eye movement metrics during motor observation, visual and kinesthetic motor imagery (VI, KI), target fixation, and mental calculation. Twenty-two healthy subjects (16 females and 6 males) were required to perform tests in five conditions using imagery in the Box and Block Test tasks following the procedure described by Liepert et al. Eye movements were analysed by a non-invasive oculometric measure (SMI RED250 system). Two parameters describing gaze pattern were calculated: the index of ocular mobility (saccade duration divided by the sum of saccade and fixation durations) and the number of midline crossings (i.e. the number of times the subject's gaze crossed the midline of the screen when performing the different tasks). Both parameters were significantly different between visual imagery and kinesthetic imagery, visual imagery and mental calculation, and visual imagery and target fixation. For the first time we were able to show that eye movement patterns are different during VI and KI tasks. Our results suggest gaze metric parameters could be used as an objective unobtrusive approach to assess engagement in a motor imagery task. Further studies should define how oculomotor parameters could be used as an indicator of the rehabilitation task a patient is engaged in.
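Both gaze metrics are simple to compute from an event-segmented gaze trace. A sketch under assumed data (a list of hypothetical `(kind, duration_ms, mean_x_px)` records, not the SMI RED250 output format):

```python
def index_of_ocular_mobility(events):
    # saccade duration / (saccade duration + fixation duration)
    saccade = sum(d for kind, d, _ in events if kind == "saccade")
    fixation = sum(d for kind, d, _ in events if kind == "fixation")
    return saccade / (saccade + fixation)

def midline_crossings(events, midline_x):
    # count sign changes of (gaze x - midline) across successive events
    xs = [x for _, _, x in events]
    return sum(1 for a, b in zip(xs, xs[1:])
               if (a - midline_x) * (b - midline_x) < 0)

# hypothetical event-segmented trace: (kind, duration_ms, mean_x_px)
events = [("fixation", 300, 200), ("saccade", 40, 500),
          ("fixation", 250, 700), ("saccade", 35, 300),
          ("fixation", 280, 100)]
iom = index_of_ocular_mobility(events)          # 75 / 905
crossings = midline_crossings(events, midline_x=400)
```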
Effects of complete monocular deprivation in visuo-spatial memory.
Cattaneo, Zaira; Merabet, Lotfi B; Bhatt, Ela; Vecchi, Tomaso
2008-09-30
Monocular deprivation has been associated with both specific deficits and enhancements in visual perception and processing. In this study, performance on a visuo-spatial memory task was compared in congenitally monocular individuals and sighted control individuals viewing monocularly (i.e., patched) and binocularly. The task required the individuals to view and memorize a series of target locations on two-dimensional matrices. Overall, congenitally monocular individuals performed worse than sighted individuals (with a specific deficit in simultaneously maintaining distinct spatial representations in memory), indicating that the lack of binocular visual experience affects the way visual information is represented in visuo-spatial memory. No difference was observed between the monocular and binocular viewing control groups, suggesting that early monocular deprivation affects the development of cortical mechanisms mediating visuo-spatial cognition.
Age Changes in Attention Control: Assessing the Role of Stimulus Contingencies
ERIC Educational Resources Information Center
Brodeur, Darlene A.
2004-01-01
Children (ages 5, 7, and 9 years) and young adults completed two visual attention tasks that required them to make a forced choice identification response to a target shape presented in the center of a computer screen. In the first task (high correlation condition) each target was flanked with the same distracters on 80% of the trials (valid…
Opposite brain laterality in analogous auditory and visual tests.
Oltedal, Leif; Hugdahl, Kenneth
2017-11-01
Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed the findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.
The role of extra-foveal processing in 3D imaging
NASA Astrophysics Data System (ADS)
Eckstein, Miguel P.; Lago, Miguel A.; Abbey, Craig K.
2017-03-01
The field of medical image quality has relied on the assumption that metrics of image quality for simple visual detection tasks are a reliable proxy for the more clinically realistic visual search tasks. Rank order of signal detectability across conditions often generalizes from detection to search tasks. Here, we argue that search in 3D images represents a paradigm shift in medical imaging: radiologists typically cannot exhaustively scrutinize all regions of interest with the high-acuity fovea, requiring detection of signals with extra-foveal areas (visual periphery) of the human retina. We hypothesize that extra-foveal processing can alter the detectability of certain types of signals in medical images, with important implications for search in 3D medical images. We compare visual search of two different types of signals in 2D vs. 3D images. We show that a small microcalcification-like signal is more highly detectable than a larger mass-like signal in 2D search, but its detectability largely decreases (relative to the larger signal) in the 3D search task. Utilizing measurements of observer detectability as a function of retinal eccentricity and observer eye fixations, we can predict the pattern of results in the 2D and 3D search studies. Our findings: 1) suggest that observer performance findings with 2D search might not always generalize to 3D search; 2) motivate the development of a new family of model observers that take into account the inhomogeneous visual processing across the retina (foveated model observers).
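A toy version of the foveated idea, with illustrative constants `d0`, `k`, and `deg_per_px` that are not taken from the study: detectability decays with retinal eccentricity, so a signal's effective detectability over a scanpath can be scored from its nearest fixation:

```python
import math

def dprime_at(d0, ecc_deg, k):
    # toy model: detectability decays exponentially with eccentricity
    return d0 * math.exp(-k * ecc_deg)

def scanpath_dprime(fixations, signal_xy, d0=3.0, k=0.3, deg_per_px=0.02):
    # score a signal by its detectability from the nearest fixation
    def ecc(f):
        return math.hypot(f[0] - signal_xy[0], f[1] - signal_xy[1]) * deg_per_px
    return max(dprime_at(d0, ecc(f), k) for f in fixations)

# a directly fixated signal keeps full detectability d0; a signal seen
# only in the periphery (500 px away, i.e. 10 deg here) is much weaker
d_foveal = scanpath_dprime([(0, 0), (400, 300)], signal_xy=(0, 0))
d_periph = scanpath_dprime([(400, 300)], signal_xy=(0, 0))
```

This captures why signals that survive peripheral viewing (large masses) fare relatively better in 3D search, where most locations are never fixated; the authors' actual foveated model observers are considerably richer.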
Loughman, James; Davison, Peter; Flitcroft, Ian
2007-11-01
Preattentive visual search (PAVS) describes rapid and efficient retinal and neural processing capable of immediate target detection in the visual field. Damage to the nerve fibre layer or visual pathway might reduce the efficiency with which the visual system performs such analysis. The purpose of this study was to test the hypothesis that patients with glaucoma are impaired on parallel search tasks, and that this would serve to distinguish glaucoma in early cases. Three groups of observers (glaucoma patients, suspect and normal individuals) were examined, using computer-generated flicker, orientation, and vertical motion displacement targets to assess PAVS efficiency. The task required rapid and accurate localisation of a singularity embedded in a field of 119 homogeneous distractors on either the left- or right-hand side of a computer monitor. All subjects also completed a choice reaction time (CRT) task. Independent-sample t tests revealed PAVS efficiency to be significantly impaired in the glaucoma group compared with both normal and suspect individuals. Performance was impaired in all types of glaucoma tested. Analysis between normal and suspect individuals revealed a significant difference only for motion displacement response times. Similar analysis using a PAVS/CRT index confirmed the glaucoma findings but also showed statistically significant differences between suspect and normal individuals across all target types. A test of PAVS efficiency appears capable of differentiating early glaucoma from both normal and suspect cases. Analysis incorporating a PAVS/CRT index enhances the diagnostic capacity to differentiate normal from suspect cases.
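The abstract does not define the PAVS/CRT index; one plausible reading, sketched here with hypothetical reaction times, is a simple ratio that normalizes search speed by baseline choice-reaction speed:

```python
def pavs_crt_index(pavs_rt_ms, crt_rt_ms):
    # normalize search response time by simple choice reaction time so that
    # generalized slowing does not masquerade as a search-specific deficit
    return pavs_rt_ms / crt_rt_ms

# hypothetical mean RTs: equal raw search RTs, different baseline speed
slow_baseline = pavs_crt_index(900, 450)   # 2.0
fast_baseline = pavs_crt_index(900, 300)   # 3.0
```

Under this reading, two observers with identical raw search times are separated by the index when their baseline speeds differ, which is consistent with the index detecting suspect cases that raw PAVS times miss.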
Pasqualotto, Achille; Esenkaya, Tayfun
2016-01-01
Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD).
A Cortical Network for the Encoding of Object Change
Hindy, Nicholas C.; Solomon, Sarah H.; Altmann, Gerry T.M.; Thompson-Schill, Sharon L.
2015-01-01
Understanding events often requires recognizing unique stimuli as alternative, mutually exclusive states of the same persisting object. Using fMRI, we examined the neural mechanisms underlying the representation of object states and object-state changes. We found that subjective ratings of visual dissimilarity between a depicted object and an unseen alternative state of that object predicted the corresponding multivoxel pattern dissimilarity in early visual cortex during an imagery task, while late visual cortex patterns tracked dissimilarity among distinct objects. Early visual cortex pattern dissimilarity for object states in turn predicted the level of activation in an area of left posterior ventrolateral prefrontal cortex (pVLPFC) most responsive to conflict in a separate Stroop color-word interference task, and an area of left ventral posterior parietal cortex (vPPC) implicated in the relational binding of semantic features. We suggest that when visualizing object states, representational content instantiated across early and late visual cortex is modulated by processes in left pVLPFC and left vPPC that support selection and binding, and ultimately event comprehension. PMID:24127425
Dynamic reorganization of human resting-state networks during visuospatial attention.
Spadone, Sara; Della Penna, Stefania; Sestieri, Carlo; Betti, Viviana; Tosoni, Annalisa; Perrucci, Mauro Gianni; Romani, Gian Luca; Corbetta, Maurizio
2015-06-30
Fundamental problems in neuroscience today are understanding how patterns of ongoing spontaneous activity are modified by task performance and whether/how these intrinsic patterns influence task-evoked activation and behavior. We examined these questions by comparing instantaneous functional connectivity (IFC) and directed functional connectivity (DFC) changes in two networks that are strongly correlated and segregated at rest: the visual (VIS) network and the dorsal attention network (DAN). We measured how IFC and DFC during a visuospatial attention task, which requires dynamic selective rerouting of visual information across hemispheres, changed with respect to rest. During the attention task, the two networks remained relatively segregated, and their general pattern of within-network correlation was maintained. However, attention induced a decrease of correlation in the VIS network and an increase of the DAN→VIS IFC and DFC, especially in a top-down direction. In contrast, within the DAN, IFC was not modified by attention, whereas DFC was enhanced. Importantly, IFC modulations were behaviorally relevant. We conclude that a stable backbone of within-network functional connectivity topography remains in place when transitioning between resting wakefulness and attention selection. However, relative decrease of correlation of ongoing "idling" activity in visual cortex and synchronization between frontoparietal and visual cortex were behaviorally relevant, indicating that modulations of resting activity patterns are important for task performance. Higher order resting connectivity in the DAN was relatively unaffected during attention, potentially indicating a role for simultaneous ongoing activity as a "prior" for attention selection.
The effect of encoding conditions on learning in the prototype distortion task.
Lee, Jessica C; Livesey, Evan J
2017-06-01
The prototype distortion task demonstrates that it is possible to learn about a category of physically similar stimuli through mere observation. However, there have been few attempts to test whether different encoding conditions affect learning in this task. This study compared prototypicality gradients produced under incidental learning conditions in which participants performed a visual search task, with those produced under intentional learning conditions in which participants were required to memorize the stimuli. Experiment 1 showed that similar prototypicality gradients could be obtained for category endorsement and familiarity ratings, but also found (weaker) prototypicality gradients in the absence of exposure. In Experiments 2 and 3, memorization was found to strengthen prototypicality gradients in familiarity ratings in comparison to visual search, but there were no group differences in participants' ability to discriminate between novel and presented exemplars. Although the Search groups in Experiments 2 and 3 produced prototypicality gradients, they were no different in magnitude to those produced in the absence of stimulus exposure in Experiment 1, suggesting that incidental learning during visual search was not conducive to producing prototypicality gradients. This study suggests that learning in the prototype distortion task is not implicit in the sense of resulting automatically from exposure, is affected by the nature of encoding, and should be considered in light of potential learning-at-test effects.
Anderson, Afrouz A; Parsa, Kian; Geiger, Sydney; Zaragoza, Rachel; Kermanian, Riley; Miguel, Helga; Dashtestani, Hadis; Chowdhry, Fatima A; Smith, Elizabeth; Aram, Siamak; Gandjbakhche, Amir H
2018-01-01
Existing literature outlines the quality and location of activation in the prefrontal cortex (PFC) during working memory (WM) tasks. However, the effects of individual differences on the underlying neural processes of WM tasks are still unclear. In this functional near-infrared spectroscopy study, we administered a visual and an auditory n-back task to examine activation in the PFC while considering the influences of task performance and preferred learning strategy (VARK score). While controlling for age, results indicated that high-performance (HP) subjects (accuracy > 90%) showed task-dependent lower activation compared to normal-performance (NP) subjects in the PFC region. Specifically, HP subjects showed lower activation in the left dorsolateral PFC (DLPFC) during performance of the auditory task, whereas during the visual task they showed lower activation in the right DLPFC. After accounting for learning style, we found a correlation between visual and aural VARK scores and the level of activation in the PFC. Subjects with higher visual VARK scores displayed lower activation during the auditory task in the left DLPFC, while those with higher visual scores exhibited higher activation during the visual task in bilateral DLPFC. During performance of the auditory task, HP subjects had higher visual VARK scores than NP subjects, indicating an effect of learning style on task performance and activation. The results of this study show that learning style and task performance can influence PFC activation, with applications toward the neurological implications of learning style and populations with deficits in auditory or visual processing.
Qi, Geqi; Li, Xiujun; Yan, Tianyi; Wang, Bin; Yang, Jiajia; Wu, Jinglong; Guo, Qiyong
2014-04-30
Visual word expertise is typically associated with enhanced ventral occipito-temporal (vOT) cortex activation in response to written words. A previous study utilized a passive viewing task and found that the vOT response to written words was significantly stronger in literate than in illiterate subjects. However, recent neuroimaging findings have suggested that vOT response properties are highly dependent upon the task demand. Thus, it is unknown whether literate adults would show stronger vOT responses to written words than illiterate adults during other cognitive tasks, such as perceptual matching. We addressed this issue by comparing vOT activations between literate and illiterate adults during a Chinese character and simple figure matching task. Unlike passive viewing, a perceptual matching task requires active shape comparison, therefore minimizing automatic word processing bias. We found that although the literate group performed better at the Chinese character matching task, the two subject groups showed similarly strong vOT responses during this task. Overall, the findings indicate that the vOT response to written words is not affected by expertise during a perceptual matching task, suggesting that the association between visual word expertise and vOT response may depend on the task demand. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
A risk-based coverage model for video surveillance camera control optimization
NASA Astrophysics Data System (ADS)
Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua
2015-12-01
Visual surveillance systems for law enforcement or police case investigation differ from traditional applications, as they are designed to monitor pedestrians, vehicles, or potential accidents. In the present work, visual surveillance risk is defined as the uncertainty of the visual information about the targets and events being monitored, and risk entropy is introduced to model the requirements of a police surveillance task on the quality and quantity of video information. The proposed coverage model is applied to calculate the preset field-of-view (FoV) positions of a PTZ camera.
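The abstract leaves "risk entropy" undefined; one illustrative reading treats each target's coverage probability as a Bernoulli uncertainty and picks the PTZ preset minimizing total entropy (all names and numbers below are hypothetical, not from the paper):

```python
import math

def risk_entropy(coverage_probs):
    # Shannon (binary) entropy summed over targets: a target observed with
    # probability p contributes H(p); certainty (p = 0 or 1) contributes 0
    def h(p):
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return sum(h(p) for p in coverage_probs)

def best_preset(presets):
    # choose the preset FoV whose per-target coverage minimizes risk entropy
    return min(presets, key=lambda name: risk_entropy(presets[name]))

presets = {"wide": [0.5, 0.5, 0.5],   # everything half-covered
           "tele": [0.9, 0.9, 0.1]}   # two targets well covered, one poorly
choice = best_preset(presets)
```

The uniform half-coverage preset maximizes uncertainty about every target, so the sketch prefers the preset that resolves most targets decisively.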
Factors modulating the effect of divided attention during retrieval of words.
Fernandes, Myra A; Moscovitch, Morris
2002-07-01
In this study, we examined variables modulating interference effects on episodic memory under divided attention conditions during retrieval for a list of unrelated words. In Experiment 1, we found that distracting tasks that required animacy or syllable decisions to visually presented words, without a memory load, produced large interference on free recall performance. In Experiment 2, a distracting task requiring phonemic decisions about nonsense words produced a far larger interference effect than one that required semantic decisions about pictures. In Experiment 3, we replicated the effect of the nonsense-word distracting task on memory and showed that an equally resource-demanding picture-based task produced significant interference with memory retrieval, although the effect was smaller in magnitude. Taken together, the results suggest that free recall is disrupted by competition for phonological or word-form representations during retrieval and, to a lesser extent, by competition for semantic representations.
Brain activations during bimodal dual tasks depend on the nature and combination of component tasks
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
2015-01-01
We used functional magnetic resonance imaging to investigate brain activations during nine different dual tasks in which the participants were required to simultaneously attend to concurrent streams of spoken syllables and written letters. They performed a phonological, spatial or “simple” (speaker-gender or font-shade) discrimination task within each modality. We expected to find activations associated specifically with dual tasking especially in the frontal and parietal cortices. However, no brain areas showed systematic dual task enhancements common for all dual tasks. Further analysis revealed that dual tasks including component tasks that were according to Baddeley's model “modality atypical,” that is, the auditory spatial task or the visual phonological task, were not associated with enhanced frontal activity. In contrast, for other dual tasks, activity specifically associated with dual tasking was found in the left or bilateral frontal cortices. Enhanced activation in parietal areas, however, appeared not to be specifically associated with dual tasking per se, but rather with intermodal attention switching. We also expected effects of dual tasking in left frontal supramodal phonological processing areas when both component tasks required phonological processing and in right parietal supramodal spatial processing areas when both tasks required spatial processing. However, no such effects were found during these dual tasks compared with their component tasks performed separately. Taken together, the current results indicate that activations during dual tasks depend in a complex manner on specific demands of component tasks. PMID:25767443
Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.
Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min
2013-12-01
Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be sufficiently semantically annotated, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the amount of video that must be watched, and visualizing aggregated information about the search results. We demonstrate the system for searching spatiotemporal attributes in sports video to identify key instances of team and player performance.
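The paper's active-learning component is not detailed in the abstract; a common sketch of the idea is uncertainty sampling, where the analyst is asked to label the candidate segment the current model is least sure about, so that watching time is spent where it is most informative:

```python
def query_most_uncertain(scores, labeled):
    # uncertainty sampling: query the unlabeled candidate whose match
    # probability lies closest to the 0.5 decision boundary
    return min((c for c in scores if c not in labeled),
               key=lambda c: abs(scores[c] - 0.5))

# hypothetical model scores for three candidate video segments vs. a sketch
scores = {"clip_a": 0.95, "clip_b": 0.52, "clip_c": 0.10}
labeled = {}
query = query_most_uncertain(scores, labeled)
labeled[query] = True   # the analyst confirms the match; retrain and repeat
```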
77 FR 11199 - Visual-Manual NHTSA Driver Distraction Guidelines for In-Vehicle Electronic Devices
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-24
...The National Highway Traffic Safety Administration (NHTSA) is concerned about the effects of distraction due to drivers' use of electronic devices on motor vehicle safety. Consequently, NHTSA is issuing nonbinding, voluntary NHTSA Driver Distraction Guidelines (NHTSA Guidelines) to promote safety by discouraging the introduction of excessively distracting devices in vehicles. This notice details the contents of the first phase of the NHTSA Driver Distraction Guidelines. These NHTSA Guidelines cover original equipment in-vehicle device secondary tasks (communications, entertainment, information gathering, and navigation tasks not required to drive are considered secondary tasks) performed by the driver through visual-manual means (meaning the driver looking at a device, manipulating a device-related control with the driver's hand, and watching for visual feedback). The proposed NHTSA Guidelines list certain secondary, non-driving related tasks that, based on NHTSA's research, are believed by the agency to interfere inherently with a driver's ability to safely control the vehicle. The Guidelines recommend that those in-vehicle devices be designed so that they cannot be used by the driver to perform such tasks while the driver is driving. For all other secondary, non-driving-related visual-manual tasks, the NHTSA Guidelines specify a test method for measuring the impact of task performance on driving safety while driving and time-based acceptance criteria for assessing whether a task interferes too much with driver attention to be suitable to perform while driving. If a task does not meet the acceptance criteria, the NHTSA Guidelines recommend that in- vehicle devices be designed so that the task cannot be performed by the driver while driving. 
In addition to identifying inherently distracting tasks and providing a means for measuring and evaluating the level of distraction associated with other non-driving-related tasks, the NHTSA Guidelines contain several design recommendations for in-vehicle devices in order to minimize their potential for distraction. NHTSA seeks comments on these NHTSA Guidelines and any suggestions for how to improve them so as to better enhance motor vehicle safety.
1987-09-15
memory task. Subjects in the experiment were required to monitor a visual display and update the status of four categories of information that changed ...Kahneman, D., 1966, Pupillary changes in two memory tasks, Psychonomic Science, 55:371-372. Casali, J. G. and Wierwille, W. W., 1982, A sensitivity...operator to deal with the demands. 3. The level of operator performance that results from the interaction of task demands and capacity/effort
Playing checkers: detection and eye hand coordination in simulated prosthetic vision
NASA Astrophysics Data System (ADS)
Dagnelie, Gislin; Walter, Matthias; Yang, Liancheng
2006-09-01
In order to assess the potential for visual inspection and eye hand coordination without tactile feedback under conditions that may be available to future retinal prosthesis wearers, we studied the ability of sighted individuals to act upon pixelized visual information at very low resolution, equivalent to 20/2400 visual acuity. Live images from a head-mounted camera were low-pass filtered and presented in a raster of 6 × 10 circular Gaussian dots. Subjects could either freely move their gaze across the raster (free-viewing condition) or the raster position was locked to the subject's gaze by means of video-based pupil tracking (gaze-locked condition). Four normally sighted and one severely visually impaired subject with moderate nystagmus participated in a series of four experiments. Subjects' task was to count 1 to 16 white fields randomly distributed across an otherwise black checkerboard (counting task) or to place a black checker on each of the white fields (placing task). We found that all subjects were capable of learning both tasks after varying amounts of practice, both in the free-viewing and in the gaze-locked conditions. Normally sighted subjects all reached very similar performance levels independent of the condition. The practiced performance level of the visually impaired subject in the free-viewing condition was indistinguishable from that of the normally sighted subjects, but required approximately twice the amount of time to place checkers in the gaze-locked condition; this difference is most likely attributable to this subject's nystagmus. Thus, if early retinal prosthesis wearers can achieve crude form vision, then on the basis of these results they too should be able to perform simple eye hand coordination tasks without tactile feedback.
Demehri, S; Muhit, A; Zbijewski, W; Stayman, J W; Yorkston, J; Packard, N; Senn, R; Yang, D; Foos, D; Thawait, G K; Fayad, L M; Chhabra, A; Carrino, J A; Siewerdsen, J H
2015-06-01
To assess visualization tasks using cone-beam CT (CBCT) compared to multi-detector CT (MDCT) for musculoskeletal extremity imaging. Ten cadaveric hands and ten knees were examined using a dedicated CBCT prototype and a clinical multi-detector CT using nominal protocols (80 kVp / 108 mAs for CBCT; 120 kVp / 300 mAs for MDCT). Soft tissue and bone visualization tasks were assessed by four radiologists using five-point satisfaction (for CBCT and MDCT individually) and five-point preference (side-by-side CBCT versus MDCT image quality comparison) rating tests. Ratings were analyzed using Kruskal-Wallis and Wilcoxon signed-rank tests, and observer agreement was assessed using the kappa statistic. Knee CBCT images were rated "excellent" or "good" (median scores 5 and 4) for "bone" and "soft tissue" visualization tasks. Hand CBCT images were rated "excellent" or "adequate" (median scores 5 and 3) for "bone" and "soft tissue" visualization tasks. Preference tests rated CBCT equivalent or superior to MDCT for bone visualization and favoured MDCT for soft tissue visualization tasks. Intraobserver agreement for CBCT satisfaction tests was fair to almost perfect (κ ~ 0.26-0.92), and interobserver agreement was fair to moderate (κ ~ 0.27-0.54). CBCT provided excellent image quality for bone visualization and adequate image quality for soft tissue visualization tasks. • CBCT provided adequate image quality for diagnostic tasks in extremity imaging. • CBCT images were "excellent" for "bone" and "good/adequate" for "soft tissue" visualization tasks. • CBCT image quality was equivalent/superior to MDCT for bone visualization tasks.
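The observer-agreement figures above (κ ~ 0.26-0.92) come from a kappa statistic. A minimal sketch of Cohen's kappa for two raters on a five-point scale; the ratings below are hypothetical illustrations, not data from the study:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b, n_categories):
    """Cohen's kappa, (p_o - p_e) / (1 - p_e), for ratings coded 1..n_categories."""
    conf = np.zeros((n_categories, n_categories))
    for a, b in zip(rater_a, rater_b):
        conf[a - 1, b - 1] += 1                    # joint rating counts
    conf /= conf.sum()
    p_o = np.trace(conf)                           # observed agreement
    p_e = conf.sum(axis=1) @ conf.sum(axis=0)      # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical five-point satisfaction ratings from two readers
rater_a = [4, 4, 5, 3, 4, 2, 5, 4]
rater_b = [4, 3, 5, 3, 4, 2, 4, 4]
print(round(cohens_kappa(rater_a, rater_b, 5), 2))  # substantial agreement on this toy set
```

Chance agreement is estimated from the marginal rating frequencies of each rater, so identical raw agreement can yield different kappas depending on how skewed the rating distributions are.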
Beyond the real world: attention debates in auditory mismatch negativity.
Chung, Kyungmi; Park, Jin Young
2018-04-11
The aim of this study was to address the potential for the auditory mismatch negativity (aMMN) to be used in applied event-related potential (ERP) studies by determining whether the aMMN would be an attention-dependent ERP component and could be differently modulated across visual tasks or virtual reality (VR) stimuli with different visual properties and visual complexity levels. A total of 80 participants, aged 19-36 years, were assigned to either a reading-task (21 men and 19 women) or a VR-task (22 men and 18 women) group. The two visual-task groups of healthy young adults were matched in age, sex, and handedness. All participants were instructed to focus only on the given visual tasks and ignore auditory change detection. While participants in the reading-task group read text slides, those in the VR-task group viewed three 360° VR videos in a random order and rated how visually complex the given virtual environment was immediately after each VR video ended. Although perceived visual complexity differed partially with the brightness of the virtual environments, neither visual property (distance or brightness) significantly modulated aMMN amplitudes. A further analysis compared the aMMN amplitudes elicited by a typical MMN task and an applied VR task. No significant difference in aMMN amplitudes was found across the two groups who completed visual tasks with different visual-task demands. In conclusion, the aMMN is a reliable ERP marker of preattentive cognitive processing for auditory deviance detection.
Social Experience Does Not Abolish Cultural Diversity in Eye Movements
Kelly, David J.; Jack, Rachael E.; Miellet, Sébastien; De Luca, Emanuele; Foreman, Kay; Caldara, Roberto
2011-01-01
Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and categorization of facial expressions of emotion) differ across cultural groups. Previous studies have typically asserted that culture itself is responsible for shaping the way we process visual information, yet this has never been directly investigated. In the current study, we assessed the relative contribution of genetic and cultural factors by testing face processing in a population of British Born Chinese adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of British Born Chinese adults deployed “Eastern” eye movement strategies, while approximately 25% of participants displayed “Western” strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across recognition and expression tasks. These findings suggest that “culture” alone cannot straightforwardly account for diversity in eye movement patterns. Instead a more complex understanding of how the environment and individual experiences can influence the mechanisms that govern visual processing is required. PMID:21886626
Three-dimensional rendering of segmented object using matlab - biomed 2010.
Anderson, Jeffrey R; Barrett, Steven F
2010-01-01
The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work has described a semi-automatic segmentation process for fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux. This split limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and to merge these capabilities into the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.
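The reassembly step described above, stacking 2D binary cross-sections into a 3D volume, can be sketched in a few lines. This is a Python/NumPy stand-in for the Matlab stage (the GUI and surface rendering are out of scope here), and the slice data is synthetic:

```python
import numpy as np

def stack_slices(slices):
    """Reassemble equally sized 2D binary masks into a 3D boolean volume (z, y, x)."""
    return np.stack([np.asarray(s, dtype=bool) for s in slices], axis=0)

# Synthetic cross-sections of an object that widens from one slice to the next
s1 = np.zeros((5, 5), int); s1[2, 2] = 1        # single voxel
s2 = np.zeros((5, 5), int); s2[1:4, 1:4] = 1    # 3x3 patch
vol = stack_slices([s1, s2])
print(vol.shape, int(vol.sum()))  # volume dimensions and occupied voxel count
```

A surface could then be fitted over `vol` (e.g. by an isosurface routine), but that step depends on the rendering toolkit and is not shown.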
Brocher, Andreas; Harbecke, Raphael; Graf, Tim; Memmert, Daniel; Hüttermann, Stefanie
2018-03-07
We tested the link between pupil size and the task effort involved in covert shifts of visual attention. The goal of this study was to establish pupil size as a marker of attentional shifting in the absence of luminance manipulations. In three experiments, participants evaluated two stimuli that were presented peripherally, appearing equidistant from and on opposite sides of eye fixation. The angle between eye fixation and the peripherally presented target stimuli varied from 12.5° to 42.5°. The evaluation of more distant stimuli led to poorer performance than did the evaluation of more proximal stimuli throughout our study, confirming that the former required more effort than the latter. In addition, in Experiment 1 we found that pupil size increased with increasing angle and that this effect could not be reduced to the operation of low-level visual processes in the task. In Experiment 2 the pupil dilated more strongly overall when participants evaluated the target stimuli, which required shifts of attention, than when they merely reported on the target's presence versus absence. Both conditions yielded larger pupils for more distant than for more proximal stimuli, however. In Experiment 3, we manipulated task difficulty more directly, by changing the contrast at which the target stimuli were presented. We replicated the results from Experiment 1 only with the high-contrast stimuli. With stimuli of low contrast, ceiling effects in pupil size were observed. Our data show that the link between task effort and pupil size can be used to track the degree to which an observer covertly shifts attention to or detects stimuli in peripheral vision.
NASA Astrophysics Data System (ADS)
Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo
2009-04-01
In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspect of the functional processing of visual attention. Although it has been known that the right posterior parietal cortex (PPC) has a role in certain visual search tasks, there is little knowledge about the temporal aspect of this area. Three visual search tasks of differing difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal aspect of the PPC's involvement in visual search, we applied various stimulus onset asynchronies (SOAs) and measured the reaction time of the visual search. The magnetic stimulation was applied to the right PPC or the left PPC with a figure-eight coil. The results show that the reaction times of the hard feature task are longer than those of the easy feature task. When SOA=150 ms, compared with the no-TMS condition, there was a significant increase in target-present reaction time when TMS pulses were applied. We concluded that the right PPC was involved in the visual search at about SOA=150 ms after visual stimulus presentation. Magnetic stimulation of the right PPC disturbed the processing of the visual search, whereas magnetic stimulation of the left PPC had no effect on it.
ERIC Educational Resources Information Center
Chevalier, Nicolas; Blaye, Agnes; Dufau, Stephane; Lucenet, Joanna
2010-01-01
This study investigated the visual information that children and adults consider while switching or maintaining object-matching rules. Eye movements of 5- and 6-year-old children and adults were collected with two versions of the Advanced Dimensional Change Card Sort, which requires switching between shape- and color-matching rules. In addition to…
Scientific Visualization of Radio Astronomy Data using Gesture Interaction
NASA Astrophysics Data System (ADS)
Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.
2015-09-01
MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multi-dimensional data is a well explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration which would in turn lead to certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A Heuristic Evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations. However, this is limited by users needing to remember the required gestures. In comparison, the touch-based gesture navigation is typically more familiar to users as these gestures were engineered from standard multi-touch actions. Future work will address a complete usability test to evaluate and compare the different interaction modalities against the different visualization tasks.
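The sub-volume extraction and arbitrary slicing tasks described above can be illustrated on a plain NumPy array standing in for a FITS spectral data cube (the VTK rendering and gesture layers are omitted; the array contents are synthetic, not MeerKAT data):

```python
import numpy as np

# Stand-in for a spectral cube with axes (channel, y, x)
cube = np.arange(4 * 6 * 6, dtype=float).reshape(4, 6, 6)

# Sub-volume extraction: channels 1-2, central 4x4 spatial window
sub = cube[1:3, 1:5, 1:5]

# Axis-aligned slice: one spectral channel viewed as a 2D image
channel_img = cube[2]

# Simple oblique slice: sample the plane z = x by nearest-neighbour lookup
z_idx = np.clip(np.arange(6), 0, cube.shape[0] - 1)
oblique = cube[z_idx, :, np.arange(6)]
print(sub.shape, channel_img.shape, oblique.shape)
```

In a real pipeline the cube would be read with a FITS library and handed to the renderer; the indexing logic for cut-outs and slices, however, is exactly this kind of array arithmetic.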
ERIC Educational Resources Information Center
Robert, Nicole D.; LeFevre, Jo-Anne
2013-01-01
Does solving subtraction problems with negative answers (e.g., 5-14) require different cognitive processes than solving problems with positive answers (e.g., 14-5)? In a dual-task experiment, young adults (N=39) combined subtraction with two working memory tasks, verbal memory and visual-spatial memory. All of the subtraction problems required…
Pasqualotti, Léa; Baccino, Thierry
2014-01-01
Most studies of online advertisements have indicated that they have a negative impact on users' cognitive processes, especially when they include colorful or animated banners and when they are close to the text to be read. In the present study we assessed the effects of two advertisement features (distance from the text and animation) on visual strategies during a word-search task and a reading-for-comprehension task using Web-like pages. We hypothesized that the closer the advertisement was to the target text, the more cognitive processing difficulties it would cause. We also hypothesized that (1) animated banners would be more disruptive than static advertisements and (2) banners would have more effect on word-search performance than on reading-for-comprehension performance. We used an automatic classifier to assess variations in the use of Scanning and Reading visual strategies during task performance. The results showed that the effect of dynamic and static advertisements on visual strategies varies according to the task. Fixation duration indicated that the closest advertisements slowed down information processing, but there was no difference between the intermediate (40 pixel) and far (80 pixel) distance conditions. Our findings suggest that advertisements have a negative impact on users' performance mostly when many cognitive resources are required, as in reading-for-comprehension. PMID:24672501
Task-specific reorganization of the auditory cortex in deaf humans
Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin
2017-01-01
The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964
The roles of stimulus repetition and hemispheric activation in visual half-field asymmetries.
Sullivan, K F; McKeever, W F
1985-10-01
Hardyck, Tzeng, and Wang (1978, Brain and Language, 5, 56-71) hypothesized that ample repetition of a small number of stimuli is required in order to obtain VHF differences in tachistoscopic tasks. Four experiments, with varied levels of repetition, were conducted to test this hypothesis. Three experiments utilized the general task of object-picture naming and one utilized a word-naming task. Naming latencies constituted the dependent measure. The results demonstrate that for the object-naming paradigm repetition is required for RVF superiority to emerge. Repetition was found to be unnecessary for RVF superiority in the word-naming paradigm, with repetition actually reducing RVF superiority. Experiment I suggested the possibility that RVF superiority developed for the second half of the trials as a function of practice or hemispheric activation, regardless of repetition level. Subsequent experiments, better designed to assess this possibility, clearly refuted it. It was concluded that the effect of repetition depends on the processing requirements of the task. We propose that, for tasks which can be processed efficiently by one hemisphere, the effect of repetition will be to reduce VHF asymmetries; but tasks requiring substantial processing by both hemispheres will show shifts to RVF superiority as a function of repetition.
Effects of age and eccentricity on visual target detection.
Gruber, Nicole; Müri, René M; Mosimann, Urs P; Bieri, Rahel; Aeschimann, Andrea; Zito, Giuseppe A; Urwyler, Prabitha; Nyffeler, Thomas; Nef, Tobias
2013-01-01
The aim of this study was to examine the effects of aging and target eccentricity on a visual search task comprising 30 images of everyday life projected into a hemisphere, realizing a ±90° visual field. The task, performed binocularly, allowed participants to freely move their eyes to scan the images for an appearing target or distractor stimulus (presented at 10°, 30°, and 50° eccentricity). The distractor stimulus required no response, while the target stimulus required acknowledgment by pressing the response button. One hundred and seventeen healthy subjects (mean age = 49.63 years, SD = 17.40 years, age range 20-78 years) were studied. The results show that target detection performance decreases with age as well as with increasing eccentricity, especially for older subjects. Reaction time also increases with age and eccentricity, but in contrast to target detection, there is no interaction between age and eccentricity. Eye movement analysis showed that younger subjects exhibited a passive search strategy while older subjects exhibited an active search strategy, probably as compensation for their reduced peripheral detection performance.
Vision and the representation of the surroundings in spatial memory
Tatler, Benjamin W.; Land, Michael F.
2011-01-01
One of the paradoxes of vision is that the world as it appears to us and the image on the retina at any moment are not much like each other. The visual world seems to be extensive and continuous across time. However, the manner in which we sample the visual environment is neither extensive nor continuous. How does the brain reconcile these differences? Here, we consider existing evidence from both static and dynamic viewing paradigms together with the logical requirements of any representational scheme that would be able to support active behaviour. While static scene viewing paradigms favour extensive, but perhaps abstracted, memory representations, dynamic settings suggest sparser and task-selective representation. We suggest that in dynamic settings where movement within extended environments is required to complete a task, the combination of visual input, egocentric and allocentric representations work together to allow efficient behaviour. The egocentric model serves as a coding scheme in which actions can be planned, but also offers a potential means of providing the perceptual stability that we experience. PMID:21242146
Goard, Michael J; Pho, Gerald N; Woodson, Jonathan; Sur, Mriganka
2016-01-01
Mapping specific sensory features to future motor actions is a crucial capability of mammalian nervous systems. We investigated the role of visual (V1), posterior parietal (PPC), and frontal motor (fMC) cortices for sensorimotor mapping in mice during performance of a memory-guided visual discrimination task. Large-scale calcium imaging revealed that V1, PPC, and fMC neurons exhibited heterogeneous responses spanning all task epochs (stimulus, delay, response). Population analyses demonstrated unique encoding of stimulus identity and behavioral choice information across regions, with V1 encoding stimulus, fMC encoding choice even early in the trial, and PPC multiplexing the two variables. Optogenetic inhibition during behavior revealed that all regions were necessary during the stimulus epoch, but only fMC was required during the delay and response epochs. Stimulus identity can thus be rapidly transformed into behavioral choice, requiring V1, PPC, and fMC during the transformation period, but only fMC for maintaining the choice in memory prior to execution. DOI: http://dx.doi.org/10.7554/eLife.13764.001 PMID:27490481
Watanabe, Tatsunori; Tsutou, Kotaro; Saito, Kotaro; Ishida, Kazuto; Tanabe, Shigeo; Nojima, Ippei
2016-11-01
Choice reaction requires response conflict resolution, and the resolution processes that occur during a choice stepping reaction task undertaken in a standing position, which requires maintenance of balance, may be different to those processes occurring during a choice reaction task performed in a seated position. The study purpose was to investigate the resolution processes during a choice stepping reaction task at the cortical level using electroencephalography and compare the results with a control task involving ankle dorsiflexion responses. Twelve young adults either stepped forward or dorsiflexed the ankle in response to a visual imperative stimulus presented on a computer screen. We used the Simon task and examined the error-related negativity (ERN) that follows an incorrect response and the correct-response negativity (CRN) that follows a correct response. Error was defined as an incorrect initial weight transfer for the stepping task and as an incorrect initial tibialis anterior activation for the control task. Results revealed that ERN and CRN amplitudes were similar in size for the stepping task, whereas the amplitude of ERN was larger than that of CRN for the control task. The ERN amplitude was also larger in the stepping task than the control task. These observations suggest that a choice stepping reaction task involves a strategy emphasizing post-response conflict and general performance monitoring of actual and required responses and also requires greater cognitive load than a choice dorsiflexion reaction. The response conflict resolution processes appear to be different for stepping tasks and reaction tasks performed in a seated position.
The effects of task difficulty and resource requirements on attention strategies
NASA Technical Reports Server (NTRS)
King, Teresa
1991-01-01
Experimental results on attention strategies under varying task difficulty and resource demands are presented and analyzed. They support the hypothesis that subjects may adopt an alternating strategy (rather than a concurrent one) when compelled to do so by either the size or the complexity of a visual display. According to the multiple resource model, if subjects had been performing the two tasks concurrently, the cost of this strategy would have been shown by a decrement in the spatial format, rather than the verbal format, due to competition for the same resource. Subjects may apply different strategies as a function of task difficulty and/or resource demand.
WISP information display system user's manual
NASA Technical Reports Server (NTRS)
Alley, P. L.; Smith, G. R.
1978-01-01
The wind shears program (WISP) supports the collection of data on magnetic tape for permanent storage or analysis. The document structure provides: (1) the hardware and software configuration required to execute the WISP system and the start-up procedure from a power-down condition; (2) the data collection task, calculations performed on the incoming data, and a description of the magnetic tape format; (3) the data display task and examples of displays obtained from execution of the real-time simulation program; and (4) the raw data dump task and examples of operator actions required to obtain the desired format. The procedures outlined herein will allow continuous data collection at the expense of real-time visual displays.
Interference with olfactory memory by visual and verbal tasks.
Annett, J M; Cook, N M; Leslie, J C
1995-06-01
It has been claimed that olfactory memory is distinct from memory in other modalities. This study investigated the effectiveness of visual and verbal tasks in interfering with olfactory memory and included methodological changes from other recent studies. Subjects were allocated to one of four experimental conditions involving interference tasks [no interference task; visual task; verbal task; visual-plus-verbal task] and were presented with 15 target odours. Either recognition of the odours or free recall of the odour names was tested on one occasion, either within 15 minutes of presentation or one week later. Recognition and recall performance both showed interference effects of the visual and verbal tasks, but there was no effect of time of testing. While the results may be accommodated within a dual coding framework, further work is indicated to resolve theoretical issues relating to task complexity.
Poole, Bradley J; Kane, Michael J
2009-07-01
Variation in working-memory capacity (WMC) predicts individual differences in only some attention-control capabilities. Whereas higher WMC subjects outperform lower WMC subjects in tasks requiring the restraint of prepotent but inappropriate responses, and the constraint of attentional focus to target stimuli against distractors, they do not differ in prototypical visual-search tasks, even those that yield steep search slopes and engender top-down control. The present three experiments tested whether WMC, as measured by complex memory span tasks, would predict search latencies when the 1-8 target locations to be searched appeared alone, versus appearing among distractor locations to be ignored, with the latter requiring selective attentional focus. Subjects viewed target-location cues and then fixated on those locations over either long (1,500-1,550 ms) or short (300 ms) delays. Higher WMC subjects identified targets faster than did lower WMC subjects only in the presence of distractors and only over long fixation delays. WMC thus appears to affect subjects' ability to maintain a constrained attentional focus over time.
Kane, Michael J; Poole, Bradley J; Tuholski, Stephen W; Engle, Randall W
2006-07-01
The executive attention theory of working memory capacity (WMC) proposes that measures of WMC broadly predict higher order cognitive abilities because they tap important and general attention capabilities (R. W. Engle & M. J. Kane, 2004). Previous research demonstrated WMC-related differences in attention tasks that required restraint of habitual responses or constraint of conscious focus. To further specify the executive attention construct, the present experiments sought boundary conditions of the WMC-attention relation. Three experiments correlated individual differences in WMC, as measured by complex span tasks, and executive control of visual search. In feature-absence search, conjunction search, and spatial configuration search, WMC was unrelated to search slopes, although they were large and reliably measured. Even in a search task designed to require the volitional movement of attention (J. M. Wolfe, G. A. Alvarez, & T. S. Horowitz, 2000), WMC was irrelevant to performance. Thus, WMC is not associated with all demanding or controlled attention processes, which poses problems for some general theories of WMC. Copyright 2006 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Gomes, Gary G.
1986-05-01
A cost-effective and supportable color visual system has been developed to provide the necessary visual cues to United States Air Force B-52 bomber pilots training to become proficient at the task of inflight refueling. This camera-model visual system approach is not suitable for all simulation applications, but provides a cost-effective alternative to digital image generation systems when high fidelity of a single movable object is required. The system consists of a three-axis gimballed KC-135 tanker model, a range-carriage-mounted color-augmented monochrome television camera, interface electronics, a color light valve projector and an infinity optics display system.
Fitts’ Law in the Control of Isometric Grip Force With Naturalistic Targets
Thumser, Zachary C.; Slifkin, Andrew B.; Beckler, Dylan T.; Marasco, Paul D.
2018-01-01
Fitts’ law models the relationship between amplitude, precision, and speed of rapid movements. It is widely used to quantify performance in pointing tasks, study human-computer interaction, and generally to understand perceptual-motor information processes, including research to model performance in isometric force production tasks. Applying Fitts’ law to an isometric grip force task would allow for quantifying grasp performance in rehabilitative medicine and may aid research on prosthetic control and design. We examined whether Fitts’ law would hold when participants attempted to accurately produce their intended force output while grasping a manipulandum when presented with images of various everyday objects (we termed this the implicit task). Although our main interest was the implicit task, to benchmark it and establish validity, we examined performance against a more standard visual feedback condition via a digital force-feedback meter on a video monitor (explicit task). Next, we progressed from visual force feedback with force meter targets to the same targets without visual force feedback (operating largely on feedforward control with tactile feedback). This provided an opportunity to see if Fitts’ law would hold without vision, and allowed us to progress toward the more naturalistic implicit task (which does not include visual feedback). Finally, we changed the nature of the targets from requiring explicit force values presented as arrows on a force-feedback meter (explicit targets) to the more naturalistic and intuitive target forces implied by images of objects (implicit targets). With visual force feedback the relation between task difficulty and the time to produce the target grip force was predicted by Fitts’ law (average r2 = 0.82). Without vision, average grip force scaled accurately although force variability was insensitive to the target presented. 
In contrast, images of everyday objects generated more reliable grip forces without the visualized force meter. In sum, population means were well-described by Fitts’ law for explicit targets with vision (r2 = 0.96) and implicit targets (r2 = 0.89), but not as well-described for explicit targets without vision (r2 = 0.54). Implicit targets should provide a realistic see-object-squeeze-object test using Fitts’ law to quantify the relative speed-accuracy relationship of any given grasper. PMID:29773999
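The relation described above, Fitts' law, models the time to hit a target as a linear function of an index of difficulty (ID). A minimal sketch in Python, using the common Shannon formulation with hypothetical amplitudes, tolerances, and times (the study's actual targets and data are not reproduced here), shows how the r2 values reported above are typically obtained:

```python
import numpy as np

def index_of_difficulty(amplitude, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return np.log2(2 * amplitude / width)

# Hypothetical force targets: amplitude = target force (N), width = tolerance (N)
amplitudes = np.array([2.0, 4.0, 8.0, 16.0])
widths = np.array([0.5, 0.5, 1.0, 1.0])
ids = index_of_difficulty(amplitudes, widths)

# Hypothetical times (s) to reach and hold each target force
mt = np.array([0.42, 0.55, 0.58, 0.71])

# Fit MT = a + b * ID by least squares; r2 quantifies how well Fitts' law holds
b, a = np.polyfit(ids, mt, 1)
pred = a + b * ids
r2 = 1 - np.sum((mt - pred) ** 2) / np.sum((mt - np.mean(mt)) ** 2)
```

A high r2 from such a fit, as in the abstract's explicit-target conditions, indicates that production time scales lawfully with task difficulty; a low r2, as without visual feedback, indicates the speed-accuracy tradeoff has broken down.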
Non-visual spatial tasks reveal increased interactions with stance postural control.
Woollacott, Marjorie; Vander Velde, Timothy
2008-05-07
The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. Experiments determined the extent to which modality (visual vs. auditory) and code (non-spatial vs. spatial) specific cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object and auditory-spatial) would display contrasting levels of interactions with tandem Romberg stance postural control, and that interactions within the spatial domain would be revealed as most vulnerable to dual-task interactions. Across all cognitive tasks employed, including auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced correlated increases in verbal reaction time measures. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments) the performance of challenging auditory-spatial tasks produced significantly greater levels of postural sway than either the auditory-object or visual-object based tasks. These results suggest that it is the employment of limited non-visual spatially based coding resources that may underlie previously observed visual dual-task interference effects with stance postural control in healthy young adults.
How does cognitive load influence speech perception? An encoding hypothesis.
Mitterer, Holger; Mattys, Sven L
2017-01-01
Two experiments investigated the conditions under which cognitive load exerts an effect on the acuity of speech perception. These experiments extend earlier research by using a different speech perception task (four-interval oddity task) and by implementing cognitive load through a task often thought to be modular, namely, face processing. In the cognitive-load conditions, participants were required to remember two faces presented before the speech stimuli. In Experiment 1, performance in the speech-perception task under cognitive load was not impaired in comparison to a no-load baseline condition. In Experiment 2, we modified the load condition minimally such that it required encoding of the two faces simultaneously with the speech stimuli. As a reference condition, we also used a visual search task that in earlier experiments had led to poorer speech perception. Both concurrent tasks led to decrements in the speech task. The results suggest that speech perception is affected even by loads thought to be processed modularly, and that, critically, encoding in working memory might be the locus of interference.
Is cross-modal integration of emotional expressions independent of attentional resources?
Vroomen, J; Driver, J; de Gelder, B
2001-12-01
In this study, we examined whether integration of visual and auditory information about emotions requires limited attentional resources. Subjects judged whether a voice expressed happiness or fear, while trying to ignore a concurrently presented static facial expression. As an additional task, the subjects had to add two numbers together rapidly (Experiment 1), count the occurrences of a target digit in a rapid serial visual presentation (Experiment 2), or judge the pitch of a tone as high or low (Experiment 3). The visible face had an impact on judgments of the emotion of the heard voice in all the experiments. This cross-modal effect was independent of whether or not the subjects performed a demanding additional task. This suggests that integration of visual and auditory information about emotions may be a mandatory process, unconstrained by attentional resources.
Is Mc Leod's Patent Pending Naturoptic Method for Restoring Healthy Vision Easy and Verifiable?
NASA Astrophysics Data System (ADS)
Niemi, Paul; McLeod, David; McLeod, Roger
2006-10-01
RDM asserts that he and people he has trained can assign visual tasks from standard vision assessment charts, or better replacements, proceeding through incremental changes and such rapid improvements that healthy vision can be restored. Mc Leod predicts that in visual tasks with pupil diameter changes, wavelengths change proportionally. A longer, quasimonochromatic wavelength interval is coincident with foveal cones, and rods. A shorter, partially overlapping interval separately aligns with extrafoveal cones. Wavelengths follow the Airy disk radius formula. Niemi can evaluate if it is true that visual health merely requires triggering and facilitating the demands of possibly overridden feedback signals. The method and process are designed so that potential Naturopathic and other select graduate students should be able to self-fund their higher-level educations from preferential franchising arrangements of earnings while they are in certain programs.
Secondary visual workload capability with primary visual and kinesthetic-tactual displays
NASA Technical Reports Server (NTRS)
Gilson, R. D.; Burke, M. W.; Jagacinski, R. J.
1978-01-01
Subjects performed a cross-adaptive tracking task with a visual secondary display and either a visual or a quickened kinesthetic-tactual (K-T) primary display. The quickened K-T display resulted in superior secondary task performance. Comparisons of secondary workload capability with integrated and separated visual displays indicated that the superiority of the quickened K-T display was not simply due to the elimination of visual scanning. When subjects did not have to perform a secondary task, there was no significant difference between visual and quickened K-T displays in performing a critical tracking task.
Visual cue-specific craving is diminished in stressed smokers.
Cochran, Justinn R; Consedine, Nathan S; Lee, John M J; Pandit, Chinmay; Sollers, John J; Kydd, Robert R
2017-09-01
Craving among smokers is increased by stress and exposure to smoking-related visual cues. However, few experimental studies have tested both elicitors concurrently and considered how exposures may interact to influence craving. The current study examined craving in response to stress and visual cue exposure, separately and in succession, in order to better understand the relationship between craving elicitation and the elicitor. Thirty-nine smokers (21 males) who forwent smoking for 30 minutes were randomized to complete a stress task and a visual cue task in counterbalanced orders (creating the experimental groups); for the cue task, counterbalanced blocks of neutral, motivational control, and smoking images were presented. Self-reported craving was assessed after each block of visual stimuli and stress task, and after a recovery period following each task. As expected, the stress and smoking images generated greater craving than neutral or motivational control images (p < .001). Interactions indicated craving in those who completed the stress task first differed from those who completed the visual cues task first (p < .05), such that stress task craving was greater than all image type craving (all p's < .05) only if the visual cue task was completed first. Conversely, craving was stable across image types when the stress task was completed first. Findings indicate when smokers are stressed, visual cues have little additive effect on craving, and different types of visual cues elicit comparable craving. These findings may imply that once stressed, smokers will crave cigarettes comparably notwithstanding whether they are exposed to smoking image cues.
Recognition and reading aloud of kana and kanji word: an fMRI study.
Ino, Tadashi; Nakai, Ryusuke; Azuma, Takashi; Kimura, Toru; Fukuyama, Hidenao
2009-03-16
It has been proposed that different brain regions are recruited for processing the two Japanese writing systems, kanji (morphograms) and kana (syllabograms). However, this difference may depend on the type of word used and on the type of task performed. Using fMRI, we investigated brain activation for processing kanji and kana words of similarly high familiarity in two tasks: word recognition and reading aloud. During both tasks, words and non-words were presented side by side, and the subjects were required to press a button corresponding to the real word in the word recognition task and to read aloud the real word in the reading aloud task. Brain activations were similar between kanji and kana during the reading aloud task, whereas during the word recognition task, in which accurate identification and selection were required, kanji relative to kana activated regions of bilateral frontal, parietal and occipitotemporal cortices, all of which are related mainly to visual word-form analysis and visuospatial attention. Concerning the difference in brain activity between the two tasks, differential activation was found only in regions associated with task-specific sensorimotor processing for kana, whereas the visuospatial attention network also showed greater activation during the word recognition task than during the reading aloud task for kanji. We conclude that the differences in brain activation between kanji and kana depend on the interaction between script characteristics and task demands.
Multi-modal information processing for visual workload relief
NASA Technical Reports Server (NTRS)
Burke, M. W.; Gilson, R. D.; Jagacinski, R. J.
1980-01-01
The simultaneous performance of two single-dimensional compensatory tracking tasks, one with the left hand and one with the right hand, is discussed. The tracking performed with the left hand was considered the primary task and was performed with a visual display or a quickened kinesthetic-tactual (KT) display. The right-handed tracking was considered the secondary task and was carried out only with a visual display. Although the two primary task displays had afforded equivalent performance in a critical tracking task performed alone, in the dual-task situation the quickened KT primary display resulted in superior secondary visual task performance. Comparisons of various combinations of primary and secondary visual displays in integrated or separated formats indicate that the superiority of the quickened KT display is not simply due to the elimination of visual scanning. Additional testing indicated that quickening per se also is not the immediate cause of the observed KT superiority.
Training eye movements for visual search in individuals with macular degeneration
Janssen, Christian P.; Verghese, Preeti
2016-01-01
We report a method to train individuals with central field loss due to macular degeneration to improve the efficiency of visual search. Our method requires participants to make a same/different judgment on two simple silhouettes. One silhouette is presented in an area that falls within the binocular scotoma while they are fixating the center of the screen with their preferred retinal locus (PRL); the other silhouette is presented diametrically opposite within the intact visual field. Over the course of 480 trials (approximately 6 hr), we gradually reduced the amount of time that participants have to make a saccade and judge the similarity of stimuli. This requires that they direct their PRL first toward the stimulus that is initially hidden behind the scotoma. Results from nine participants show that all participants could complete the task faster with training without sacrificing accuracy on the same/different judgment task. Although a majority of participants were able to direct their PRL toward the initially hidden stimulus, the ability to do so varied between participants. Specifically, six of nine participants made faster saccades with training. A smaller set (four of nine) made accurate saccades inside or close to the target area and retained this strategy 2 to 3 months after training. Subjective reports suggest that training increased awareness of the scotoma location for some individuals. However, training did not transfer to a different visual search task. Nevertheless, our study suggests that increasing scotoma awareness and training participants to look toward their scotoma may help them acquire missing information. PMID:28027382
A rodent brain-machine interface paradigm to study the impact of paraplegia on BMI performance.
Bridges, Nathaniel R; Meyers, Michael; Garcia, Jonathan; Shewokis, Patricia A; Moxon, Karen A
2018-05-31
Most brain-machine interfaces (BMIs) focus on upper-body function in non-injured animals, not addressing the lower-limb functional needs of those with paraplegia. A need exists for a novel BMI task that engages the lower body and takes advantage of well-established rodent spinal cord injury (SCI) models to study methods to improve BMI performance. A tilt BMI task was designed that randomly applies different types of tilts to a platform, decodes the tilt type applied, and rights the platform if the decoder correctly classifies the tilt type. The task was tested on female rats and is relatively natural, such that it does not require the animal to learn a new skill. It is self-rewarding, such that there is no need for additional rewards, eliminating food or water restriction, which can be especially hard on spinalized rats. Finally, task difficulty can be tuned by adjusting the tilt parameters. This novel BMI task bilaterally engages the cortex without visual feedback regarding limb position in space, and animals learn to improve their performance both pre- and post-SCI. Comparison with Existing Methods: Most BMI tasks primarily engage one hemisphere, are upper-body, rely heavily on visual feedback, do not perform investigations in animal models of SCI, and require non-naturalistic extrinsic motivation such as water rewards for performance improvement. Our task addresses these gaps. The BMI paradigm presented here will enable researchers to investigate the interaction of plasticity after SCI and plasticity during BMI training on performance. Copyright © 2018. Published by Elsevier B.V.
The effects of combined caffeine and glucose drinks on attention in the human brain.
Rao, Anling; Hu, Henglong; Nobre, Anna Christina
2005-06-01
The objective of this research was to measure the effects of energising drinks containing caffeine and glucose upon mental activity during sustained selective attention. Non-invasive electrophysiological brain recordings were made during a behavioural study of selective attention in which participants received either energising or placebo drinks. We tested specifically whether energising drinks have significant effects upon behavioural measures of performance during a task requiring sustained visual selective attention, as well as on accompanying components of the event-related potentials (ERPs) related to information processing in the brain. Forty healthy volunteers were blindly assigned to receive either the energising drink or a similar-tasting placebo drink. The behavioural task involved identifying a predefined target stimulus among rapidly presented streams of peripheral visual stimuli, and making speeded motor responses to this stimulus. During task performance, accuracy, reaction times and ongoing brain activity were stored for analysis. The energising drink enhanced behavioural performance both in terms of accuracy and speed of reactions. The energising drink also had significant effects upon the event-related potentials. Effects started from the enhancement of the earliest components (C1/P1), reflecting early visual cortical processing in the energising-drink group relative to the placebo group over the contralateral scalp. The later N1, N2 and P3 components related to decision-making and responses were also modulated by the energising drink. Energising drinks containing caffeine and glucose can enhance behavioural performance during demanding tasks requiring selective attention. The behavioural benefits are coupled to direct effects upon neural information processing.
Gaze shifts and fixations dominate gaze behavior of walking cats
Rivers, Trevor J.; Sirota, Mikhail G.; Guttentag, Andrew I.; Ogorodnikov, Dmitri A.; Shah, Neet A.; Beloozerova, Irina N.
2014-01-01
Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5 m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body’s speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats’ gaze behavior during all locomotor tasks, jointly occupying 62–84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to them, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior “gaze stepping”. Each gaze shift took gaze to a site approximately 75–80 cm in front of the cat, which the cat reached in 0.7–1.2 s and 1.1–1.6 strides. Constant gaze occupied only 5–21% of the time cats spent looking at the walking surface. PMID:24973656
Coherence and interlimb force control: Effects of visual gain.
Kang, Nyeonju; Cauraugh, James H
2018-03-06
Neural coupling across hemispheres and homologous muscles often appears during bimanual motor control. Force coupling in a specific frequency domain may indicate specific bimanual force coordination patterns. This study investigated coherence between pairs of bimanual isometric index finger forces while manipulating visual gain and task asymmetry conditions. We used two visual gain conditions (low and high gain = 8 and 512 pixels/N), and created task asymmetry by manipulating coefficient ratios imposed on the left and right index finger forces (0.4:1.6; 1:1; 1.6:0.4, respectively). Unequal coefficient ratios required different contributions from each hand to the bimanual force task, resulting in force asymmetry. Fourteen healthy young adults performed bimanual isometric force control at 20% of the maximal summed force of both fingers. We quantified peak coherence and relative phase angle between hands at 0-4, 4-8, and 8-12 Hz, and estimated a signal-to-noise ratio of bimanual forces. The findings revealed higher peak coherence and relative phase angle at 0-4 Hz than at 4-8 and 8-12 Hz for both visual gain conditions. Further, peak coherence and relative phase angle values at 0-4 Hz were larger at the high gain than at the low gain. At the high gain, higher peak coherence at 0-4 Hz collapsed across task asymmetry conditions significantly predicted a greater signal-to-noise ratio. These findings indicate that a greater level of visual information facilitates bimanual force coupling at a specific frequency range related to sensorimotor processing. Copyright © 2018 Elsevier B.V. All rights reserved.
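The frequency-banded coherence analysis described above can be illustrated with a short sketch. This is an assumption-laden example using simulated force signals and a Welch-based magnitude-squared coherence estimate, not the study's actual recordings, pipeline, or parameters:

```python
import numpy as np
from scipy.signal import coherence

fs = 100.0                        # hypothetical force sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)      # 30 s trial
rng = np.random.default_rng(0)

# Two simulated index-finger forces sharing a slow (2 Hz) common drive
common = np.sin(2 * np.pi * 2 * t)
left = common + 0.5 * rng.standard_normal(t.size)
right = common + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence between hands, estimated by Welch's method
f, cxy = coherence(left, right, fs=fs, nperseg=256)

# Peak coherence within the 0-4 Hz band, the band of interest above
band = (f > 0) & (f <= 4)
peak_low = cxy[band].max()
```

Because the common drive sits at 2 Hz, peak coherence in the 0-4 Hz band is high while higher bands reflect only independent noise, mirroring the band-specific coupling pattern the abstract reports.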
Pasqualotto, Achille; Esenkaya, Tayfun
2016-01-01
Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into equivalent auditory images, or “soundscapes”. Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution, and then a judgment of relative direction (JRD) task was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications for improving training methods with sensory substitution devices (SSDs). PMID:27148000
Bindings in working memory: The role of object-based attention.
Gao, Zaifeng; Wu, Fan; Qiu, Fangfang; He, Kaifeng; Yang, Yue; Shen, Mowei
2017-02-01
Over the past decade, it has been debated whether retaining bindings in working memory (WM) requires more attention than retaining constituent features, focusing on domain-general attention and space-based attention. Recently, we proposed that retaining bindings in WM needs more object-based attention than retaining constituent features (Shen, Huang, & Gao, 2015, Journal of Experimental Psychology: Human Perception and Performance, doi: 10.1037/xhp0000018 ). However, only unitized visual bindings were examined; to establish the role of object-based attention in retaining bindings in WM, more empirical evidence is required. We tested 4 new bindings that had been suggested to require no more attention than their constituent features in the WM maintenance phase: the two constituent features of the binding were stored in different WM modules (cross-module binding, Experiment 1), came from auditory and visual modalities (cross-modal binding, Experiment 2), or were temporally (cross-time binding, Experiment 3) or spatially (cross-space binding, Experiments 4-6) separated. In the critical condition, we added a secondary object feature-report task during the delay interval of the change-detection task, such that the secondary task competed for object-based attention with the to-be-memorized stimuli. If more object-based attention is required for retaining bindings than for retaining constituent features, the secondary task should impair binding performance to a larger degree relative to the performance for constituent features. Indeed, Experiments 1-6 consistently revealed a significantly larger impairment for bindings than for the constituent features, suggesting that object-based attention plays a pivotal role in retaining bindings in WM.
Repetition priming of face recognition in a serial choice reaction-time task.
Roberts, T; Bruce, V
1989-05-01
Marshall & Walker (1987) found that pictorial stimuli yield visual priming that is disrupted by an unpredictable visual event in the response-stimulus interval. They argue that visual stimuli are represented in memory in the form of distinct visual and object codes. Bruce & Young (1986) propose similar pictorial, structural and semantic codes which mediate the recognition of faces, yet repetition priming results obtained with faces as stimuli (Bruce & Valentine, 1985), and with objects (Warren & Morton, 1982), are quite different from those of Marshall & Walker (1987), in the sense that recognition is facilitated by pictures presented 20 minutes earlier. The experiment reported here used different views of familiar and unfamiliar faces as stimuli in a serial choice reaction-time task and found that, with identical pictures, repetition priming survives an intervening item requiring a response, with both familiar and unfamiliar faces. Furthermore, with familiar faces such priming was present even when the view of the prime was different from the target. The theoretical implications of these results are discussed.
Age-related changes in event-cued visual and auditory prospective memory proper.
Uttl, Bob
2006-06-01
We rely upon prospective memory proper (ProMP) to bring back to awareness previously formed plans and intentions at the right place and time, and to enable us to act upon those plans and intentions. To examine age-related changes in ProMP, younger and older participants made decisions about simple stimuli (ongoing task) and at the same time were required to respond to a ProM cue, either a picture (visually cued ProM test) or a sound (auditorily cued ProM test), embedded in a simultaneously presented series of similar stimuli (either pictures or sounds). The cue display size or loudness increased across trials until a response was made. The cue size and cue loudness at the time of response indexed ProMP. The main results showed that both visual and auditory ProMP declined with age, and that such declines were mediated by age declines in sensory functions (visual acuity and hearing level), processing resources, working memory, intelligence, and ongoing task resource allocation.
Assistive obstacle detection and navigation devices for vision-impaired users.
Ong, S K; Zhang, J; Nee, A Y C
2013-09-01
Quality of life for the visually impaired is an urgent worldwide issue that needs to be addressed. Obstacle detection is one of the most important navigation tasks for the visually impaired. A novel range-sensor placement scheme is proposed in this paper for the development of obstacle detection devices. Based on this scheme, two prototypes have been developed, targeting different user groups. This paper discusses the design issues, functional modules and evaluation tests carried out for both prototypes. Implications for Rehabilitation: The problem of visual impairment is becoming more severe due to the worldwide ageing population. Individuals with visual impairment require assistance from assistive devices in daily navigation tasks. Traditional assistive devices that assist navigation may have certain drawbacks, such as the limited sensing range of a white cane. Obstacle detection devices applying range-sensor technology can identify road conditions over a longer sensing range to notify users of potential dangers in advance.
Superior haptic-to-visual shape matching in autism spectrum disorders.
Nakano, Tamami; Kato, Nobumasa; Kitazawa, Shigeru
2012-04-01
A weak central coherence theory in autism spectrum disorder (ASD) proposes that a cognitive bias toward local processing in ASD derives from a weakness in integrating local elements into a coherent whole. Using this theory, we hypothesized that shape perception through active touch, which requires sequential integration of sensorimotor traces of exploratory finger movements into a shape representation, would be impaired in ASD. Contrary to our expectation, adults with ASD showed superior performance in a haptic-to-visual delayed shape-matching task compared to adults without ASD. Accuracy in discriminating haptic lengths or haptic orientations, which lies within the somatosensory modality, did not differ between adults with ASD and adults without ASD. Moreover, this superior ability in inter-modal haptic-to-visual shape matching was not explained by the score in a unimodal visuospatial rotation task. These results suggest that individuals with ASD are not impaired in integrating sensorimotor traces into a global visual shape and that their multimodal shape representations and haptic-to-visual information transfer are more accurate than those of individuals without ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.
Visual search for feature and conjunction targets with an attention deficit.
Arguin, M; Joanette, Y; Cavanagh, P
1993-01-01
Brain-damaged subjects who had previously been identified as suffering from a visual attention deficit for contralesional stimulation were tested on a series of visual search tasks. The experiments examined the hypothesis that the processing of single features is preattentive but that feature integration, necessary for the correct perception of conjunctions of features, requires attention (Treisman & Gelade, 1980; Treisman & Sato, 1990). Subjects searched for a feature target (orientation or color) or for a conjunction target (orientation and color) in unilateral displays in which the number of items presented was variable. Ocular fixation was controlled so that trials on which eye movements occurred were cancelled. While brain-damaged subjects with a visual attention disorder (VAD subjects) performed similarly to normal controls in feature search tasks, they showed a marked deficit in conjunction search. Specifically, VAD subjects exhibited a substantial reduction of their serial search rates for a conjunction target with contralesional displays. In support of Treisman's feature integration theory, a visual attention deficit leads to a marked impairment in feature integration whereas it does not appear to affect feature encoding.
Dissociating 'what' and 'how' in visual form agnosia: a computational investigation.
Vecera, S P
2002-01-01
Patients with visual form agnosia exhibit a profound impairment in shape perception (what an object is) coupled with intact visuomotor functions (how to act on an object), demonstrating a dissociation between visual perception and action. How can these patients act on objects that they cannot perceive? Although two explanations of this 'what-how' dissociation have been offered, each explanation has shortcomings. A 'pathway information' account of the 'what-how' dissociation is presented in this paper. This account hypothesizes that 'where' and 'how' tasks require less information than 'what' tasks, thereby allowing 'where/how' to remain relatively spared in the face of neurological damage. Simulations with a neural network model test the predictions of the pathway information account. Following damage to an input layer common to the 'what' and 'where/how' pathways, the model performs object identification more poorly than spatial localization. Thus, the model offers a parsimonious explanation of differential 'what-how' performance in visual form agnosia. The simulation results are discussed in terms of their implications for visual form agnosia and other neuropsychological syndromes.
The modality effect of ego depletion: Auditory task modality reduces ego depletion.
Li, Qiong; Wang, Zhenhong
2016-08-01
The phenomenon whereby an initial act of self-control impairs subsequent acts of self-control is called ego depletion, and it has been observed consistently. The modality effect refers to the effect of the presentation modality on the processing of stimuli, and it too has been found robustly in a large body of research. However, no study to date has examined the modality effects of ego depletion. This issue was addressed in the current study. In Experiment 1, after all participants completed a handgrip task, participants in one group completed a visual attention regulation task while those in the other group completed an auditory attention regulation task; all participants then completed a handgrip task again. The ego depletion phenomenon was observed in both the visual and the auditory attention regulation task. Moreover, participants who completed the visual task performed worse on the handgrip task than participants who completed the auditory task, indicating greater ego depletion in the visual task condition. In Experiment 2, participants completed an initial task that either did or did not deplete self-control resources, and then they completed a second visual or auditory attention control task. The results indicated that depleted participants performed better on the auditory attention control task than on the visual attention control task. These findings suggest that altering task modality may reduce ego depletion. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
An analysis of the processing requirements of a complex perceptual-motor task
NASA Technical Reports Server (NTRS)
Kramer, A. F.; Wickens, C. D.; Donchin, E.
1983-01-01
Current concerns in the assessment of mental workload are discussed, and the event-related brain potential (ERP) is introduced as a promising mental-workload index. Subjects participated in a series of studies in which they were required to perform a target acquisition task while also covertly counting either auditory or visual probes. The effects of several task-difficulty manipulations on the P300 component of the ERP elicited by the counted stimulus probes were investigated. With sufficiently practiced subjects the amplitude of the P300 was found to decrease with increases in task difficulty. The second experiment also provided evidence that the P300 is selectively sensitive to task-relevant attributes. A third experiment demonstrated a convergence in the amplitude of the P300s elicited in the simple and difficult versions of the tracking task. The amplitude of the P300 was also found to covary with the measures of tracking performance. The results of the series of three experiments illustrate the sensitivity of the P300 to the processing requirements of a complex target acquisition task. The findings are discussed in terms of the multidimensional nature of processing resources.
ERIC Educational Resources Information Center
Jarrold, Christopher; Gilchrist, Iain D.; Bender, Alison
2005-01-01
Individuals with autism show relatively strong performance on tasks that require them to identify the constituent parts of a visual stimulus. This is assumed to be the result of a bias towards processing the local elements in a display that follows from a weakened ability to integrate information at the global level. The results of the current…
Nguyen, Ngan; Mulla, Ali; Nelson, Andrew J; Wilson, Timothy D
2014-01-01
The present study explored the problem-solving strategies of high- and low-spatial visualization ability learners on a novel spatial anatomy task to determine whether differences in strategies contribute to differences in task performance. The results of this study provide further insights into the processing commonalities and differences among learners beyond the classification of spatial visualization ability alone, and help elucidate what, if anything, high- and low-spatial visualization ability learners do differently while solving spatial anatomy task problems. Forty-two students completed a standardized measure of spatial visualization ability, a novel spatial anatomy task, and a questionnaire involving personal self-analysis of the processes and strategies used while performing the spatial anatomy task. Strategy reports revealed that there were different ways students approached answering the spatial anatomy task problems. However, chi-square test analyses established that differences in problem-solving strategies did not contribute to differences in task performance. Therefore, underlying spatial visualization ability is the main source of variation in spatial anatomy task performance, irrespective of strategy. In addition to scoring higher and spending less time on the anatomy task, participants with high spatial visualization ability were also more accurate when solving the task problems. © 2013 American Association of Anatomists.
James, Ella L; Bonsall, Michael B; Hoppitt, Laura; Tunbridge, Elizabeth M; Geddes, John R; Milton, Amy L; Holmes, Emily A
2015-08-01
Memory of a traumatic event becomes consolidated within hours. Intrusive memories can then flash back repeatedly into the mind's eye and cause distress. We investigated whether reconsolidation-the process during which memories become malleable when recalled-can be blocked using a cognitive task and whether such an approach can reduce these unbidden intrusions. We predicted that reconsolidation of a reactivated visual memory of experimental trauma could be disrupted by engaging in a visuospatial task that would compete for visual working memory resources. We showed that intrusive memories were virtually abolished by playing the computer game Tetris following a memory-reactivation task 24 hr after initial exposure to experimental trauma. Furthermore, both memory reactivation and playing Tetris were required to reduce subsequent intrusions (Experiment 2), consistent with reconsolidation-update mechanisms. A simple, noninvasive cognitive-task procedure administered after emotional memory has already consolidated (i.e., > 24 hours after exposure to experimental trauma) may prevent the recurrence of intrusive memories of those emotional events. © The Author(s) 2015.
Baumann, Oliver; Skilleter, Ashley J.; Mattingley, Jason B.
2011-01-01
The goal of the present study was to examine the extent to which working memory supports the maintenance of object locations during active spatial navigation. Participants were required to navigate a virtual environment and to encode the location of a target object. In the subsequent maintenance period they performed one of three secondary tasks that were designed to selectively load visual, verbal or spatial working memory subsystems. Thereafter participants re-entered the environment and navigated back to the remembered location of the target. We found that while navigation performance in participants with high navigational ability was impaired only by the spatial secondary task, navigation performance in participants with poor navigational ability was impaired equally by spatial and verbal secondary tasks. The visual secondary task had no effect on navigation performance. Our results extend current knowledge by showing that the differential engagement of working memory subsystems is determined by navigational ability. PMID:21629686
The Use of Computer-Generated Fading Materials to Teach Visual-Visual Non-Identity Matching Tasks
ERIC Educational Resources Information Center
Murphy, Colleen; Figueroa, Maria; Martin, Garry L.; Yu, C. T.; Figueroa, Josue
2008-01-01
Many everyday matching tasks taught to persons with developmental disabilities are visual-visual non-identity matching (VVNM) tasks, such as matching the printed word DOG to a picture of a dog, or matching a sock to a shoe. Research has shown that, for participants who have failed a VVNM prototype task, it is very difficult to teach them various…
Classification of visual and linguistic tasks using eye-movement features.
Coco, Moreno I; Keller, Frank
2014-03-07
The role of the task has received special attention in visual-cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al. as well as additional features from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrated that eye-movement responses make it possible to characterize the goals of these tasks. Then, we trained three different types of classifiers and predicted the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, i.e., initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigated. Overall, the best task classification performance was obtained with a set of seven features that included both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description).
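The pipeline this abstract describes (per-trial eye-movement features, a trained classifier, a predicted task label) can be sketched with a minimal nearest-centroid classifier. The feature names (initiation time, mean fixation duration, entropy of attention allocation) come from the abstract, but all values below are invented for illustration, and the study itself used richer feature sets and stronger classifiers:

```python
# Hypothetical sketch: classify which task (search / naming / description)
# produced a trial, from a 3-feature eye-movement vector, using nearest
# centroids. All numbers are invented for demonstration.

def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def classify(x, centroids):
    """Return the label whose centroid is nearest (Euclidean) to x."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return min(centroids, key=lambda lbl: dist(x, centroids[lbl]))

# Toy training data: [initiation_time_ms, mean_fixation_ms, entropy]
train = {
    "search":      [[180, 210, 2.1], [190, 200, 2.3], [170, 220, 2.0]],
    "naming":      [[260, 310, 1.4], [250, 300, 1.5], [270, 320, 1.3]],
    "description": [[340, 420, 1.8], [330, 410, 1.9], [350, 430, 1.7]],
}
centroids = {label: centroid(rows) for label, rows in train.items()}

print(classify([185, 205, 2.2], centroids))  # a search-like trial
```

The study's finding that initiation time alone classifies above chance corresponds here to one coordinate already separating the class centroids.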
Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
2013-02-16
We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.
Chan, Louis K H; Hayward, William G
2009-02-01
In feature integration theory (FIT; A. Treisman & S. Sato, 1990), feature detection is driven by independent dimensional modules, and other searches are driven by a master map of locations that integrates dimensional information into salience signals. Although recent theoretical models have largely abandoned this distinction, some observed results are difficult to explain in its absence. The present study measured dimension-specific performance during detection and localization, tasks that require operation of dimensional modules and the master map, respectively. Results showed a dissociation between tasks in terms of both dimension-switching costs and cross-dimension attentional capture, reflecting a dimension-specific nature for detection tasks and a dimension-general nature for localization tasks. In a feature-discrimination task, results precluded an explanation based on response mode. These results are interpreted to support FIT's postulation that different mechanisms are involved in parallel and focal attention searches. This indicates that the FIT architecture should be adopted to explain the current results and that a variety of visual attention findings can be addressed within this framework. Copyright 2009 APA, all rights reserved.
Picchioni, Dante; Schmidt, Kathleen C; McWhirter, Kelly K; Loutaev, Inna; Pavletic, Adriana J; Speer, Andrew M; Zametkin, Alan J; Miao, Ning; Bishu, Shrinivas; Turetsky, Kate M; Morrow, Anne S; Nadel, Jeffrey L; Evans, Brittney C; Vesselinovitch, Diana M; Sheeler, Carrie A; Balkin, Thomas J; Smith, Carolyn B
2018-05-15
If protein synthesis during sleep is required for sleep-dependent memory consolidation, we might expect rates of cerebral protein synthesis (rCPS) to increase during sleep in the local brain circuits that support performance on a particular task following training on that task. To measure circuit-specific brain protein synthesis during a daytime nap opportunity, we used the L-[1-(11)C]leucine positron emission tomography (PET) method with simultaneous polysomnography. We trained subjects on the visual texture discrimination task (TDT). This was followed by a nap opportunity during the PET scan, and we retested them later in the day after the scan. The TDT is considered retinotopically specific, so we hypothesized that higher rCPS in primary visual cortex would be observed in the trained hemisphere compared to the untrained hemisphere in subjects who were randomized to a sleep condition. Our results indicate that the changes in rCPS in primary visual cortex depended on whether subjects were in the wakefulness or sleep condition but were independent of the side of the visual field trained. That is, only in subjects randomized to sleep was rCPS in the right primary visual cortex higher than in the left, regardless of the side trained. Other brain regions examined were not so affected. In the subjects who slept, performance on the TDT improved similarly regardless of the side trained. Results indicate a regionally selective and sleep-dependent effect that occurs with improved performance on the TDT.
Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao
2015-04-15
This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to indicate whether each recognition word had been presented during the encoding phase and, if so, with which type of accompanying information. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipoles analysis of MEG data indicated that higher equivalent current dipole amplitudes occurred in the right fusiform gyrus during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
Visual training improves perceptual grouping based on basic stimulus features.
Kurylo, Daniel D; Waxman, Richard; Kidron, Rachel; Silverstein, Steven M
2017-10-01
Training on visual tasks improves performance on basic and higher order visual capacities. Such improvement has been linked to changes in connectivity among mediating neurons. We investigated whether training effects occur for perceptual grouping. It was hypothesized that repeated engagement of integration mechanisms would enhance grouping processes. Thirty-six participants underwent 15 sessions of training on a visual discrimination task that required perceptual grouping. Participants viewed 20 × 20 arrays of dots or Gabor patches and indicated whether the array appeared grouped as vertical or horizontal lines. Across trials stimuli became progressively disorganized, contingent upon successful discrimination. Four visual dimensions were examined, in which grouping was based on similarity in luminance, color, orientation, and motion. Psychophysical thresholds of grouping were assessed before and after training. Results indicate that performance in all four dimensions improved with training. Training on a control condition, which paralleled the discrimination task but without a grouping component, produced no improvement. In addition, training on only the luminance and orientation dimensions improved performance for those conditions as well as for grouping by color, on which training had not occurred. However, improvement from partial training did not generalize to motion. Results demonstrate that a training protocol emphasizing stimulus integration enhanced perceptual grouping. Results suggest that neural mechanisms mediating grouping by common luminance and/or orientation contribute to those mediating grouping by color but do not share resources for grouping by common motion. Results are consistent with theories of perceptual learning emphasizing plasticity in early visual processing regions.
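The adaptive procedure this abstract describes (stimuli becoming progressively disorganized contingent on successful discrimination, with thresholds assessed psychophysically) resembles a standard adaptive staircase. A minimal sketch, assuming a 2-down/1-up rule, an invented step size, and a deterministic simulated observer, none of which are taken from the study itself:

```python
# Illustrative 2-down/1-up staircase (not the study's actual procedure):
# the "organization" level drops (harder) after two consecutive correct
# responses and rises (easier) after any error. Threshold is estimated
# as the mean stimulus level at reversals.

def staircase(is_correct, start=100.0, step=5.0, n_trials=60):
    """Run the staircase; return the stimulus levels at reversals."""
    level, streak, last_dir, reversals = start, 0, 0, []
    for _ in range(n_trials):
        if is_correct(level):
            streak += 1
            if streak == 2:          # two in a row: make it harder
                streak = 0
                if last_dir == +1:   # direction flipped: a reversal
                    reversals.append(level)
                last_dir = -1
                level -= step
        else:                        # any error: make it easier
            streak = 0
            if last_dir == -1:
                reversals.append(level)
            last_dir = +1
            level += step
    return reversals

# Deterministic simulated observer: succeeds whenever organization >= 40.
observer = lambda level: level >= 40
revs = staircase(observer)
threshold = sum(revs) / len(revs)
print(round(threshold, 1))
```

With this observer the staircase descends from the easy starting level and then oscillates around the simulated threshold, so the reversal average lands near 40.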
Visual awareness of objects and their colour.
Pilling, Michael; Gellatly, Angus
2011-10-01
At any given moment, our awareness of what we 'see' before us seems to be rather limited. If, for instance, a display containing multiple objects (red or green disks) is shown and one object is suddenly covered at random, observers are often little better than chance at reporting its colour (Wolfe, Reinecke, & Brawn, Visual Cognition, 14, 749-780, 2006). We tested whether, when object attributes (such as colour) are unknown, observers still retain any knowledge of the presence of that object at a display location. Experiments 1-3 involved a task requiring two-alternative (yes/no) responses about the presence or absence of a colour-defined object at a probed location. On this task, if participants knew about the presence of an object at a location, responses indicated that they also knew about its colour. A fourth experiment presented the same displays but required a three-alternative response. This task did result in a data pattern consistent with participants' knowing more about the locations of objects within a display than about their individual colours. However, this location advantage, while highly significant, was rather small in magnitude. Results are compared with those of Huang (Journal of Vision, 10(10, Art. 24), 1-17, 2010), who also reported an advantage for object locations, but under quite different task conditions.
Use of nontraditional flight displays for the reduction of central visual overload in the cockpit
NASA Technical Reports Server (NTRS)
Weinstein, Lisa F.; Wickens, Christopher D.
1992-01-01
The use of nontraditional flight displays to reduce visual overload in the cockpit was investigated in a dual-task paradigm. Three flight displays (central, peripheral, and ecological) were used between subjects for the primary tasks, and the type of secondary task (object identification or motion judgment) and its location in the visual field (central or peripheral) were manipulated within groups. The two visual-spatial tasks were time-shared to study the possibility of a compatibility mapping between task type and task location. The ecological display was found to allow for the most efficient time-sharing.
Borg, Céline; Leroy, Nicolas; Favre, Emilie; Laurent, Bernard; Thomas-Antérion, Catherine
2011-06-01
The present study examines the prediction that emotion can facilitate short-term memory. Nevertheless, emotion also recruits attention to process information, thereby disrupting short-term memory when tasks involve high attentional resources. In this way, we aimed to determine whether there is a differential influence of emotional information on short-term memory in ageing and Alzheimer's disease (AD). Fourteen patients with mild AD, 14 healthy older participants (NC), and 14 younger adults (YA) performed two tasks. In the first task, involving visual short-term memory, participants were asked to remember a picture among four different pictures (negative or neutral) following a brief delay. The second task, a binding memory task, required the recognition by participants of a picture according to its spatial location. The attentional cost involved was higher than for the first task. The pattern of results showed that visual memory performance was better for negative stimuli than for neutral ones, irrespective of the group. In contrast, binding memory performance was essentially poorer for the location of negative pictures in the NC group, and for the location of both negative and neutral stimuli in the AD group, in comparison to the YA group. Taken together, these results show that emotion has beneficial effects on visual short-term memory in ageing and AD. In contrast, emotion does not improve their performances in the binding condition. Copyright © 2011 Elsevier Inc. All rights reserved.
Oculomotor evidence for neocortical systems but not cerebellar dysfunction in autism
Minshew, Nancy J.; Luna, Beatriz; Sweeney, John A.
2010-01-01
Objective To investigate the functional integrity of cerebellar and frontal systems in autism using oculomotor paradigms. Background Cerebellar and neocortical systems models of autism have been proposed. Courchesne and colleagues have argued that cognitive deficits such as shifting attention disturbances result from dysfunction of vermal lobules VI and VII. Such a vermal deficit should be associated with dysmetric saccadic eye movements because of the major role these areas play in guiding the motor precision of saccades. In contrast, neocortical models of autism predict intact saccade metrics, but impairments on tasks requiring the higher cognitive control of saccades. Methods A total of 26 rigorously diagnosed nonmentally retarded autistic subjects and 26 matched healthy control subjects were assessed with a visually guided saccade task and two volitional saccade tasks, the oculomotor delayed-response task and the antisaccade task. Results Metrics and dynamics of the visually guided saccades were normal in autistic subjects, documenting the absence of disturbances in cerebellar vermal lobules VI and VII and in automatic shifts of visual attention. Deficits were demonstrated on both volitional saccade tasks, indicating dysfunction in the circuitry of prefrontal cortex and its connections with the parietal cortex, and associated cognitive impairments in spatial working memory and in the ability to voluntarily suppress context-inappropriate responses. Conclusions These findings demonstrate intrinsic neocortical, not cerebellar, dysfunction in autism, and parallel deficits in higher order cognitive mechanisms and not in elementary attentional and sensorimotor systems in autism. PMID:10102406
The use of visual cues for vehicle control and navigation
NASA Technical Reports Server (NTRS)
Hart, Sandra G.; Battiste, Vernol
1991-01-01
At least three levels of control are required to operate most vehicles: (1) inner-loop control to counteract the momentary effects of disturbances on vehicle position; (2) intermittent maneuvers to avoid obstacles; and (3) outer-loop control to maintain a planned route. Operators monitor dynamic optical relationships in their immediate surroundings to estimate momentary changes in forward, lateral, and vertical position, rates of change in speed and direction of motion, and distance from obstacles. The process of searching the external scene to find landmarks (for navigation) is intermittent and deliberate, while monitoring and responding to subtle changes in the visual scene (for vehicle control) is relatively continuous and 'automatic'. However, since operators may perform both tasks simultaneously, the dynamic optical cues available for a vehicle control task may be determined by the operator's direction of gaze for wayfinding. An attempt to relate the visual processes involved in vehicle control and wayfinding is presented. The frames of reference and information used by different operators (e.g., automobile drivers, airline pilots, and helicopter pilots) are reviewed with particular emphasis on the special problems encountered by helicopter pilots flying nap of the earth (NOE). The goal of this overview is to describe the context within which different vehicle control tasks are performed and to suggest ways in which the use of visual cues for geographical orientation might influence visually guided control activities.
Visual recognition and inference using dynamic overcomplete sparse learning.
Murray, Joseph F; Kreutz-Delgado, Kenneth
2007-09-01
We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado, et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter.
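The sparse, overcomplete codings this abstract describes can be illustrated with a minimal sketch. This is not the authors' algorithm; it is a generic ISTA-style lasso solver over a random overcomplete dictionary, with all sizes and parameters chosen purely for illustration:

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Sparse-code signal x in an overcomplete dictionary D (n_features x n_atoms)
    by iterative shrinkage-thresholding (ISTA) on the lasso objective
    0.5 * ||x - D a||^2 + lam * ||a||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Overcomplete dictionary: more atoms (8) than signal dimensions (4).
rng = np.random.default_rng(0)
D = rng.standard_normal((4, 8))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x = 1.5 * D[:, 2] - 0.8 * D[:, 5]          # signal built from two atoms
a = ista_sparse_code(D, x)
print(np.sum(np.abs(a) > 1e-3))            # only a few atoms are active
```

The soft-thresholding step is what drives most coefficients to exactly zero, producing the sparse codes that the paper argues are useful as a preprocessing stage.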
Selective representation of task-relevant objects and locations in the monkey prefrontal cortex.
Everling, Stefan; Tinsley, Chris J; Gaffan, David; Duncan, John
2006-04-01
In the monkey prefrontal cortex (PFC), task context exerts a strong influence on neural activity. We examined different aspects of task context in a temporal search task. On each trial, the monkey (Macaca mulatta) watched a stream of pictures presented to left or right of fixation. The task was to hold fixation until seeing a particular target, and then to make an immediate saccade to it. Sometimes (unilateral task), the attended pictures appeared alone, with a cue at trial onset indicating whether they would be presented to left or right. Sometimes (bilateral task), the attended picture stream (cued side) was accompanied by an irrelevant stream on the opposite side. In two macaques, we recorded responses from a total of 161 cells in the lateral PFC. Many cells (75/161) showed visual responses. Object-selective responses were strongly shaped by task relevance - with stronger responses to targets than to nontargets, failure to discriminate one nontarget from another, and filtering out of information from an irrelevant stimulus stream. Location selectivity occurred rather independently of object selectivity, and independently in visual responses and delay periods between one stimulus and the next. On error trials, PFC activity followed the correct rules of the task, rather than the incorrect overt behaviour. Together, these results suggest a highly programmable system, with responses strongly determined by the rules and requirements of the task performed.
Evans, Simon; Clarke, Devin; Dowell, Nicholas G; Tabet, Naji; King, Sarah L; Hutton, Samuel B; Rusted, Jennifer M
2018-01-01
In this study we investigated effects of the APOE ε4 allele (which confers an enhanced risk of poorer cognitive ageing, and Alzheimer's Disease) on sustained attention (vigilance) performance in young adults using the Rapid Visual Information Processing (RVIP) task and event-related fMRI. Previous fMRI work with this task has used block designs: this study is the first to image an extended (6-minute) RVIP task. Participants were 26 carriers of the APOE ε4 allele, and 26 non carriers (aged 18-28). Pupil diameter was measured throughout, as an index of cognitive effort. We compared activity to RVIP task hits to hits on a control task (with similar visual parameters and response requirements but no working memory load): this contrast showed activity in medial frontal, inferior and superior parietal, temporal and visual cortices, consistent with previous work, demonstrating that meaningful neural data can be extracted from the RVIP task over an extended interval and using an event-related design. Behavioural performance was not affected by genotype; however, a genotype by condition (experimental task/control task) interaction on pupil diameter suggested that ε4 carriers deployed more effort to the experimental compared to the control task. fMRI results showed a condition by genotype interaction in the right hippocampal formation: only ε4 carriers showed downregulation of this region to experimental task hits versus control task hits. Experimental task beta values were correlated against hit rate: parietal correlations were seen in ε4 carriers only, frontal correlations in non-carriers only. The data indicate that, in the absence of behavioural differences, young adult ε4 carriers already show a different linkage between functional brain activity and behaviour, as well as aberrant hippocampal recruitment patterns. This may have relevance for genotype differences in cognitive ageing trajectories.
Perceptual training yields rapid improvements in visually impaired youth.
Nyquist, Jeffrey B; Lappin, Joseph S; Zhang, Ruyuan; Tadin, Duje
2016-11-30
Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in the absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be underutilized by visually impaired youth and that this underutilization can be improved with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggests that well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events.
Design by Dragging: An Interface for Creative Forward and Inverse Design with Simulation Ensembles
Coffey, Dane; Lin, Chi-Lun; Erdman, Arthur G.; Keefe, Daniel F.
2014-01-01
We present an interface for exploring large design spaces as encountered in simulation-based engineering, design of visual effects, and other tasks that require tuning parameters of computationally-intensive simulations and visually evaluating results. The goal is to enable a style of design with simulations that feels as-direct-as-possible so users can concentrate on creative design tasks. The approach integrates forward design via direct manipulation of simulation inputs (e.g., geometric properties, applied forces) in the same visual space with inverse design via “tugging” and reshaping simulation outputs (e.g., scalar fields from finite element analysis (FEA) or computational fluid dynamics (CFD)). The interface includes algorithms for interpreting the intent of users’ drag operations relative to parameterized models, morphing arbitrary scalar fields output from FEA and CFD simulations, and in-place interactive ensemble visualization. The inverse design strategy can be extended to use multi-touch input in combination with an as-rigid-as-possible shape manipulation to support rich visual queries. The potential of this new design approach is confirmed via two applications: medical device engineering of a vacuum-assisted biopsy device and visual effects design using a physically based flame simulation. PMID:24051845
TVA-based assessment of visual attentional functions in developmental dyslexia
Bogon, Johanna; Finke, Kathrin; Stenneken, Prisca
2014-01-01
There is an ongoing debate whether an impairment of visual attentional functions constitutes an additional or even an isolated deficit of developmental dyslexia (DD). Especially performance in tasks that require the processing of multiple visual elements in parallel has been reported to be impaired in DD. We review studies that used parameter-based assessment for identifying and quantifying impaired aspect(s) of visual attention that underlie this multi-element processing deficit in DD. These studies used the mathematical framework provided by the “theory of visual attention” (Bundesen, 1990) to derive quantitative measures of general attentional resources and attentional weighting aspects on the basis of behavioral performance in whole- and partial-report tasks. Based on parameter estimates in children and adults with DD, the reviewed studies support a slowed perceptual processing speed as an underlying primary deficit in DD. Moreover, a reduction in visual short term memory storage capacity seems to present a modulating component, contributing to difficulties in written language processing. Furthermore, comparing the spatial distributions of attentional weights in children and adults suggests that having limited reading and writing skills might impair the development of a slight leftward bias, that is typical for unimpaired adult readers. PMID:25360129
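The TVA parameters mentioned above (processing speed C, VSTM storage capacity K) can be illustrated with a Monte Carlo sketch of the exponential race that underlies Bundesen's whole-report model. This is a simplification of the closed-form theory, and every parameter value below is arbitrary, chosen only to show how the quantities interact:

```python
import numpy as np

def tva_whole_report(C=40.0, t0=0.02, K=4, n_items=6,
                     exposure=0.1, n_trials=20000, rng=None):
    """Monte Carlo sketch of TVA whole report: n_items race in parallel with
    equal rates v = C / n_items; an item is encoded if it finishes within the
    effective exposure (exposure - t0) and VSTM (capacity K) is not yet full."""
    rng = rng or np.random.default_rng(3)
    v = C / n_items
    finish = rng.exponential(1.0 / v, size=(n_trials, n_items))
    finish.sort(axis=1)                    # order in which items finish encoding
    encoded = finish < (exposure - t0)     # finished within effective exposure
    encoded[:, K:] = False                 # VSTM holds at most K items
    return encoded.sum(axis=1).mean()      # mean number of items reported

# Mean report score rises with exposure duration, saturating toward K.
print(tva_whole_report(exposure=0.08), tva_whole_report(exposure=0.20))
```

A slowed processing speed (smaller C) or reduced capacity (smaller K), as the reviewed studies report for DD, would both lower the mean score, but with different signatures across exposure durations.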
Vitu, Françoise; Engbert, Ralf; Kliegl, Reinhold
2016-01-01
Saccades to single targets in peripheral vision are typically characterized by an undershoot bias. Putting this bias to a test, Kapoula [1] used a paradigm in which observers were presented with two different sets of target eccentricities that partially overlapped each other. Her data were suggestive of a saccadic range effect (SRE): There was a tendency for saccades to overshoot close targets and undershoot far targets in a block, suggesting that there was a response bias towards the center of eccentricities in a given block. Our Experiment 1 was a close replication of the original study by Kapoula [1]. In addition, we tested whether the SRE is sensitive to top-down requirements associated with the task, and we also varied the target presentation duration. In Experiments 1 and 2, we expected to replicate the SRE for a visual discrimination task. The simple visual saccade-targeting task in Experiment 3, entailing minimal top-down influence, was expected to elicit a weaker SRE. Voluntary saccades to remembered target locations in Experiment 3 were expected to elicit the strongest SRE. Contrary to these predictions, we did not observe a SRE in any of the tasks. Our findings complement the results reported by Gillen et al. [2] who failed to find the effect in a saccade-targeting task with a very brief target presentation. Together, these results suggest that unlike arm movements, saccadic eye movements are not biased towards making saccades of a constant, optimal amplitude for the task. PMID:27658191
Selective involvement of superior frontal cortex during working memory for shapes.
Yee, Lydia T S; Roe, Katherine; Courtney, Susan M
2010-01-01
A spatial/nonspatial functional dissociation between the dorsal and ventral visual pathways is well established and has formed the basis of domain-specific theories of prefrontal cortex (PFC). Inconsistencies in the literature regarding prefrontal organization, however, have led to questions regarding whether the nature of the dissociations observed in PFC during working memory are equivalent to those observed in the visual pathways for perception. In particular, the dissociation between dorsal and ventral PFC during working memory for locations versus object identities has been clearly present in some studies but not in others, seemingly in part due to the type of objects used. The current study compared functional MRI activation during delayed-recognition tasks for shape or color, two object features considered to be processed by the ventral pathway for perceptual recognition. Activation for the shape-delayed recognition task was greater than that for the color task in the lateral occipital cortex, in agreement with studies of visual perception. Greater memory-delay activity was also observed, however, in the parietal and superior frontal cortices for the shape than for the color task. Activity in superior frontal cortex was associated with better performance on the shape task. Conversely, greater delay activity for color than for shape was observed in the left anterior insula and this activity was associated with better performance on the color task. These results suggest that superior frontal cortex contributes to performance on tasks requiring working memory for object identities, but it represents different information about those objects than does the ventral frontal cortex.
Dementia alters standing postural adaptation during a visual search task in older adult men.
Jor'dan, Azizah J; McCarten, J Riley; Rottunda, Susan; Stoffregen, Thomas A; Manor, Brad; Wade, Michael G
2015-04-23
This study investigated the effects of dementia on standing postural adaptation during performance of a visual search task. We recruited 16 older adults with dementia and 15 without dementia. Postural sway was assessed by recording medial-lateral (ML) and anterior-posterior (AP) center-of-pressure when standing with and without a visual search task; i.e., counting target letter frequency within a block of displayed randomized letters. ML sway variability was significantly higher in those with dementia during visual search as compared to those without dementia and compared to both groups during the control condition. AP sway variability was significantly greater in those with dementia as compared to those without dementia, irrespective of task condition. In the ML direction, the absolute and percent change in sway variability between the control condition and visual search (i.e., postural adaptation) was greater in those with dementia as compared to those without. In contrast, postural adaptation to visual search was similar between groups in the AP direction. As compared to those without dementia, those with dementia identified fewer letters on the visual task. In the non-dementia group only, greater increases in postural adaptation in both the ML and AP directions correlated with lower performance on the visual task. The observed relationship between postural adaptation during the visual search task and visual search task performance--in the non-dementia group only--suggests a critical link between perception and action. Dementia reduces the capacity to perform a visually based task while standing and thus appears to disrupt this perception-action synergy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
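The sway-variability and adaptation measures used in studies like this one reduce to simple statistics over the center-of-pressure trace. A minimal sketch follows, with simulated COP data rather than the study's recordings; the axis standard deviations are made-up numbers:

```python
import numpy as np

def sway_variability(cop):
    """SD of center-of-pressure (COP) excursions, one value per axis (ML, AP)."""
    return np.std(cop, axis=0, ddof=1)

def percent_adaptation(quiet_sd, task_sd):
    """Percent change in sway variability from quiet standing to the visual task."""
    return 100.0 * (task_sd - quiet_sd) / quiet_sd

# Simulated COP traces (600 samples); columns are ML and AP excursions in mm.
rng = np.random.default_rng(1)
quiet = rng.normal(0.0, [2.0, 3.0], size=(600, 2))  # quiet-standing control
task = rng.normal(0.0, [3.0, 3.5], size=(600, 2))   # variability rises under the task
print(percent_adaptation(sway_variability(quiet), sway_variability(task)))
```

In the study's terms, the printed pair corresponds to postural adaptation in the ML and AP directions, respectively.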
Processing of pitch and location in human auditory cortex during visual and auditory tasks.
Häkkinen, Suvi; Ovaska, Noora; Rinne, Teemu
2015-01-01
The relationship between stimulus-dependent and task-dependent activations in human auditory cortex (AC) during pitch and location processing is not well understood. In the present functional magnetic resonance imaging study, we investigated the processing of task-irrelevant and task-relevant pitch and location during discrimination, n-back, and visual tasks. We tested three hypotheses: (1) According to prevailing auditory models, stimulus-dependent processing of pitch and location should be associated with enhanced activations in distinct areas of the anterior and posterior superior temporal gyrus (STG), respectively. (2) Based on our previous studies, task-dependent activation patterns during discrimination and n-back tasks should be similar when these tasks are performed on sounds varying in pitch or location. (3) Previous studies in humans and animals suggest that pitch and location tasks should enhance activations especially in those areas that also show activation enhancements associated with stimulus-dependent pitch and location processing, respectively. Consistent with our hypotheses, we found stimulus-dependent sensitivity to pitch and location in anterolateral STG and anterior planum temporale (PT), respectively, in line with the view that these features are processed in separate parallel pathways. Further, task-dependent activations during discrimination and n-back tasks were associated with enhanced activations in anterior/posterior STG and posterior STG/inferior parietal lobule (IPL) irrespective of stimulus features. However, direct comparisons between pitch and location tasks performed on identical sounds revealed no significant activation differences. These results suggest that activations during pitch and location tasks are not strongly affected by enhanced stimulus-dependent activations to pitch or location. 
We also found that activations in PT were strongly modulated by task requirements and that areas in the inferior parietal lobule (IPL) showed task-dependent activation modulations, but no systematic activations to pitch or location. Based on these results, we argue that activations during pitch and location tasks cannot be explained by enhanced stimulus-specific processing alone, but rather that activations in human AC depend in a complex manner on the requirements of the task at hand. PMID:26594185
Bonin, Tanor; Smilek, Daniel
2016-04-01
We evaluated whether task-irrelevant inharmonic music produces greater interference with cognitive performance than task-irrelevant harmonic music. Participants completed either an auditory (Experiment 1) or a visual (Experiment 2) version of the cognitively demanding 2-back task in which they were required to categorize each digit in a sequence of digits as either being a target (a digit also presented two positions earlier in the sequence) or a distractor (all other items). They were concurrently exposed to either task-irrelevant harmonic music (judged to be consonant), task-irrelevant inharmonic music (judged to be dissonant), or no music at all as a distraction. The main finding across both experiments was that performance on the 2-back task was worse when participants were exposed to inharmonic music than when they were exposed to harmonic music. Interestingly, performance on the 2-back task was generally the same regardless of whether harmonic music or no music was played. We suggest that inharmonic, dissonant music interferes with cognitive performance by requiring greater cognitive processing than harmonic, consonant music, and speculate about why this might be.
Qureshi, Adam W; Apperly, Ian A; Samson, Dana
2010-11-01
Previous research suggests that perspective-taking and other "theory of mind" processes may be cognitively demanding for adult participants, and may be disrupted by concurrent performance of a secondary task. In the current study, a Level-1 visual perspective task was administered to 32 adults using a dual-task paradigm in which the secondary task tapped executive function. Results suggested that the secondary task did not affect the calculation of perspective, but did affect the selection of the relevant (Self or Other) perspective for a given trial. This is the first direct evidence of a cognitively efficient process for "theory of mind" in adults that operates independently of executive function. The contrast between this and previous findings points to a distinction between simple perspective-taking and the more complex and cognitively demanding abilities more typically examined in studies of "theory of mind". It is suggested that these findings may provide a parsimonious explanation of the success of infants on 'indirect' measures of perspective-taking that do not explicitly require selection of the relevant perspective. Copyright © 2010 Elsevier B.V. All rights reserved.
Seemüller, Anna; Fiehler, Katja; Rösler, Frank
2011-01-01
The present study investigated whether visual and kinesthetic stimuli are stored as multisensory or modality-specific representations in unimodal and crossmodal working memory tasks. To this end, angle-shaped movement trajectories were presented to 16 subjects in delayed matching-to-sample tasks either visually or kinesthetically during encoding and recognition. During the retention interval, a secondary visual or kinesthetic interference task was inserted either immediately or with a delay after encoding. The modality of the interference task interacted significantly with the encoding modality. After visual encoding, memory was more impaired by a visual than by a kinesthetic secondary task, while after kinesthetic encoding the pattern was reversed. The time when the secondary task had to be performed interacted with the encoding modality as well. For visual encoding, memory was more impaired, when the secondary task had to be performed at the beginning of the retention interval. In contrast, memory after kinesthetic encoding was more affected, when the secondary task was introduced later in the retention interval. The findings suggest that working memory traces are maintained in a modality-specific format characterized by distinct consolidation processes that take longer after kinesthetic than after visual encoding. Copyright © 2010 Elsevier B.V. All rights reserved.
Defever, Emmy; Reynvoet, Bert; Gebuis, Titia
2013-10-01
Researchers investigating numerosity processing manipulate the visual stimulus properties (e.g., surface). This is done to control for the confound between numerosity and its visual properties and should allow the examination of pure number processes. Nevertheless, several studies have shown that, despite different visual controls, visual cues continued to exert an influence on numerosity judgments. This study, therefore, investigated whether the impact of the visual stimulus manipulations on numerosity judgments is dependent on the task at hand (comparison task vs. same-different task) and whether this impact changes throughout development. In addition, we examined whether the influence of visual stimulus manipulations on numerosity judgments plays a role in the relation between performance on numerosity tasks and mathematics achievement. Our findings confirmed that the visual stimulus manipulations affect numerosity judgments; more importantly, we found that these influences changed with increasing age and differed between the comparison and the same-different tasks. Consequently, direct comparisons between numerosity studies using different tasks and age groups are difficult. No meaningful relationship between the performance on the comparison and same-different tasks and mathematics achievement was found in typically developing children, nor did we find consistent differences between children with and without mathematical learning disability (MLD). Copyright © 2013 Elsevier Inc. All rights reserved.
Investigation of outside visual cues required for low speed and hover
NASA Technical Reports Server (NTRS)
Hoh, R. H.
1985-01-01
Knowledge of the visual cues required in the performance of stabilized hover in VTOL aircraft is a prerequisite for the development of both cockpit displays and ground-based simulation systems. Attention is presently given to the viability of experimental test flight techniques as the bases for the identification of essential external cues in aggressive and precise low speed and hovering tasks. The analysis and flight test program conducted employed a helicopter and a pilot wearing lenses that could be electronically fogged, where the primary variables were field-of-view, large object 'macrotexture', and fine detail 'microtexture', in six different fields-of-view. Fundamental metrics are proposed for the quantification of the visual field, to allow comparisons between tests, simulations, and aircraft displays.
Visual search in a forced-choice paradigm
NASA Technical Reports Server (NTRS)
Holmgren, J. E.
1974-01-01
The processing of visual information was investigated in the context of two visual search tasks. The first was a forced-choice task in which one of two alternative letters appeared in a visual display of from one to five letters. The second task included trials on which neither of the two alternatives was present in the display. Search rates were estimated from the slopes of best linear fits to response latencies plotted as a function of the number of items in the visual display. These rates were found to be much slower than those estimated in yes-no search tasks. This result was interpreted as indicating that the processes underlying visual search in yes-no and forced-choice tasks are not the same.
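The slope-based estimate of search rate described in this abstract is a linear fit of response latency against display size. The sketch below uses fabricated latencies, not data from the study, purely to show the computation:

```python
import numpy as np

def search_rate(set_sizes, rts):
    """Slope (ms per item) and intercept (ms) of the best linear fit
    of mean response latency against the number of display items."""
    slope, intercept = np.polyfit(set_sizes, rts, 1)
    return slope, intercept

set_sizes = np.array([1, 2, 3, 4, 5])
rt_yes_no = 420 + 25 * set_sizes         # hypothetical yes-no latencies (ms)
rt_forced_choice = 450 + 60 * set_sizes  # hypothetical forced-choice latencies (ms)

print(search_rate(set_sizes, rt_yes_no)[0])         # ms/item, shallower slope
print(search_rate(set_sizes, rt_forced_choice)[0])  # ms/item, steeper slope
```

A steeper slope means a slower per-item search rate, which is the pattern the study reports for forced-choice relative to yes-no search.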
Visual attention shifting in autism spectrum disorders.
Richard, Annette E; Lajiness-O'Neill, Renee
2015-01-01
Abnormal visual attention has been frequently observed in autism spectrum disorders (ASD). Abnormal shifting of visual attention is related to abnormal development of social cognition and has been identified as a key neuropsychological finding in ASD. Better characterizing attention shifting in ASD and its relationship with social functioning may help to identify new targets for intervention and improving social communication in these disorders. Thus, the current study investigated deficits in attention shifting in ASD as well as relationships between attention shifting and social communication in ASD and neurotypicals (NT). To investigate deficits in visual attention shifting in ASD, 20 ASD and 20 age- and gender-matched NT completed visual search (VS) and Navon tasks with attention-shifting demands as well as a set-shifting task. VS was a feature search task with targets defined in one of two dimensions; Navon required identification of a target letter presented at the global or local level. Psychomotor and processing speed were entered as covariates. Relationships between visual attention shifting, set shifting, and social functioning were also examined. ASD and NT showed comparable costs of shifting attention. However, psychomotor and processing speed were slower in ASD than in NT, and psychomotor and processing speed were positively correlated with attention-shifting costs on Navon and VS, respectively, for both groups. Attention shifting on VS and Navon were correlated among NT, while attention shifting on Navon was correlated with set shifting among ASD. Attention-shifting costs on Navon were positively correlated with restricted and repetitive behaviors among ASD. Relationships between attention shifting and psychomotor and processing speed, as well as relationships between measures of different aspects of visual attention shifting, suggest inefficient top-down influences over preattentive visual processing in ASD. 
Inefficient attention shifting may be related to restricted and repetitive behaviors in these disorders.
Controlling the spotlight of attention: visual span size and flexibility in schizophrenia.
Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M
2011-10-01
The current study investigated the size and flexible control of visual span among patients with schizophrenia during visual search performance. Visual span is the region of the visual field from which one extracts information during a single eye fixation, and a larger visual span size is linked to more efficient search performance. Therefore, a reduced visual span may explain patients' impaired performance on search tasks. The gaze-contingent moving window paradigm was used to estimate the visual span size of patients and healthy participants while they performed two different search tasks. In addition, changes in visual span size were measured as a function of two manipulations of task difficulty: target-distractor similarity and stimulus familiarity. Patients with schizophrenia searched more slowly across both tasks and conditions. Patients also demonstrated smaller visual span sizes on the easier search condition in each task. Moreover, healthy controls' visual span size increased as target discriminability or distractor familiarity increased. This modulation of visual span size, however, was reduced or not observed among patients. The implications of the present findings, with regard to previously reported visual search deficits, and other functional and structural abnormalities associated with schizophrenia, are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
The evaluation of display symbology - A chronometric study of visual search. [on cathode ray tubes
NASA Technical Reports Server (NTRS)
Remington, R.; Williams, D.
1984-01-01
Three single-target visual search tasks were used to evaluate a set of CRT symbols for a helicopter traffic display. The search tasks were representative of the kinds of information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. The results show that familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols such as a nearby flashing dot or surrounding square had a greater disruptive effect on the graphic symbols than the alphanumeric characters. The results suggest that a symbol set is like a list that must be learned. Factors that affect the time to respond to items in a list, such as familiarity and visual discriminability, and the division of list items into categories, also affect the time to identify symbols.
Retinotopic memory is more precise than spatiotopic memory.
Golomb, Julie D; Kanwisher, Nancy
2012-01-31
Successful visually guided behavior requires information about spatiotopic (i.e., world-centered) locations, but how accurately is this information actually derived from initial retinotopic (i.e., eye-centered) visual input? We conducted a spatial working memory task in which subjects remembered a cued location in spatiotopic or retinotopic coordinates while making guided eye movements during the memory delay. Surprisingly, after a saccade, subjects were significantly more accurate and precise at reporting retinotopic locations than spatiotopic locations. This difference grew with each eye movement, such that spatiotopic memory continued to deteriorate, whereas retinotopic memory did not accumulate error. The loss in spatiotopic fidelity is therefore not a generic consequence of eye movements, but a direct result of converting visual information from native retinotopic coordinates. Thus, despite our conscious experience of an effortlessly stable spatiotopic world and our lifetime of practice with spatiotopic tasks, memory is actually more reliable in raw retinotopic coordinates than in ecologically relevant spatiotopic coordinates.
Laiacona, M; Barbarotto, R; Capitani, E
1993-12-01
We report two head-injured patients whose knowledge of living things was selectively disrupted. Their semantic knowledge was tested with naming and verbal comprehension tasks and a verbal questionnaire. In both patients there was consistent evidence that knowledge of living things was impaired and that of non-living things was relatively preserved. The living things deficit emerged irrespective of whether the question tapped associative or perceptual knowledge or required visual or non-visual information. In all tasks the category effect was still significant after the influence of the following variables on performance was partialled out: word frequency, concept familiarity, prototypicality, name agreement, image agreement and visual complexity. In the verbal questionnaire, dissociations were still significant even after adjustment for the difficulty of questions for normal participants, which had proven greater for living things. Besides diffuse brain damage, both patients presented with a left posterior temporo-parietal lesion.
Brockmole, James R; Boot, Walter R
2009-06-01
Distinctive aspects of a scene can capture attention even when they are irrelevant to one's goals. The authors address whether visually unique, unexpected, but task-irrelevant features also tend to hold attention. Observers searched through displays in which the color of each item was irrelevant. At the start of search, all objects changed color. Critically, the foveated item changed to an unexpected color (it was novel), became a color singleton (it was unique), or both. Saccade latency revealed the time required to disengage overt attention from this object. Singletons resulted in longer latencies, but only if they were unexpected. Conversely, unexpected items only delayed disengagement if they were singletons. Thus, the time spent overtly attending to an object is determined, at least in part, by task-irrelevant stimulus properties, but this depends on the confluence of expectation and visual salience.
Oei, Adam C; Patterson, Michael D
2015-01-01
Despite increasing evidence that shows action video game play improves perceptual and cognitive skills, the mechanisms of transfer are not well-understood. In line with previous work, we suggest that transfer is dependent upon common demands between the game and transfer task. In the current study, participants played one of four action games with varying speed, visual, and attentional demands for 20 h. We examined whether training enhanced performance for attentional blink, selective attention, attending to multiple items, visual search and auditory detection. Non-gamers who played the game (Modern Combat) with the highest demands showed transfer to tasks of attentional blink and attending to multiple items. The game (MGS Touch) with fewer attentional demands also decreased attentional blink, but to a lesser degree. Other games failed to show transfer, despite having many action game characteristics but at a reduced intensity. The results support the common demands hypothesis.
The Useful Field of View is More Useful in Golfers than Regular Exercisers
Murphy, Karen
2017-01-01
Superior visual attention skills are vital for excellent sports performance. This study used a cognitive skills approach to examine expert and novice differences in a visual spatial attention task. Thirty-two males aged 18 to 42 years completed this study in return for course credit or monetary incentive. Participants were expert golfers (N = 18) or exercise controls (N = 14). Spatial attention was assessed using the useful field of view task which required participants to locate a target shown 10°, 20°, and 30° of eccentricity from centre in very brief presentations. At each degree of eccentricity, golfers were more accurate at locating the target than the exercise controls. These results provide support for the broad transfer hypothesis by demonstrating a link between golf expertise and better performance on an objective measure of spatial attention skills. Therefore, it appears that sports expertise can transfer to expertise in non-sport related tasks. PMID:28450973
Observers' cognitive states modulate how visual inputs relate to gaze control.
Kardan, Omid; Henderson, John M; Yourganov, Grigori; Berman, Marc G
2016-09-01
Previous research has shown that eye-movements change depending on both the visual features of our environment, and the viewer's top-down knowledge. One important question that is unclear is the degree to which the visual goals of the viewer modulate how visual features of scenes guide eye-movements. Here, we propose a systematic framework to investigate this question. In our study, participants performed 3 different visual tasks on 135 scenes: search, memorization, and aesthetic judgment, while their eye-movements were tracked. Canonical correlation analyses showed that eye-movements were reliably more related to low-level visual features at fixations during the visual search task compared to the aesthetic judgment and scene memorization tasks. Different visual features also had different relevance to eye-movements between tasks. This modulation of the relationship between visual features and eye-movements by task was also demonstrated with classification analyses, where classifiers were trained to predict the viewing task based on eye movements and visual features at fixations. Feature loadings showed that the visual features at fixations could signal task differences independent of temporal and spatial properties of eye-movements. When classifying across participants, edge density and saliency at fixations were as important as eye-movements in the successful prediction of task, with entropy and hue also being significant, but with smaller effect sizes. When classifying within participants, brightness and saturation were also significant contributors. Canonical correlation and classification results, together with a test of moderation versus mediation, suggest that the cognitive state of the observer moderates the relationship between stimulus-driven visual features and eye-movements.
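The canonical correlation analyses described above relate two feature sets (eye-movement measures and visual features at fixations). As an illustrative sketch of the technique itself, not the authors' pipeline, the first canonical correlation can be computed by whitening each feature set and taking the top singular value of the whitened cross-covariance:

```python
import numpy as np

def first_canonical_correlation(X, Y, reg=1e-8):
    """First canonical correlation between feature sets X (n x p) and Y (n x q).

    Whitens each set by its own covariance, then takes the largest singular
    value of the whitened cross-covariance matrix.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(S):
        # Inverse square root via eigendecomposition (S is symmetric PSD).
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(K, compute_uv=False)[0]
```

With a perfectly linear relationship between the two sets the first canonical correlation approaches 1; with independent sets it drops toward chance level for the given sample size.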
Testing the distinctiveness of visual imagery and motor imagery in a reach paradigm.
Gabbard, Carl; Ammar, Diala; Cordova, Alberto
2009-01-01
We examined the distinctiveness of motor imagery (MI) and visual imagery (VI) in the context of perceived reachability. The aim was to explore the notion that the two visual modes have distinctive processing properties tied to the two-visual-system hypothesis. The experiment included an interference tactic whereby participants completed two tasks at the same time: a visual or motor-interference task combined with a MI or VI-reaching task. We expected increased error would occur when the imaged task and the interference task were matched (e.g., MI with the motor task), suggesting an association based on the assumption that the two tasks were in competition for space on the same processing pathway. Alternatively, if there were no differences, dissociation could be inferred. Significant increases in the number of errors were found when the modalities for the imaged (both MI and VI) task and the interference task were matched. Therefore, it appears that MI and VI in the context of perceived reachability recruit different processing mechanisms.
Rissman, Jesse; Gazzaley, Adam; D'Esposito, Mark
2008-07-01
The maintenance of visual stimuli across a delay interval in working memory tasks is thought to involve reverberant neural communication between the prefrontal cortex and posterior visual association areas. Recent studies suggest that the hippocampus might also contribute to this retention process, presumably via reciprocal interactions with visual regions. To characterize the nature of these interactions, we performed functional connectivity analysis on an event-related functional magnetic resonance imaging data set in which participants performed a delayed face recognition task. As the number of faces that participants were required to remember was parametrically increased, the right inferior frontal gyrus (IFG) showed a linearly decreasing degree of functional connectivity with the fusiform face area (FFA) during the delay period. In contrast, the hippocampus linearly increased its delay period connectivity with both the FFA and the IFG as the mnemonic load increased. Moreover, the degree to which participants' FFA showed a load-dependent increase in its connectivity with the hippocampus predicted the degree to which its connectivity with the IFG decreased with load. Thus, these neural circuits may dynamically trade off to accommodate the particular mnemonic demands of the task, with IFG-FFA interactions mediating maintenance at lower loads and hippocampal interactions supporting retention at higher loads.
Foveated model observers to predict human performance in 3D images
NASA Astrophysics Data System (ADS)
Lago, Miguel A.; Abbey, Craig K.; Eckstein, Miguel P.
2017-03-01
We evaluated whether 3D search requires model observers that take into account peripheral human visual processing (foveated models) to predict human observer performance. We show that two different 3D tasks, free search and location-known detection, influence the relative human visual detectability of two signals of different sizes in synthetic backgrounds mimicking the noise found in 3D digital breast tomosynthesis. One of the signals resembled a microcalcification (a small and bright sphere), while the other one was designed to look like a mass (a larger Gaussian blob). We evaluated current standard model observers (Hotelling; Channelized Hotelling; non-prewhitening matched filter with eye filter, NPWE; and non-prewhitening matched filter model, NPW) and showed that they incorrectly predict the relative detectability of the two signals in 3D search. We propose a new model observer (3D Foveated Channelized Hotelling Observer) that incorporates the properties of the visual system over a large visual field (fovea and periphery). We show that the foveated model observer can accurately predict the rank order of detectability of the signals in 3D images for each task. Together, these results motivate the use of a new generation of foveated model observers for predicting image quality for search tasks in 3D imaging modalities such as digital breast tomosynthesis or computed tomography.
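Of the standard observers listed, the simplest is the non-prewhitening (NPW) matched filter, which scores an image by its dot product with the signal template and summarizes detectability as d'. A minimal 2D sketch follows; the small bright square and white-noise backgrounds are hypothetical stand-ins, not the paper's 3D tomosynthesis stimuli:

```python
import numpy as np

def npw_dprime(signal, noise_images):
    """Detectability index d' of a non-prewhitening (NPW) matched filter.

    The template is the noise-free signal itself; each image is scored by its
    dot product with the template, and d' compares signal-present vs.
    signal-absent score distributions.
    """
    s = signal.ravel()
    scores_absent = np.array([s @ img.ravel() for img in noise_images])
    scores_present = np.array([s @ (img + signal).ravel() for img in noise_images])
    pooled_sd = np.sqrt(0.5 * (scores_present.var(ddof=1) + scores_absent.var(ddof=1)))
    return (scores_present.mean() - scores_absent.mean()) / pooled_sd

rng = np.random.default_rng(1)
# Hypothetical small, bright "microcalcification"-like target in white noise.
sig = np.zeros((16, 16))
sig[7:9, 7:9] = 1.0
backgrounds = rng.normal(0.0, 1.0, size=(200, 16, 16))
d = npw_dprime(sig, backgrounds)
```

For this template in unit-variance white noise the ideal value is d' = ||s|| = 2, so the estimate should land close to 2 up to sampling error.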
Preattentive binding of auditory and visual stimulus features.
Winkler, István; Czigler, István; Sussman, Elyse; Horváth, János; Balázs, Lászlo
2005-02-01
We investigated the role of attention in feature binding in the auditory and the visual modality. One auditory and one visual experiment used the mismatch negativity (MMN and vMMN, respectively) event-related potential to index the memory representations created from stimulus sequences, which were either task-relevant and, therefore, attended or task-irrelevant and ignored. In the latter case, the primary task was a continuous demanding within-modality task. The test sequences were composed of two frequently occurring stimuli, which differed from each other in two stimulus features (standard stimuli) and two infrequently occurring stimuli (deviants), which combined one feature from one standard stimulus with the other feature of the other standard stimulus. Deviant stimuli elicited MMN responses of similar parameters across the different attentional conditions. These results suggest that the memory representations involved in the MMN deviance detection response encoded the frequently occurring feature combinations whether or not the test sequences were attended. A possible alternative to the memory-based interpretation of the visual results, the elicitation of the McCollough color-contingent aftereffect, was ruled out by the results of our third experiment. The current results are compared with those supporting the attentive feature integration theory. We conclude that (1) with comparable stimulus paradigms, similar results have been obtained in the two modalities, (2) there exist preattentive processes of feature binding, however, (3) conjoining features within rich arrays of objects under time pressure and/or long-term retention of the feature-conjoined memory representations may require attentive processes.
Synthetic perspective optical flow: Influence on pilot control tasks
NASA Technical Reports Server (NTRS)
Bennett, C. Thomas; Johnson, Walter W.; Perrone, John A.; Phatak, Anil V.
1989-01-01
One approach used to better understand the impact of visual flow on control tasks has been to use synthetic perspective flow patterns. Such patterns are the result of apparent motion across a grid or random dot display. Unfortunately, the optical flow so generated is based on a subset of the flow information that exists in the real world. The danger is that the resulting optical motions may not generate the visual flow patterns useful for actual flight control. Researchers conducted a series of studies directed at understanding the characteristics of synthetic perspective flow that support various pilot tasks. In the first of these, they examined the control of altitude over various perspective grid textures (Johnson et al., 1987). Another set of studies was directed at studying the head tracking of targets moving in a 3-D coordinate system. These studies, parametric in nature, utilized both impoverished and complex virtual worlds represented by simple perspective grids at one extreme, and computer-generated terrain at the other. These studies are part of an applied visual research program directed at understanding the design principles required for the development of instruments displaying spatial orientation information. The experiments also highlight the need for modeling the impact of spatial displays on pilot control tasks.
Electrophysiological measurement of interest during walking in a simulated environment.
Takeda, Yuji; Okuma, Takashi; Kimura, Motohiro; Kurata, Takeshi; Takenaka, Takeshi; Iwaki, Sunao
2014-09-01
A reliable neuroscientific technique for objectively estimating the degree of interest in a real environment is currently required in the research fields of neuroergonomics and neuroeconomics. Toward the development of such a technique, the present study explored electrophysiological measures that reflect an observer's interest in a nearly-real visual environment. Participants were asked to walk through a simulated shopping mall and the attractiveness of the shopping mall was manipulated by opening and closing the shutters of stores. During the walking task, participants were exposed to task-irrelevant auditory probes (two-stimulus oddball sequence). The results showed a smaller P2/early P3a component of task-irrelevant auditory event-related potentials and a larger lambda response of eye-fixation-related potentials in an interesting environment (i.e., open-shutter condition) than in a boring environment (i.e., closed-shutter condition); these findings can be reasonably explained by supposing that participants allocated more attentional resources to visual information in an interesting environment than in a boring environment, and thus residual attentional resources that could be allocated to task-irrelevant auditory probes were reduced. The P2/early P3a component and the lambda response may be useful measures of interest in a real visual environment.
Patel, Jigna; Qiu, Qinyin; Yarossi, Mathew; Merians, Alma; Massood, Supriya; Tunik, Eugene; Adamovich, Sergei; Fluet, Gerard
2017-07-01
We explored the potential benefits of using priming methods prior to an active hand task in the acute phase post-stroke in persons with severe upper extremity hemiparesis. Five individuals were trained using priming techniques including virtual reality (VR) based visual mirror feedback and contralaterally controlled passive movement strategies prior to training with an active pinch force modulation task. Clinical, kinetic, and neurophysiological measurements were taken pre and post the training period. Clinical measures were taken at six months post training. The two priming simulations and active training were well tolerated early after stroke. Priming effects were suggested by increased maximal pinch force immediately after visual and movement based priming. Despite having no clinically observable movement distally, the subjects were able to volitionally coordinate isometric force and muscle activity (EMG) in a pinch tracing task. The Root Mean Square Error (RMSE) of force during the pinch trace task gradually decreased over the training period, suggesting learning may have occurred. Changes in motor cortical neurophysiology were seen in the unaffected hemisphere using Transcranial Magnetic Stimulation (TMS) mapping. Significant improvements in motor recovery as measured by the Action Research Arm Test (ARAT) and the Upper Extremity Fugl Meyer Assessment (UEFMA) were demonstrated at six months post training by three of the five subjects. This study suggests that an early hand-based intervention using visual and movement based priming activities and a scaled motor task allows participation by persons without the motor control required for traditionally presented rehabilitation and testing. Implications for Rehabilitation: Rehabilitation of individuals with severely paretic upper extremities after stroke is challenging due to limited movement capacity and few options for therapeutic training.
Long-term functional recovery of the arm after stroke depends on early return of active hand control, establishing a need for acute training methods focused distally. This study demonstrates the feasibility of an early hand-based intervention using virtual reality based priming and scaled motor activities which can allow for participation by persons without the motor control required for traditionally presented rehabilitation and testing.
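The RMSE learning measure used in the pinch-tracing task above is straightforward to compute; the force traces and session values below are invented purely to show the computation and the expected direction of change with learning:

```python
import math

def rmse(produced, target):
    """Root mean square error between a produced force trace and a target trace.

    Both arguments are equal-length sequences of force samples (e.g., newtons);
    a smaller RMSE means the produced force tracked the target more closely.
    """
    if len(produced) != len(target):
        raise ValueError("traces must have equal length")
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(produced, target)) / len(produced))

target = [1.0, 3.0, 4.0]
# Hypothetical traces: noisier tracking early in training, tighter later on.
early = rmse([1.2, 2.9, 4.4], target)
late = rmse([1.05, 3.0, 4.1], target)
```

A declining RMSE across sessions, as in `late < early` here, is the learning signal the study reports.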
Sound segregation via embedded repetition is robust to inattention.
Masutomi, Keiko; Barascud, Nicolas; Kashino, Makio; McDermott, Josh H; Chait, Maria
2016-03-01
The segregation of sound sources from the mixture of sounds that enters the ear is a core capacity of human hearing, but the extent to which this process is dependent on attention remains unclear. This study investigated the effect of attention on the ability to segregate sounds via repetition. We utilized a dual task design in which stimuli to be segregated were presented along with stimuli for a "decoy" task that required continuous monitoring. The task to assess segregation presented a target sound 10 times in a row, each time concurrent with a different distractor sound. McDermott, Wrobleski, and Oxenham (2011) demonstrated that repetition causes the target sound to be segregated from the distractors. Segregation was queried by asking listeners whether a subsequent probe sound was identical to the target. A control task presented similar stimuli but probed discrimination without engaging segregation processes. We present results from 3 different decoy tasks: a visual multiple object tracking task, a rapid serial visual presentation (RSVP) digit encoding task, and a demanding auditory monitoring task. Load was manipulated by using high- and low-demand versions of each decoy task. The data provide converging evidence of a small effect of attention that is nonspecific, in that it affected the segregation and control tasks to a similar extent. In all cases, segregation performance remained high despite the presence of a concurrent, objectively demanding decoy task. The results suggest that repetition-based segregation is robust to inattention.
Semantic Neighborhood Effects for Abstract versus Concrete Words
Danguecan, Ashley N.; Buchanan, Lori
2016-01-01
Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words. PMID:27458422
The impact of representation format and task instruction on student understanding in science
NASA Astrophysics Data System (ADS)
Stephenson, Susan Raatz
The purpose of this study is to examine how representation format and task instructions impact student learning in a science domain. Learning outcomes were assessed via measures of mental model, declarative knowledge, and knowledge inference. Students were asked to use one of two forms of representation, either drawing or writing, during study of a science text. Further, instructions (summarize vs. explain) were varied to determine if students' intended use of the representation influenced learning. Thus, this study used a 2 (drawing vs. writing) X 2 (summarize vs. explain) between-subjects design. Drawing was hypothesized to require integration across learning materials regardless of task instructions, because drawings (by definition) require learners to integrate new information into a visual representation. Learning outcomes associated with writing were hypothesized to depend upon task instructions: when asked to summarize, writing should result in reproduction of text; when asked to explain, writing should emphasize integration processes. Because integration processes require connecting and analyzing new and prior information, it also was predicted that drawing (across both conditions of task instructions) and writing (when combined with the explain task instructions) would result in increased metacognitive monitoring. Metacognitive monitoring was assessed indirectly via responses to metacognitive prompts interspersed throughout the study.
Visalli, Antonino; Vallesi, Antonino
2018-01-01
Visual search tasks have often been used to investigate how cognitive processes change with expertise. Several studies have shown visual experts' advantages in detecting objects related to their expertise. Here, we tried to extend these findings by investigating whether professional search experience could boost top-down monitoring processes involved in visual search, independently of advantages specific to objects of expertise. To this aim, we recruited a group of quality-control workers employed in citrus farms. Given the specific features of this type of job, we expected that the extensive employment of monitoring mechanisms during orange selection could enhance these mechanisms even in search situations in which orange-related expertise is not suitable. To test this hypothesis, we compared performance of our experimental group and of a well-matched control group on a computerized visual search task. In one block the target was an orange (expertise target) while in the other block the target was a Smurfette doll (neutral target). The a priori hypothesis was to find an advantage for quality-controllers in those situations in which monitoring was especially involved, that is, when deciding the presence/absence of the target required a more extensive inspection of the search array. Results were consistent with our hypothesis. Quality-controllers were faster in those conditions that extensively required monitoring processes, specifically, the Smurfette-present and both target-absent conditions. No differences emerged in the orange-present condition, which turned out to rely mainly on bottom-up processes. These results suggest that top-down processes in visual search can be enhanced through immersive real-life experience beyond visual expertise advantages. PMID:29497392
Surgical simulation tasks challenge visual working memory and visual-spatial ability differently.
Schlickum, Marcus; Hedman, Leif; Enochsson, Lars; Henningsohn, Lars; Kjellin, Ann; Felländer-Tsai, Li
2011-04-01
New strategies for selection and training of physicians are emerging. Previous studies have demonstrated correlations of visual-spatial ability and visual working memory with surgical simulator performance. The aim of this study was to perform a detailed analysis on how these abilities are associated with metrics in simulator performance with different task content. The hypothesis is that the importance of visual-spatial ability and visual working memory varies with different task contents. Twenty-five medical students participated in the study that involved testing visual-spatial ability using the MRT-A test and visual working memory using the RoboMemo computer program. Subjects were also trained and tested for performance in three different surgical simulators. The scores from the psychometric tests and the performance metrics were then correlated using multivariate analysis. MRT-A score correlated significantly with the performance metrics Efficiency of screening (p = 0.006) and Total time (p = 0.01) in the GI Mentor II task and Total score (p = 0.02) in the MIST-VR simulator task. In the Uro Mentor task, both the MRT-A score and the visual working memory 3-D cube test score as presented in the RoboMemo program (p = 0.02) correlated with Total score (p = 0.004). In this study we have shown that some differences exist regarding the impact of visual abilities and task content on simulator performance. When designing future cognitive training programs and testing regimes, one might have to consider that the design must be adjusted in accordance with the specific surgical task to be trained.
SUMO: operation and maintenance management web tool for astronomical observatories
NASA Astrophysics Data System (ADS)
Mujica-Alvarez, Emma; Pérez-Calpena, Ana; García-Vargas, María. Luisa
2014-08-01
SUMO is an Operation and Maintenance Management web tool, which allows managing the operation and maintenance activities and resources required for the exploitation of a complex facility. SUMO's main capabilities are: information repository, assets and stock control, task scheduler, executed-task archive, configuration and anomaly control and notification, and user management. The information needed to operate and maintain the system must be initially stored in the tool database. SUMO automatically schedules the periodic tasks and facilitates the searching and programming of the non-periodic tasks. Task planning can be visualized in different formats and dynamically edited to be adjusted to the available resources, anomalies, dates and other constraints that can arise during daily operation. SUMO provides warnings to the users notifying potential conflicts related to the required personnel availability or the spare stock for the scheduled tasks. To conclude, SUMO has been designed as a tool to help during the operation management of a scientific facility, and in particular an astronomical observatory. This is done by controlling all operating parameters: personnel, assets, spare and supply stocks, tasks and time constraints.
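The automatic scheduling of periodic tasks described above can be modeled very simply: expand each task's period into dated occurrences over a planning window. The task names and periods below are invented for illustration and do not reflect SUMO's actual data model:

```python
from datetime import date, timedelta

def schedule_periodic(tasks, start, end):
    """Expand periodic maintenance tasks into a chronologically sorted plan.

    `tasks` maps a task name to its repeat period in days; each task is
    assumed to first occur on `start` and to repeat until `end` (inclusive).
    Returns a list of (date, task_name) pairs.
    """
    plan = []
    for name, period_days in tasks.items():
        day = start
        while day <= end:
            plan.append((day, name))
            day += timedelta(days=period_days)
    plan.sort()  # chronological order; same-day tasks sort by name
    return plan

plan = schedule_periodic(
    {"clean optics": 7, "check UPS": 30},  # hypothetical observatory tasks
    date(2014, 1, 1),
    date(2014, 1, 31),
)
```

A real tool would additionally check each occurrence against personnel availability and spare stock to raise the conflict warnings mentioned above.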
Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.
2016-01-01
Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small, visual capture is likely to occur, and when disparity is large, it is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur was narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
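The abstract does not spell out its Bayesian inference model; the standard causal-inference formulation (Körding et al., 2007) can be sketched as follows. All noise parameters (`sigma_a`, `sigma_v`, `sigma_p`) and the common-cause prior are hypothetical, not the authors' fitted values:

```python
import math

def p_common(x_a, x_v, sigma_a=8.0, sigma_v=2.0, sigma_p=15.0, prior_c=0.5):
    """Posterior probability that auditory and visual measurements share a cause."""
    # Likelihood under one common source (source location integrated out).
    var_c = (sigma_a**2 * sigma_v**2 + sigma_a**2 * sigma_p**2
             + sigma_v**2 * sigma_p**2)
    like_c = math.exp(-0.5 * ((x_a - x_v)**2 * sigma_p**2
                              + x_a**2 * sigma_v**2
                              + x_v**2 * sigma_a**2) / var_c) \
             / (2 * math.pi * math.sqrt(var_c))
    # Likelihood under two independent sources.
    var_i = (sigma_a**2 + sigma_p**2) * (sigma_v**2 + sigma_p**2)
    like_i = math.exp(-0.5 * (x_a**2 / (sigma_a**2 + sigma_p**2)
                              + x_v**2 / (sigma_v**2 + sigma_p**2))) \
             / (2 * math.pi * math.sqrt(var_i))
    return like_c * prior_c / (like_c * prior_c + like_i * (1 - prior_c))

def localize_auditory(x_a, x_v, sigma_a=8.0, sigma_v=2.0, sigma_p=15.0):
    """Model-averaged auditory location estimate (graded visual capture)."""
    p = p_common(x_a, x_v, sigma_a, sigma_v, sigma_p)
    w_a, w_v, w_p = 1 / sigma_a**2, 1 / sigma_v**2, 1 / sigma_p**2
    fused = (x_a * w_a + x_v * w_v) / (w_a + w_v + w_p)  # if common cause
    alone = x_a * w_a / (w_a + w_p)                      # if independent
    return p * fused + (1 - p) * alone
```

With this structure, narrowing or widening the common-cause prior (`prior_c`) reproduces the task-dependent change in the range of disparities over which capture occurs.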
Synchronization of spontaneous eyeblinks while viewing video stories
Nakano, Tamami; Yamamoto, Yoshiharu; Kitajo, Keiichi; Takahashi, Toshimitsu; Kitazawa, Shigeru
2009-01-01
Blinks are generally suppressed during a task that requires visual attention and tend to occur immediately before or after the task when the timing of its onset and offset is explicitly given. During the viewing of video stories, blinks are expected to occur at explicit breaks such as scene changes. However, given that the scene length is unpredictable, there should also be appropriate timing for blinking within a scene to prevent temporal loss of critical visual information. Here, we show that spontaneous blinks were highly synchronized between and within subjects when they viewed the same short video stories, but were not explicitly tied to the scene breaks. Synchronized blinks occurred during scenes that required less attention, such as at the conclusion of an action, during the absence of the main character, during a long shot, and during repeated presentations of a similar scene. In contrast, blink synchronization was not observed when subjects viewed a background video or when they listened to a story read aloud. The results suggest that humans share a mechanism for controlling the timing of blinks that searches for an implicit timing that is appropriate to minimize the chance of losing critical information while viewing a stream of visual events. PMID:19640888
The effect of response mode on lateralized lexical decision performance.
Weems, Scott A; Zaidel, Eran
2005-01-01
We examined the effect of manipulations of response programming, i.e. post-lexical decision making requirements, on lateralized lexical decision. Although response hand manipulations tend to elicit weaker laterality effects than those involving visual field of presentation, the implementation of different lateralized response strategies remains relatively unexplored. Four different response conditions were compared in a between-subjects design: (1) unimanual, (2) bimanual, (3) congruent visual field/response hand, and (4) confounded response hand/target lexicality response. It was observed that hemispheric specialization and interaction effects during the lexical decision task remained unchanged despite the very different response requirements. However, a priori examination of each condition revealed that some manipulations yielded a reduced power to detect laterality effects. The consistent observation of left hemisphere specialization, and both left and right hemisphere lexicality priming effects (interhemispheric transfer), indicate that these effects are relatively robust and unaffected by late occurring processes in the lexical decision task. It appears that the lateralized response mode neither determines nor reflects the laterality of decision processes. In contrast, the target visual half-field is critical for determining the deciding hemisphere and is a sensitive index of hemispheric specialization, as well as of directional interhemispheric transfer.
Shielding cognition from nociception with working memory.
Legrain, Valéry; Crombez, Geert; Plaghki, Léon; Mouraux, André
2013-01-01
Because pain often signals the occurrence of potential tissue damage, nociceptive stimuli have the capacity to capture attention and interfere with ongoing cognitive activities. Working memory is known to guide the orientation of attention by maintaining goal priorities active during the achievement of a task. This study investigated whether the cortical processing of nociceptive stimuli and their ability to capture attention are under the control of working memory. Event-related brain potentials (ERPs) were recorded while participants performed primary tasks on visual targets that required or did not require rehearsal in working memory (1-back vs 0-back conditions). The visual targets were shortly preceded by task-irrelevant tactile stimuli. Occasionally, in order to distract the participants, the tactile stimuli were replaced by novel nociceptive stimuli. In the 0-back conditions, task performance was disrupted by the occurrence of the nociceptive distracters, as reflected by the increased reaction times in trials with novel nociceptive distracters as compared to trials with standard tactile distracters. In the 1-back conditions, such a difference disappeared, suggesting that attentional capture and task disruption induced by nociceptive distracters were suppressed by working memory, regardless of task demands. Most importantly, in the conditions involving working memory, the magnitude of nociceptive ERPs, including ERP components at early latency, was significantly reduced. This indicates that working memory can modulate the cortical processing of nociceptive input already at its earliest stages, which could explain why working memory reduces the ability of nociceptive stimuli to capture attention and disrupt performance of the primary task. It is concluded that protecting cognitive processing against pain interference is best guaranteed by keeping pain-related information out of working memory. Copyright © 2012 Elsevier Ltd. All rights reserved.
Stimulation of the substantia nigra influences the specification of memory-guided saccades
Mahamed, Safraaz; Garrison, Tiffany J.; Shires, Joel
2013-01-01
In the absence of sensory information, we rely on past experience or memories to guide our actions. Because previous experimental and clinical reports implicate basal ganglia nuclei in the generation of movement in the absence of sensory stimuli, we ask here whether one output nucleus of the basal ganglia, the substantia nigra pars reticulata (nigra), influences the specification of an eye movement in the absence of sensory information to guide the movement. We manipulated the level of activity of neurons in the nigra by introducing electrical stimulation to the nigra at different time intervals while monkeys made saccades to different locations in two conditions: one in which the target location remained visible and a second in which the target location appeared only briefly, requiring information stored in memory to specify the movement. Electrical manipulation of the nigra occurring during the delay period of the task, when information about the target was maintained in memory, altered the direction and the occurrence of subsequent saccades. Stimulation during other intervals of the memory task or during the delay period of the visually guided saccade task had less effect on eye movements. On stimulated trials, and only when the visual stimulus was absent, monkeys occasionally (∼20% of the time) failed to make saccades. When monkeys made saccades in the absence of a visual stimulus, stimulation of the nigra resulted in a rotation of the endpoints ipsilaterally (∼2°) and increased the reaction time of contralaterally directed saccades. When the visual stimulus was present, stimulation of the nigra resulted in no significant rotation and decreased the reaction time of contralaterally directed saccades slightly. Based on these measurements, stimulation during the delay period of the memory-guided saccade task influenced the metrics of saccades much more than did stimulation during the same period of the visually guided saccade task. 
Because these effects occurred with manipulation of nigral activity well before the initiation of saccades and in trials in which the visual stimulus was absent, we conclude that information from the basal ganglia influences the specification of an action as it is evolving primarily during performance of memory-guided saccades. When visual information is available to guide the specification of the saccade, as occurs during visually guided saccades, basal ganglia information is less influential. PMID:24259551
Neural correlates of auditory recognition memory in the primate dorsal temporal pole
Ng, Chi-Wing; Plakke, Bethany
2013-01-01
Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects. PMID:24198324
Active visual search in non-stationary scenes: coping with temporal variability and uncertainty
NASA Astrophysics Data System (ADS)
Ušćumlić, Marija; Blankertz, Benjamin
2016-02-01
Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study includes two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamics of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. 
Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamics of ocular behavior (i.e., dwell time and fixation duration) in an active search task. In addition, our method to improve single-trial detection performance in this adverse scenario is an important step in making brain-computer interfacing technology available for human-computer interaction applications.
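Fixation-related single-trial analysis starts by cutting EEG epochs around fixation onsets; the temporal jitter the study describes is jitter of the cognitive response within such epochs. A minimal sketch, with sampling rate, window, and data purely illustrative:

```python
def fixation_epochs(eeg, fs, fixation_onsets, tmin=-0.2, tmax=0.8):
    """Cut fixation-locked epochs from a single-channel EEG trace.

    `eeg` is a list of samples at rate `fs` (Hz); onsets are in seconds.
    Epochs that would run past either end of the recording are dropped.
    """
    n0, n1 = int(tmin * fs), int(tmax * fs)
    epochs = []
    for onset in fixation_onsets:
        i = int(round(onset * fs))
        if i + n0 >= 0 and i + n1 <= len(eeg):
            epochs.append(eeg[i + n0:i + n1])
    return epochs

eeg = [0.0] * 1000                       # 10 s of flat signal at 100 Hz
eps = fixation_epochs(eeg, 100, [1.0, 5.0, 9.9])
len(eps)  # 2 (the last onset is too close to the end of the recording)
```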
Visual scanning behavior and pilot workload
NASA Technical Reports Server (NTRS)
Harris, R. L., Sr.; Tole, J. R.; Stephens, A. T.; Ephrath, A. R.
1982-01-01
This paper describes an experimental paradigm and a set of results which demonstrate a relationship among the level of performance on a skilled man-machine control task, the skill of the operator, the level of mental difficulty induced by an additional task imposed on the basic control task, and visual scanning performance. During a constant, simulated piloting task, visual scanning of instruments was found to vary with the difficulty of a verbal mental loading task. The average dwell time of each fixation on the pilot's primary instrument increased with the estimated skill level of the pilots, with novices being affected by the loading task much more than experts. The results suggest that visual scanning of instruments in a control task may be an indicator of both workload and skill.
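The average dwell-time measure used here can be computed directly from a fixation log. A toy sketch, with instrument names and durations entirely hypothetical:

```python
from statistics import mean

def mean_dwell(fixations, instrument):
    """Average dwell time (s) on one instrument, from a log of
    (instrument, dwell_seconds) pairs; 0.0 if it was never fixated."""
    dwells = [d for inst, d in fixations if inst == instrument]
    return mean(dwells) if dwells else 0.0

log = [("attitude", 0.6), ("altimeter", 0.3), ("attitude", 0.8), ("airspeed", 0.4)]
mean_dwell(log, "attitude")  # 0.7
```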
Patel, Jigna; Qiu, Qinyin; Yarossi, Mathew; Merians, Alma; Massood, Supriya; Tunik, Eugene; Adamovich, Sergei; Fluet, Gerard
2016-01-01
Purpose: Explore the potential benefits of using priming methods prior to an active hand task in the acute phase post-stroke in persons with severe upper extremity hemiparesis. Methods: Five individuals were trained using priming techniques including virtual reality (VR) based visual mirror feedback and contralaterally controlled passive movement strategies prior to training with an active pinch force modulation task. Clinical, kinetic, and neurophysiological measurements were taken pre and post the training period. Clinical measures were taken at six months post training. Results: The two priming simulations and active training were well tolerated early after stroke. Priming effects were suggested by increased maximal pinch force immediately after visual and movement based priming. Despite having no clinically observable movement distally, the subjects were able to volitionally coordinate isometric force and muscle activity (EMG) in a pinch tracing task. The Root Mean Square Error (RMSE) of force during the pinch trace task gradually decreased over the training period suggesting learning may have occurred. Changes in motor cortical neurophysiology were seen in the unaffected hemisphere using Transcranial Magnetic Stimulation (TMS) mapping. Significant improvements in motor recovery as measured by the Action Research Arm Test (ARAT) and the Upper Extremity Fugl Meyer Assessment (UEFMA) were demonstrated at six months post training by three of the five subjects. Conclusion: This study suggests that an early hand-based intervention using visual and movement based priming activities and a scaled motor task allows participation by persons without the motor control required for traditionally presented rehabilitation and testing. PMID:27636200
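The RMSE tracking measure is a standard formula; a minimal sketch, with the force samples below hypothetical rather than study data:

```python
import math

def rmse(produced, target):
    """Root-mean-square error between produced force samples and the
    target force trace, assuming both share the same sampling grid."""
    assert len(produced) == len(target)
    return math.sqrt(sum((p - t)**2 for p, t in zip(produced, target))
                     / len(target))

rmse([2.0, 2.5, 3.0], [2.0, 2.0, 2.0])  # ≈ 0.645 (force units)
```

A decreasing RMSE across sessions, as reported, indicates the produced force hugging the target trace more closely.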
Effects of Individual Differences in Working Memory on Plan Presentational Choices
Tintarev, Nava; Masthoff, Judith
2016-01-01
This paper addresses research questions that are central to the area of visualization interfaces for decision support: (RQ1) whether individual user differences in working memory should be considered when choosing how to present visualizations; (RQ2) how to present the visualization to support effective decision making and processing; and (RQ3) how to evaluate the effectiveness of presentational choices. These questions are addressed in the context of presenting plans, or sequences of actions, to users. The experiments are conducted in several domains, and the findings are relevant to applications such as semi-autonomous systems in logistics. That is, scenarios that require the attention of humans who are likely to be interrupted, and require good performance but are not time critical. Following a literature review of different types of individual differences in users that have been found to affect the effectiveness of presentational choices, we consider specifically the influence of individuals' working memory (RQ1). The review also considers metrics used to evaluate presentational choices, and types of presentational choices considered. As for presentational choices (RQ2), we consider a number of variants including interactivity, aggregation, layout, and emphasis. Finally, to evaluate the effectiveness of plan presentational choices (RQ3) we adopt a layered-evaluation approach and measure performance in a dual task paradigm, involving both task interleaving and evaluation of situational awareness. This novel methodology for evaluating visualizations is employed in a series of experiments investigating presentational choices for a plan. A key finding is that emphasizing steps (by highlighting borders) can improve effectiveness on a primary task, but only when controlling for individual variation in working memory. PMID:27899905
Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.
Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter
Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing parameter-applied images, which causes unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D presentation states (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
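The core idea of 3DPR, storing a small parameter description per pipeline stage instead of the rendered images, can be sketched as a serializable object. The fields and values here are illustrative only and are not the proposed DICOM encoding:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class PresentationState3D:
    # One parameter set per stage of the 3D visualization pipeline.
    preprocessing: dict = field(default_factory=dict)   # e.g. windowing, filtering
    segmentation: dict = field(default_factory=dict)    # e.g. thresholds, label maps
    postprocessing: dict = field(default_factory=dict)  # e.g. smoothing, cropping
    rendering: dict = field(default_factory=dict)       # e.g. transfer function, camera

state = PresentationState3D(
    preprocessing={"window_center": 40, "window_width": 400},
    rendering={"camera_azimuth_deg": 30,
               "opacity_transfer": [[0, 0.0], [300, 0.8]]},
)
payload = json.dumps(asdict(state))   # compact description; volume stays untouched
restored = PresentationState3D(**json.loads(payload))
```

Re-applying the restored parameters to the original volume regenerates the rendering, which is what spares the duplicated image data the abstract describes.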
Iconic memory and parietofrontal network: fMRI study using temporal integration.
Saneyoshi, Ayako; Niimi, Ryosuke; Suetsugu, Tomoko; Kaminaga, Tatsuro; Yokosawa, Kazuhiko
2011-08-03
We investigated the neural basis of iconic memory using functional magnetic resonance imaging. The parietofrontal network of selective attention is reportedly relevant to readout from iconic memory. We adopted a temporal integration task that requires iconic memory but not selective attention. The results showed that the task activated the parietofrontal network, confirming that the network is involved in readout from iconic memory. We further tested a condition in which temporal integration was performed by visual short-term memory but not by iconic memory. However, no brain region revealed higher activation for temporal integration by iconic memory than for temporal integration by visual short-term memory. This result suggested that there is no localized brain region specialized for iconic memory per se.
Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians.
Clayton, Kameron K; Swaminathan, Jayaganesh; Yazdanbakhsh, Arash; Zuk, Jennifer; Patel, Aniruddh D; Kidd, Gerald
2016-01-01
The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, "cocktail-party" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. 
Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the "cocktail party problem".
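The correlations reported above can be sketched with a plain Pearson computation; the working-memory scores and spatial-hearing thresholds below are hypothetical, not the study's data:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

wm = [4, 6, 5, 8, 7]        # hypothetical auditory working-memory scores
srt = [-2, -5, -4, -8, -6]  # hypothetical speech-reception thresholds (dB)
r = pearson_r(wm, srt)      # strongly negative: better WM, lower threshold
```

The stepwise regression in the study adds predictors (musicianship, MOT performance) one at a time on top of this kind of bivariate association.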
Psychological Issues in Online Adaptive Task Allocation
NASA Technical Reports Server (NTRS)
Morris, N. M.; Rouse, W. B.; Ward, S. L.; Frey, P. R.
1984-01-01
Adaptive aiding is an idea that offers potential for improvement over many current approaches to aiding in human-computer systems. The expected return of tailoring the system to fit the user could be in the form of improved system performance and/or increased user satisfaction. Issues such as the manner in which information is shared between human and computer, the appropriate division of labor between them, and the level of autonomy of the aid are explored. A simulated visual search task was developed. Subjects are required to identify targets in a moving display while performing a compensatory sub-critical tracking task. By manipulating characteristics of the situation such as imposed task-related workload and effort required to communicate with the computer, it is possible to create conditions in which interaction with the computer would be more or less desirable. The results of preliminary research using this experimental scenario are presented, and future directions for this research effort are discussed.
Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity.
Chen, Yi-Chuan; Spence, Charles
2011-10-01
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɔg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
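Sensitivity and criterion under signal detection theory follow standard Gaussian-model formulas from hit and false-alarm rates; a sketch, with the rates below hypothetical:

```python
from statistics import NormalDist

def d_prime_and_criterion(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and response criterion (c)
    from hit and false-alarm rates (both strictly between 0 and 1)."""
    z = NormalDist().inv_cdf          # inverse of the standard normal CDF
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c

d, c = d_prime_and_criterion(0.85, 0.20)  # d' ≈ 1.88, near-neutral criterion
```

A semantic prime that enhances sensitivity raises d' without necessarily shifting c, which is the dissociation the signal-detection analysis is designed to expose.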
Memory-guided force control in healthy younger and older adults.
Neely, Kristina A; Samimy, Shaadee; Blouch, Samantha L; Wang, Peiyuan; Chennavasin, Amanda; Diaz, Michele T; Dennis, Nancy A
2017-08-01
Successful performance of a memory-guided motor task requires participants to store and then recall an accurate representation of the motor goal. Further, participants must monitor motor output to make adjustments in the absence of visual feedback. The goal of this study was to examine memory-guided grip force in healthy younger and older adults and compare it to performance on behavioral tasks of working memory. Previous work demonstrates that healthy adults decrease force output as a function of time when visual feedback is not available. We hypothesized that older adults would decrease force output at a faster rate than younger adults, due to age-related deficits in working memory. Two groups of participants, younger adults (YA: N = 32, mean age 21.5 years) and older adults (OA: N = 33, mean age 69.3 years), completed four 20-s trials of isometric force with their index finger and thumb, equal to 25% of their maximum voluntary contraction. In the full-vision condition, visual feedback was available for the duration of the trial. In the no-vision condition, visual feedback was removed for the last 12 s of each trial. Participants were asked to maintain constant force output in the absence of visual feedback. Participants also completed tasks of word recall and recognition and visuospatial working memory. Counter to our predictions, when visual feedback was removed, younger adults decreased force at a faster rate than older adults, and the rate of decay was not associated with behavioral performance on tests of working memory.
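The rate of force decay after feedback removal can be estimated as a least-squares slope over the no-vision window; the samples below are hypothetical:

```python
from statistics import mean

def decay_slope(t, force):
    """Least-squares slope of force over time after feedback removal;
    a more negative slope means faster force decay."""
    mt, mf = mean(t), mean(force)
    num = sum((a - mt) * (b - mf) for a, b in zip(t, force))
    den = sum((a - mt)**2 for a in t)
    return num / den

t = [0, 3, 6, 9, 12]                    # seconds without visual feedback
force = [25.0, 24.0, 22.5, 21.5, 20.0]  # % MVC, hypothetical samples
decay_slope(t, force)  # ≈ -0.42 %MVC per second
```

Comparing this slope between age groups is the kind of contrast the study reports; an exponential fit is a common alternative when decay flattens over longer windows.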
Dinka, David; Nyce, James M; Timpka, Toomas
2009-06-01
The aim of this study was to investigate how the clinical use of visualization technology can be advanced by the application of a situated cognition perspective. The data were collected in the GammaKnife radiosurgery setting and analyzed using qualitative methods. Observations and in-depth interviews with neurosurgeons and physicists were performed at three clinics using the Leksell GammaKnife. The users' ability to perform cognitive tasks was found to be reduced each time visualizations incongruent with the particular user's perception of clinical reality were used. The main issue here was a lack of transparency, i.e. a black box problem where machine representations "stood between" users and the cognitive tasks they wanted to perform. For neurosurgeons, transparency meant their previous experience from traditional surgery could be applied, i.e. that they were not forced to perform additional cognitive work. From the view of the physicists, on the other hand, the concept of transparency was associated with mathematical precision and avoiding creating a cognitive distance between basic patient data and what is experienced as clinical reality. The physicists approached clinical visualization technology as though it was a laboratory apparatus--one that required continual adjustment and assessment in order to "capture" a quantitative clinical reality. Designers of visualization technology need to compare the cognitive interpretations generated by the new visualization systems to conceptions generated during "traditional" clinical work. This means that the viewpoint of different clinical user groups involved in a given clinical task would have to be taken into account as well. A way forward would be to acknowledge that visualization is a socio-cognitive function that has practice-based antecedents and consequences, and to reconsider what analytical and scientific challenges this presents us with.
Advanced Video Activity Analytics (AVAA): Human Factors Evaluation
2015-05-01
video, and 3) creating and saving annotations (Fig. 11). (The logging program was updated after the pilot to also capture search clicks.) Playing and ... visual search task and the auditory task together and thus automatically focused on the visual task. Alternatively, the operator may have intentionally ... affect performance on the primary task; however, in the current test there was no apparent effect on the operator's performance in the visual search task.
A working memory bias for alcohol-related stimuli depends on drinking score.
Kessler, Klaus; Pajak, Katarzyna Malgorzata; Harkin, Ben; Jones, Barry
2013-03-01
We tested 44 participants with respect to their working memory (WM) performance on alcohol-related versus neutral visual stimuli. Previously an alcohol attentional bias (AAB) had been reported using these stimuli, where the attention of frequent drinkers was automatically drawn toward alcohol-related items (e.g., beer bottle). The present study set out to provide evidence for an alcohol memory bias (AMB) that would persist over longer time-scales than the AAB. The WM task we used required memorizing 4 stimuli in their correct locations and a visual interference task was administered during a 4-sec delay interval. A subsequent probe required participants to indicate whether a stimulus was shown in the correct or incorrect location. For each participant we calculated a drinking score based on 3 items derived from the Alcohol Use Questionnaire, and we observed that higher scorers better remembered alcohol-related images compared with lower scorers, particularly when these were presented in their correct locations upon recall. This provides first evidence for an AMB. It is important to highlight that this effect persisted over a 4-sec delay period including a visual interference task that erased iconic memories and diverted attention away from the encoded items, thus the AMB cannot be reduced to the previously reported AAB. Our finding calls for further investigation of alcohol-related cognitive biases in WM, and we propose a preliminary model that may guide future research. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Neurophysiological correlates of relatively enhanced local visual search in autistic adolescents.
Manjaly, Zina M; Bruning, Nicole; Neufang, Susanne; Stephan, Klaas E; Brieber, Sarah; Marshall, John C; Kamp-Becker, Inge; Remschmidt, Helmut; Herpertz-Dahlmann, Beate; Konrad, Kerstin; Fink, Gereon R
2007-03-01
Previous studies found normal or even superior performance of autistic patients on visuospatial tasks requiring local search, like the Embedded Figures Task (EFT). A well-known interpretation of this is "weak central coherence", i.e. autistic patients may show a reduced general ability to process information in its context and may therefore have a tendency to favour local over global aspects of information processing. An alternative view is that the local processing advantage in the EFT may result from a relative amplification of early perceptual processes which boosts processing of local stimulus properties but does not affect processing of global context. This study used functional magnetic resonance imaging (fMRI) in 12 autistic adolescents (9 Asperger and 3 high-functioning autistic patients) and 12 matched controls to help distinguish, on neurophysiological grounds, between these two accounts of EFT performance in autistic patients. Behaviourally, we found autistic individuals to be unimpaired during the EFT while they were significantly worse at performing a closely matched control task with minimal local search requirements. The fMRI results showed that activations specific for the local search aspects of the EFT were left-lateralised in parietal and premotor areas for the control group (as previously demonstrated for adults), whereas for the patients these activations were found in right primary visual cortex and bilateral extrastriate areas. These results suggest that enhanced local processing in early visual areas, as opposed to impaired processing of global context, is characteristic for performance of the EFT by autistic patients.
An overview of 3D software visualization.
Teyseyre, Alfredo R; Campo, Marcelo R
2009-01-01
Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. During many years, visualization in 2D space has been actively studied, but in the last decade, researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects like: visual representations, interaction issues, evaluation methods and development tools. We also perform a survey of some representative tools to support different tasks, i.e., software maintenance and comprehension, requirements validation and algorithm animation for educational purposes, among others. Finally, we conclude identifying future research directions.
Hindi Attar, Catherine; Andersen, Søren K; Müller, Matthias M
2010-12-01
Selective attention to a primary task can be biased by the occurrence of emotional distractors that involuntarily attract attention due to their intrinsic stimulus significance. What is largely unknown is the time course and magnitude of competitive interactions between a to-be-attended foreground task and emotional distractors. We used pleasant, unpleasant and neutral pictures from the International Affective Picture System (IAPS) that were either presented in intact or phase-scrambled form. Pictures were superimposed by a flickering display of moving random dots, which constituted the primary task and enabled us to record steady-state visual evoked potentials (SSVEPs) as a continuous measure of attentional resource allocation directed to the task. Subjects were required to attend to the dots and to detect short intervals of coherent motion while ignoring the background pictures. We found that pleasant and unpleasant relative to neutral pictures more strongly influenced task-related processing as reflected in a significant decrease in SSVEP amplitudes and target detection rates, both covering a time window of several hundred milliseconds. Strikingly, the effect of semantic relative to phase-scrambled pictures on task-related activity was much larger, emerged earlier and lasted longer in time compared to the specific effect of emotion. The observed differences in size and duration of time courses of semantic and emotional picture processing strengthen the assumption of separate functional mechanisms for both processes rather than a general boosting of neural activity in favor of emotional stimulus processing. Copyright © 2010 Elsevier Inc. All rights reserved.
Boyle, Gregory J; Neumann, David L; Furedy, John J; Westbury, H Rae
2010-04-01
This paper reports sex differences in cognitive task performance that emerged when 39 Australian university undergraduates (19 men, 20 women) were asked to solve verbal (lexical) and visual-spatial cognitive matching tasks which varied in difficulty and visual field of presentation. Sex significantly interacted with task type, task difficulty, laterality, and changes in performance across trials. The results revealed that the significant individual-differences' variable of sex does not always emerge as a significant main effect, but instead in terms of significant interactions with other variables manipulated experimentally. Our results show that sex differences must be taken into account when conducting experiments into human cognitive-task performance.
Lu, Sara A; Wickens, Christopher D; Prinet, Julie C; Hutchins, Shaun D; Sarter, Nadine; Sebok, Angelia
2013-08-01
The aim of this study was to integrate empirical data showing the effects of interrupting task modality on the performance of an ongoing visual-manual task and the interrupting task itself. The goal is to support interruption management and the design of multimodal interfaces. Multimodal interfaces have been proposed as a promising means to support interruption management. To ensure the effectiveness of this approach, their design needs to be based on an analysis of empirical data concerning the effectiveness of individual and redundant channels of information presentation. Three meta-analyses were conducted to contrast performance on an ongoing visual task and interrupting tasks as a function of interrupting task modality (auditory vs. tactile, auditory vs. visual, and single modality vs. redundant auditory-visual). In total, 68 studies were included and six moderator variables were considered. The main findings from the meta-analyses are that response times are faster for tactile interrupting tasks in case of low-urgency messages. Accuracy is higher with tactile interrupting tasks for low-complexity signals but higher with auditory interrupting tasks for high-complexity signals. Redundant auditory-visual combinations are preferable for communication tasks during high workload and with a small visual angle of separation. The three meta-analyses contribute to the knowledge base in multimodal information processing and design. They highlight the importance of moderator variables in predicting the effects of interruption task modality on ongoing and interrupting task performance. The findings from this research will help inform the design of multimodal interfaces in data-rich, event-driven domains.
A design space of visualization tasks.
Schulz, Hans-Jörg; Nocke, Thomas; Heitzler, Magnus; Schumann, Heidrun
2013-12-01
Knowledge about visualization tasks plays an important role in choosing or building suitable visual representations to pursue them. Yet, tasks are a multi-faceted concept and it is thus not surprising that the many existing task taxonomies and models all describe different aspects of tasks, depending on what these task descriptions aim to capture. This results in a clear need to bring these different aspects together under the common hood of a general design space of visualization tasks, which we propose in this paper. Our design space consists of five design dimensions that characterize the main aspects of tasks and that have so far been distributed across different task descriptions. We exemplify its concrete use by applying our design space in the domain of climate impact research. To this end, we propose interfaces to our design space for different user roles (developers, authors, and end users) that allow users of different levels of expertise to work with it.
Visual Task Demands and the Auditory Mismatch Negativity: An Empirical Study and a Meta-Analysis
Wiens, Stefan; Szychowska, Malina; Nilsson, Mats E.
2016-01-01
Because the auditory system is particularly useful in monitoring the environment, previous research has examined whether task-irrelevant, auditory distracters are processed even if subjects focus their attention on visual stimuli. This research suggests that attentionally demanding visual tasks decrease the auditory mismatch negativity (MMN) to simultaneously presented auditory distractors. Because a recent behavioral study found that high visual perceptual load decreased detection sensitivity of simultaneous tones, we used a similar task (n = 28) to determine if high visual perceptual load would reduce the auditory MMN. Results suggested that perceptual load did not decrease the MMN. At face value, these nonsignificant findings may suggest that effects of perceptual load on the MMN are smaller than those of other demanding visual tasks. If so, effect sizes should differ systematically between the present and previous studies. We conducted a selective meta-analysis of published studies in which the MMN was derived from the EEG, the visual task demands were continuous and varied between high and low within the same task, and the task-irrelevant tones were presented in a typical oddball paradigm simultaneously with the visual stimuli. Because the meta-analysis suggested that the present (null) findings did not differ systematically from previous findings, the available evidence was combined. Results of this meta-analysis confirmed that demanding visual tasks reduce the MMN to auditory distracters. However, because the meta-analysis was based on small studies and because of the risk for publication biases, future studies should be preregistered with large samples (n > 150) to provide confirmatory evidence for the results of the present meta-analysis. These future studies should also use control conditions that reduce confounding effects of neural adaptation, and use load manipulations that are defined independently from their effects on the MMN. PMID:26741815
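The pooling of small studies described in this meta-analysis is conventionally done with inverse-variance weighting. A minimal fixed-effect sketch with illustrative effect sizes (not the paper's data):

```python
import math

def fixed_effect_meta(effects, variances):
    """Pool study effect sizes with inverse-variance (fixed-effect) weights.

    Returns the pooled effect and its 95% confidence interval.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical MMN-reduction effect sizes (Hedges' g) from three small studies
pooled, ci = fixed_effect_meta([0.40, 0.25, 0.55], [0.04, 0.09, 0.06])
```

Precise studies (small variances) receive larger weights, which is why a few underpowered studies combined this way can still leave a wide confidence interval.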
Enhancing cognition with video games: a multiple game training study.
Oei, Adam C; Patterson, Michael D
2013-01-01
Previous evidence points to a causal link between playing action video games and enhanced cognition and perception. However, benefits of playing other video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared transfer effects of an action and other non-action types that required different cognitive demands. We instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day/five days a week over four weeks (20 hours). Games included action, spatial memory, match-3, hidden-object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess for transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess for multiple object tracking and cognitive control, as well as a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden object games improved visual search performance while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training. Cognitive improvements were not limited to action game training alone and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play. Thus, many video game training related improvements to cognition may be attributed to near-transfer effects.
Electrophysiological evidence for right frontal lobe dominance in spatial visuomotor learning.
Lang, W; Lang, M; Kornhuber, A; Kornhuber, H H
1986-02-01
Slow negative potential shifts were recorded together with the error made in motor performance when two different groups of 14 students tracked visual stimuli with their right hand. Various visuomotor tasks were compared. A tracking task (T) in which subjects had to track the stimulus directly, showed no decrease of error in motor performance during the experiment. In a distorted tracking task (DT) a continuous horizontal distortion of the visual feedback had to be compensated. The additional demands of this task required visuomotor learning. Another learning condition was a mirrored-tracking task (horizontally inverted tracking, hIT), i.e. an elementary function, such as the concept of changing left and right was interposed between perception and action. In addition, subjects performed a no-tracking control task (NT) in which they started the visual stimulus without tracking it. A slow negative potential shift was associated with the visuomotor performance (TP: tracking potential). In the learning tasks (DT and hIT) this negativity was significantly enhanced over the anterior midline and in hIT frontally and precentrally over both hemispheres. Comparing hIT and T for every subject, the enhancement of the tracking potential in hIT was correlated with the success in motor learning in frontomedial and bilaterally in frontolateral recordings (r = 0.81-0.88). However, comparing DT and T, such a correlation was only found in frontomedial and right frontolateral electrodes (r = 0.5-0.61), but not at the left frontolateral electrode. These experiments are consistent with previous findings and give further neurophysiological evidence for frontal lobe activity in visuomotor learning. The hemispherical asymmetry is discussed in respect to hemispherical specialization (right frontal lobe dominance in spatial visuomotor learning).
NASA Technical Reports Server (NTRS)
Pavel, M.
1993-01-01
This presentation outlines in viewgraph format a general approach to the evaluation of display system quality for aviation applications. This approach is based on the assumption that it is possible to develop a model of the display which captures most of the significant properties of the display. The display characteristics should include spatial and temporal resolution, intensity quantizing effects, spatial sampling, delays, etc. The model must be sufficiently well specified to permit generation of stimuli that simulate the output of the display system. The first step in the evaluation of display quality is an analysis of the tasks to be performed using the display. Thus, for example, if a display is used by a pilot during a final approach, the aesthetic aspects of the display may be less relevant than its dynamic characteristics. The opposite task requirements may apply to imaging systems used for displaying navigation charts. Thus, display quality is defined with regard to one or more tasks. Given a set of relevant tasks, there are many ways to approach display evaluation. The range of evaluation approaches includes visual inspection, rapid evaluation, part-task simulation, and full mission simulation. The work described is focused on two complementary approaches to rapid evaluation. The first approach is based on a model of the human visual system. A model of the human visual system is used to predict the performance of the selected tasks. The model-based evaluation approach permits very rapid and inexpensive evaluation of various design decisions. The second rapid evaluation approach employs specifically designed critical tests that embody many important characteristics of actual tasks. These are used in situations where a validated model is not available. These rapid evaluation tests are being implemented in a workstation environment.
ERIC Educational Resources Information Center
Liu, Duo; Chen, Xi; Chung, Kevin K. H.
2015-01-01
This study examined the relation between the performance in a visual search task and reading ability in 92 third-grade Hong Kong Chinese children. The visual search task, which is considered a measure of visual-spatial attention, accounted for unique variance in Chinese character reading after controlling for age, nonverbal intelligence,…
Visualization and Tracking of Parallel CFD Simulations
NASA Technical Reports Server (NTRS)
Vaziri, Arsi; Kremenetsky, Mark
1995-01-01
We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS) runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation, which is running AVS, are handled by CM/AVS. Partitioning of the visualization task, between CM-5 and the workstation, can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate -> store -> visualize' post-processing approach.
Degraded visual environment image/video quality metrics
NASA Astrophysics Data System (ADS)
Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.
2014-06-01
A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.
Influence of social presence on eye movements in visual search tasks.
Liu, Na; Yu, Ruifeng
2017-12-01
This study employed an eye-tracking technique to investigate the influence of social presence on eye movements in visual search tasks. A total of 20 male subjects performed visual search tasks in a 2 (target presence: present vs. absent) × 2 (task complexity: complex vs. simple) × 2 (social presence: alone vs. a human audience) within-subject experiment. Results indicated that the presence of an audience could evoke a social facilitation effect on response time in visual search tasks. Compared with working alone, the participants made fewer and shorter fixations, larger saccades and shorter scan path in simple search tasks and more and longer fixations, smaller saccades and longer scan path in complex search tasks when working with an audience. The saccade velocity and pupil diameter in the audience-present condition were larger than those in the working-alone condition. No significant change in target fixation number was observed between two social presence conditions. Practitioner Summary: This study employed an eye-tracking technique to examine the influence of social presence on eye movements in visual search tasks. Results clarified the variation mechanism and characteristics of oculomotor scanning induced by social presence in visual search.
Guidance for Development of a Flight Simulator Specification
2007-05-01
the simulated line of sight to the moon is less than one degree, and that the moon appears to move smoothly across the visual scene. The phase of the...Agencies have adopted the definition used by Optics Companies (this definition has also been adopted in this revision of the Air Force Guide...simulators that require tracking the target as it slews across the displayed scene, such as with air-to-ground or air-to-air combat tasks. Visual systems
Jackson, Margaret C.; Linden, David E. J.; Raymond, Jane E.
2012-01-01
We are often required to filter out distraction in order to focus on a primary task during which working memory (WM) is engaged. Previous research has shown that negative versus neutral distracters presented during a visual WM maintenance period significantly impair memory for neutral information. However, the contents of WM are often also emotional in nature. The question we address here is how incidental information might impact upon visual WM when both this and the memory items contain emotional information. We presented emotional versus neutral words during the maintenance interval of an emotional visual WM faces task. Participants encoded two angry or happy faces into WM, and several seconds into a 9 s maintenance period a negative, positive, or neutral word was flashed on the screen three times. A single neutral test face was presented for retrieval with a face identity that was either present or absent in the preceding study array. WM for angry face identities was significantly better when an emotional (negative or positive) versus neutral (or no) word was presented. In contrast, WM for happy face identities was not significantly affected by word valence. These findings suggest that the presence of emotion within an intervening stimulus boosts the emotional value of threat-related information maintained in visual WM and thus improves performance. In addition, we show that incidental events that are emotional in nature do not always distract from an ongoing WM task. PMID:23112782
Grasping with the eyes of your hands: hapsis and vision modulate hand preference.
Stone, Kayla D; Gonzalez, Claudia L R
2014-02-01
Right-hand preference has been demonstrated for visually guided reaching and grasping. Grasping, however, requires the integration of both visual and haptic cues. To what extent does vision influence hand preference for grasping? Is there a hand preference for haptically guided grasping? Two experiments were designed to address these questions. In Experiment 1, individuals were tested in a reaching-to-grasp task with vision (sighted condition) and with hapsis (blindfolded condition). Participants were asked to put together 3D models using building blocks scattered on a tabletop. The models were simple, composed of ten blocks of three different shapes. Starting condition (Vision-First or Hapsis-First) was counterbalanced among participants. Right-hand preference was greater in visually guided grasping but only in the Vision-First group. Participants who initially built the models while blindfolded (Hapsis-First group) used their right hand significantly less for the visually guided portion of the task. To investigate whether grasping using hapsis modifies subsequent hand preference, participants received an additional haptic experience in a follow-up experiment. While blindfolded, participants manipulated the blocks in a container for 5 min prior to the task. This additional experience did not affect right-hand use on visually guided grasping but had a robust effect on haptically guided grasping. Together, the results demonstrate first that hand preference for grasping is influenced by both vision and hapsis, and second, they highlight how flexible this preference could be when modulated by hapsis.
Ueki, Yoshino; Mima, Tatsuya; Nakamura, Kimihiro; Oga, Tatsuhide; Shibasaki, Hiroshi; Nagamine, Takashi; Fukuyama, Hidenao
2006-08-16
The Japanese writing system is unique in that it is composed of two different orthographies: kanji (morphograms) and kana (syllabograms). The retrieval of the visual orthographic representations of Japanese kanji is crucial to the process of writing in Japanese. We used low-frequency repetitive transcranial magnetic stimulation (rTMS) to clarify the functional relevance of the left and right posterior inferior temporal cortex (PITC) to this process in native Japanese speakers. The experimental paradigms included the mental recall of kanji, kana-to-kanji transcription, semantic judgment, oral reading, and copying of kana and kanji. The first two tasks require the visualization of the kanji image of the word. We applied 0.9 Hz rTMS (600 total pulses) over individually determined left or right PITC to suppress cortical activity and measured subsequent task performance. In the mental recall of kanji and kana-to-kanji transcription, rTMS over the left PITC prolonged reaction times (RTs), whereas rTMS over the right PITC reduced RTs. In the other tasks, which do not involve the mental visualization of kanji, rTMS over the left or right PITC had no effect on performance. These results suggest that the left PITC is crucial for the retrieval of the visual graphic representation of kanji. Furthermore, the right PITC may work to suppress the dominant left PITC in the neural network for kanji writing, which involves visual word recognition.
Comparison of helmet-mounted display designs in support of wayfinding
NASA Astrophysics Data System (ADS)
Kumagai, Jason K.; Massel, Lisa; Tack, David; Bossi, Linda
2003-09-01
The Canadian Soldier Information Requirements Technology Demonstration (SIREQ TD) soldier modernization research and development program has conducted experiments to help determine the types and amount of information needed to support wayfinding across a range of terrain environments, to identify the display modality (visual, auditory, or tactile) that conveys this information most effectively while minimizing conflict with other infantry tasks, and to optimize interface design. In this study, seven different visual helmet-mounted display (HMD) designs were developed based on soldier feedback from previous studies. The displays and an in-service compass condition were contrasted to investigate how the visual HMD interfaces influenced navigation performance. Displays varied with respect to their information content, frame of reference, point of view, and display features. Twelve male infantry soldiers used all eight experimental conditions to locate bearings to waypoints. From a constant location, participants were required to face waypoints presented at offset bearings of 25, 65, and 120 degrees. Performance measures included time to identify waypoints, accuracy, and head misdirection errors. Subjective measures of performance included ratings of ease of use, acceptance for land navigation, and mental demand. Comments were collected to identify likes, dislikes, and possible improvements required for HMDs. Results underlined the potential performance enhancement of GPS-based navigation with HMDs, the requirement for explicit directional information, the desirability of both analog and digital information, the performance benefits of an egocentric frame of reference, the merit of a forward field of view, and the desirability of a guide to aid landmarking. Implications for the information requirements and human factors design of HMDs for land-based navigational tasks are discussed.
Attention effects on the processing of task-relevant and task-irrelevant speech sounds and letters
Mittag, Maria; Inauri, Karina; Huovilainen, Tatu; Leminen, Miika; Salo, Emma; Rinne, Teemu; Kujala, Teija; Alho, Kimmo
2013-01-01
We used event-related brain potentials (ERPs) to study effects of selective attention on the processing of attended and unattended spoken syllables and letters. Participants were presented with syllables randomly occurring in the left or right ear and spoken by different voices, together with a concurrent foveal stream of consonant letters written in darker or lighter fonts. During the auditory phonological (AP) and non-phonological tasks, they responded to syllables in a designated ear that started with a vowel or were spoken by female voices, respectively. These syllables occurred infrequently among standard syllables starting with a consonant and spoken by male voices. During the visual phonological and non-phonological tasks, they responded to consonant letters whose names started with a vowel and to letters written in dark fonts, respectively. These letters occurred infrequently among standard letters with names starting with a consonant and written in light fonts. To examine genuine effects of attention and task on ERPs not overlapped by ERPs associated with target processing or deviance detection, these effects were studied only in ERPs to auditory and visual standards. During selective listening to syllables in a designated ear, ERPs to the attended syllables were negatively displaced during both the phonological and non-phonological auditory tasks. Selective attention to letters elicited an early negative displacement and a subsequent positive displacement (Pd) of ERPs to attended letters; the Pd was larger during the visual phonological than the non-phonological task, suggesting a higher demand for attention during the visual phonological task. Active suppression of unattended speech during the AP and non-phonological auditory tasks, and during the visual phonological task, was suggested by a rejection positivity (RP) to unattended syllables. We also found evidence for suppression of the processing of task-irrelevant visual stimuli in visual ERPs during auditory tasks involving left-ear syllables.
PMID:24348324
Postural adjustment errors during lateral step initiation in older and younger adults
Sparto, Patrick J.; Fuhrman, Susan I.; Redfern, Mark S.; Perera, Subashan; Jennings, J. Richard; Furman, Joseph M.
2016-01-01
The purpose was to examine age differences and the effect of varying levels of step response inhibition on the performance of a voluntary lateral step initiation task. Seventy older adults (70-94 y) and twenty younger adults (21-58 y) performed visually cued step initiation conditions based on the direction and spatial location of arrows, ranging from a simple choice reaction time task to a perceptual inhibition task that included incongruous cues about which direction to step (e.g., a left-pointing arrow appearing on the right side of a monitor). Postural adjustment errors and step latencies were recorded from the vertical ground reaction forces exerted by the stepping leg. Compared with younger adults, older adults demonstrated greater variability in step behavior, generated more postural adjustment errors during conditions requiring inhibition, and had greater step initiation latencies that increased more than those of younger adults as the inhibition requirements of the condition became greater. Step task performance was related to clinical balance test performance more than to executive function task performance. PMID:25595953
Reschechtko, Sasha; Zatsiorsky, Vladimir M.; Latash, Mark L.
2016-01-01
Manipulating objects with the hands requires the accurate production of resultant forces including shear forces; effective control of these shear forces also requires the production of internal forces normal to the surface of the object(s) being manipulated. In the present study, we investigated multi-finger synergies stabilizing shear and normal components of force, as well as drifts in both components of force, during isometric pressing tasks requiring a specific magnitude of shear force production. We hypothesized that shear and normal forces would evolve similarly in time, and also show similar stability properties as assessed by the decomposition of inter-trial variance within the uncontrolled manifold hypothesis. Healthy subjects were required to accurately produce total shear and total normal forces with four fingers of the hand during a steady-state force task (with and without visual feedback) and a self-paced force pulse task. The two force components showed similar time profiles during both shear force pulse production and unintentional drift induced by turning the visual feedback off. Only the explicitly instructed components of force, however, were stabilized with multi-finger synergies. No force-stabilizing synergies and no anticipatory synergy adjustments were seen for the normal force in shear force production trials. These unexpected qualitative differences in the control of the two force components, which are produced by some of the same muscles and show a high degree of temporal coupling, are interpreted within the theory of control with referent coordinates for salient variables. These observations suggest the existence of two classes of neural variables: one that translates into shifts of referent coordinates and defines changes in magnitude of salient variables, and the other controlling gains in back-coupling loops that define stability of the salient variables. Only the former are shared between the explicit and implicit task components.
PMID:27601252
Bokde, Arun L W; Karmann, Michaela; Teipel, Stefan J; Born, Christine; Lieb, Martin; Reiser, Maximilian F; Möller, Hans-Jürgen; Hampel, Harald
2009-04-01
Visual perception has been shown to be altered in Alzheimer disease (AD) patients, and it is associated with decreased cognitive function. Galantamine is an active cholinergic agent, which has been shown to improve cognition in mild to moderate AD patients. This study examined brain activation in a group of mild AD patients after a 3-month open-label treatment with galantamine. The objective was to examine the changes in brain activation due to treatment. There were two visual perception tasks: a face-matching task to test activation along the ventral visual pathway, and a location-matching task to test neuronal function along the dorsal pathway. Brain activation was measured using functional magnetic resonance imaging. Five mild AD patients took part in the study. There were no differences in task performance or in the cognitive scores of the Consortium to Establish a Registry for Alzheimer's Disease battery before and after treatment. In the location-matching task, we found a statistically significant decrease in activation along the dorsal visual pathway after galantamine treatment. A previous study found that AD patients had higher activation in the location-matching task compared with healthy controls. There were no differences in activation for the face-matching task after treatment. Our data indicate that treatment with galantamine leads to more efficient visual processing of stimuli or changes the compensatory mechanism in the AD patients. A visual perception task recruiting the dorsal visual system may be useful as a biomarker of treatment effects.
Goal-directed action is automatically biased towards looming motion
Moher, Jeff; Sit, Jonathan; Song, Joo-Hyun
2014-01-01
It is known that looming motion can capture attention regardless of an observer’s intentions. Real-world behavior, however, frequently involves not just attentional selection, but selection for action. Thus, it is important to understand the impact of looming motion on goal-directed action to gain a broader perspective on how stimulus properties bias human behavior. We presented participants with a visually-guided reaching task in which they pointed to a target letter presented among non-target distractors. On some trials, one of the pre-masks at the location of the upcoming search objects grew rapidly in size, creating the appearance of a “looming” target or distractor. Even though looming motion did not predict the target location, the time required to reach to the target was shorter when the target loomed compared to when a distractor loomed. Furthermore, reach movement trajectories were pulled towards the location of a looming distractor when one was present, a pull that was greater still when the looming motion was on a collision path with the participant. We also contrast reaching data with data from a similarly designed visual search task requiring keypress responses. This comparison underscores the sensitivity of visually-guided reaching data, as some experimental manipulations, such as looming motion path, affected reach trajectories but not keypress measures. Together, the results demonstrate that looming motion biases visually-guided action regardless of an observer’s current behavioral goals, affecting not only the time required to reach to targets but also the path of the observer’s hand movement itself. PMID:25159287
Man-in-the-loop study of filtering in airborne head tracking tasks
NASA Technical Reports Server (NTRS)
Lifshitz, S.; Merhav, S. J.
1992-01-01
A human-factors study is conducted of problems caused by vibration during the use of a helmet-mounted display (HMD) in tracking tasks, in which the major factors are target motion and head vibration. A method is proposed for improving aiming accuracy in such tracking tasks on the basis of (1) head-motion measurement and (2) shifting the reticle in the HMD in ways that inhibit much of the involuntary apparent motion of the reticle relative to the target, as well as the unwanted motion of the teleoperated device. The HMD inherently furnishes the visual feedback required by this scheme.
Visual scanning behavior and pilot workload
NASA Technical Reports Server (NTRS)
Harris, R. L., Sr.; Tole, J. R.; Stephens, A. T.; Ephrath, A. R.
1981-01-01
An experimental paradigm and a set of results are presented that demonstrate a relationship between the level of performance on a skilled man-machine control task, the skill of the operator, the level of mental difficulty induced by an additional task imposed on the basic control task, and visual scanning performance. During a constant, simulated piloting task, visual scanning of instruments was found to vary as a function of the level of difficulty of a verbal mental loading task. The average dwell time of each fixation on the pilot's primary instrument increased as a function of the estimated skill level of the pilots, with novices being affected by the loading task much more than the experts. The results suggest that visual scanning of instruments in a control task may be an indicator of both workload and skill.
The effect of increased monitoring load on vigilance performance using a simulated radar display.
DOT National Transportation Integrated Search
1977-07-01
The present study examined the extent to which level of target density influences the ability to sustain attention to a complex monitoring task requiring only a detection response to simple stimulus change. The visual display was designed to approxim...
Thinking Graphically: Connecting Vision and Cognition during Graph Comprehension
ERIC Educational Resources Information Center
Ratwani, Raj M.; Trafton, J. Gregory; Boehm-Davis, Deborah A.
2008-01-01
Task analytic theories of graph comprehension account for the perceptual and conceptual processes required to extract specific information from graphs. Comparatively, the processes underlying information integration have received less attention. We propose a new framework for information integration that highlights visual integration and cognitive…
Eye-Tracking Provides a Sensitive Measure of Exploration Deficits After Acute Right MCA Stroke
Delazer, Margarete; Sojer, Martin; Ellmerer, Philipp; Boehme, Christian; Benke, Thomas
2018-01-01
The eye-tracking study aimed at assessing spatial biases in visual exploration in patients after acute right MCA (middle cerebral artery) stroke. Patients affected by unilateral neglect show less functional recovery and experience severe difficulties in everyday life. Thus, accurate diagnosis is essential, and specific treatment is required. Early assessment is of high importance as rehabilitative interventions are more effective when applied soon after stroke. Previous research has shown that deficits may be overlooked when classical paper-and-pencil tasks are used for diagnosis. Conversely, eye-tracking allows direct monitoring of visual exploration patterns. We hypothesized that the analysis of eye-tracking provides more sensitive measures for spatial exploration deficits after right middle cerebral artery stroke. Twenty-two patients with right MCA stroke (median 5 days after stroke) and 28 healthy controls were included. Lesions were confirmed by MRI/CCT. Groups performed comparably in the Mini-Mental State Examination (patients and controls median 29) and in a screening of executive functions. Eleven patients scored at ceiling in neglect screening tasks, 11 showed minimal to severe signs of unilateral visual neglect. An overlap plot based on MRI and CCT imaging showed lesions in the temporo-parieto-frontal cortex, basal ganglia, and adjacent white matter tracts. Visual exploration was evaluated in two eye-tracking tasks, one assessing free visual exploration of photographs, the other visual search using symbols and letters. An index of fixation asymmetries proved to be a sensitive measure of spatial exploration deficits. Both patient groups showed a marked exploration bias to the right when looking at complex photographs. A single case analysis confirmed that also most of those patients who showed no neglect in screening tasks performed outside the range of controls in free exploration.
The analysis of patients’ scoring at ceiling in neglect screening tasks is of special interest, as possible deficits may be overlooked and thus remain untreated. Our findings are in line with other studies suggesting considerable limitations of laboratory screening procedures to fully appreciate the occurrence of neglect symptoms. Future investigations are needed to explore the predictive value of the eye-tracking index and its validity in everyday situations.
Assessment of a head-mounted miniature monitor
NASA Technical Reports Server (NTRS)
Hale, J. P., II
1992-01-01
Two experiments were conducted to assess the capabilities and limitations of the Private Eye, a miniature, head-mounted monitor. The first experiment compared the Private Eye with a cathode ray tube (CRT) and hard copy in both a constrained and an unconstrained work envelope. The task was a simulated maintenance and assembly task that required frequent reference to the displayed information. A main effect of presentation media indicated faster placement times using the CRT as compared with hard copy. There were no significant differences between the Private Eye and either the CRT or hard copy for identification, placement, or total task times. The goal of the second experiment was to determine the effects of various local visual parameters on the ability of the user to accurately perceive the information on the Private Eye. The task was an interactive video game. No significant performance differences were found under either bright or dark ambient illumination, nor with visually simple versus complex task backgrounds. Glare reflected off the bezel surrounding the monitor did degrade performance. It was concluded that this head-mounted, miniature monitor could serve a useful role for in situ operations, especially in microgravity environments.
Task-relevant perceptual features can define categories in visual memory too.
Antonelli, Karla B; Williams, Carrick C
2017-11-01
Although Konkle, Brady, Alvarez, and Oliva (2010, Journal of Experimental Psychology: General, 139(3), 558) claim that visual long-term memory (VLTM) is organized on underlying conceptual, not perceptual, information, visual memory results from visual search tasks are not well explained by this theory. We hypothesized that when viewing an object, any task-relevant visual information is critical to the organizational structure of VLTM. In two experiments, we examined the organization of VLTM by measuring the amount of retroactive interference created by objects possessing different combinations of task-relevant features. Based on task instructions, only the conceptual category was task relevant or both the conceptual category and a perceptual object feature were task relevant. Findings indicated that when made task relevant, perceptual object feature information, along with conceptual category information, could affect memory organization for objects in VLTM. However, when perceptual object feature information was task irrelevant, it did not contribute to memory organization; instead, memory defaulted to being organized around conceptual category information. These findings support the theory that a task-defined organizational structure is created in VLTM based on the relevance of particular object features and information.
NASA Technical Reports Server (NTRS)
Botts, Michael E.; Phillips, Ron J.; Parker, John V.; Wright, Patrick D.
1992-01-01
Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. A SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.
Fradcourt, B; Peyrin, C; Baciu, M; Campagne, A
2013-10-01
Previous studies on the visual processing of emotional stimuli have revealed a preference for specific visual spatial frequencies (high spatial frequency, HSF; low spatial frequency, LSF) according to task demands. Most of these studies used face stimuli and focused on appraising the emotional state of others. The present behavioral study investigates the relative role of spatial frequencies in processing emotional natural scenes during two explicit cognitive appraisal tasks: one emotional, based on the self-emotional experience, and one motivational, based on the tendency to action. Our results suggest that HSF information was the most relevant for rapidly identifying the self-emotional experience (unpleasant, pleasant, and neutral), whereas LSF information was required to rapidly identify the tendency to action (avoidance, approach, and no action). The tendency to action based on LSF analysis showed a priority for unpleasant stimuli, whereas the identification of emotional experience based on HSF analysis showed a priority for pleasant stimuli. The present study confirms the interest of considering both the emotional and motivational characteristics of visual stimuli.
BiSet: Semantic Edge Bundling with Biclusters for Sensemaking.
Sun, Maoyuan; Mi, Peng; North, Chris; Ramakrishnan, Naren
2016-01-01
Identifying coordinated relationships is an important task in data analytics. For example, an intelligence analyst might want to discover three suspicious people who all visited the same four cities. Existing techniques that display individual relationships, such as between lists of entities, require repetitious manual selection and significant mental aggregation in cluttered visualizations to find coordinated relationships. In this paper, we present BiSet, a visual analytics technique to support interactive exploration of coordinated relationships. In BiSet, we model coordinated relationships as biclusters and algorithmically mine them from a dataset. Then, we visualize the biclusters in context as bundled edges between sets of related entities. Thus, bundles enable analysts to infer task-oriented semantic insights about potentially coordinated activities. We treat bundles as first-class objects and add a new layer, "in-between", to contain these bundle objects. Based on this, bundles serve to organize entities represented in lists and visually reveal their membership. Users can interact with edge bundles to organize related entities, and vice versa, for sensemaking purposes. With a usage scenario, we demonstrate how BiSet supports the exploration of coordinated relationships in text analytics.
Perceptual training yields rapid improvements in visually impaired youth
Nyquist, Jeffrey B.; Lappin, Joseph S.; Zhang, Ruyuan; Tadin, Duje
2016-01-01
Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in the absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be underutilized by visually impaired youth and that this underutilization can be improved with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggests that well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events. PMID:27901026
Models Extracted from Text for System-Software Safety Analyses
NASA Technical Reports Server (NTRS)
Malin, Jane T.
2010-01-01
This presentation describes extraction and integration of requirements information and safety information in visualizations to support early review of completeness, correctness, and consistency of lengthy and diverse system safety analyses. Software tools have been developed and extended to perform the following tasks: 1) extract model parts and safety information from text in interface requirements documents, failure modes and effects analyses and hazard reports; 2) map and integrate the information to develop system architecture models and visualizations for safety analysts; and 3) provide model output to support virtual system integration testing. This presentation illustrates the methods and products with a rocket motor initiation case.
Dye-enhanced visualization of rat whiskers for behavioral studies.
Rigosa, Jacopo; Lucantonio, Alessandro; Noselli, Giovanni; Fassihi, Arash; Zorzin, Erik; Manzino, Fabrizio; Pulecchi, Francesca; Diamond, Mathew E
2017-06-14
Visualization and tracking of the facial whiskers is required in an increasing number of rodent studies. Although many approaches have been employed, only high-speed videography has proven adequate for measuring whisker motion and deformation during interaction with an object. However, whisker visualization and tracking is challenging for multiple reasons, primary among them the low contrast of the whisker against its background. Here, we demonstrate a fluorescent dye method suitable for visualization of one or more rat whiskers. The process makes the dyed whisker(s) easily visible against a dark background. The coloring does not influence the behavioral performance of rats trained on a vibrissal vibrotactile discrimination task, nor does it affect the whiskers' mechanical properties.
Neuronal effects of auditory distraction on visual attention
Smucny, Jason; Rojas, Donald C.; Eichman, Lindsay C.; Tregellas, Jason R.
2013-01-01
Selective attention in the presence of distraction is a key aspect of healthy cognition. The underlying neurobiological processes have not, however, been well characterized functionally. In the present study, we used functional magnetic resonance imaging to determine how ecologically relevant distracting noise affects cortical activity in 27 healthy adults during two versions of the visual sustained attention to response task (SART) that differ in difficulty (and thus attentional load). A significant condition (noise or silence) by task (easy or difficult) interaction was observed in several areas, including dorsolateral prefrontal cortex (DLPFC), fusiform gyrus (FG), posterior cingulate (PCC), and pre-supplementary motor area (PreSMA). Post-hoc analyses of interaction effects revealed deactivation of DLPFC, PCC, and PreSMA during distracting noise under conditions of low attentional load, and activation of FG and PCC during distracting noise under conditions of high attentional load. These results suggest that distracting noise may help alert subjects to task goals and reduce demands on cortical resources during tasks of low difficulty and attentional load. Under conditions of higher load, however, additional cognitive resources may be required in the presence of noise. PMID:23291265
A shared representation of order between encoding and recognition in visual short-term memory.
Kalm, Kristjan; Norris, Dennis
2017-07-15
Many complex tasks require people to bind individual events into a sequence that can be held in short-term memory (STM). For this purpose, information about the order of the individual events in the sequence needs to be maintained in an active and accessible form in STM over a period of a few seconds. Here we investigated how temporal order information is shared between the presentation and response phases of an STM task. We trained a classification algorithm on the fMRI activity patterns from the presentation phase of the STM task to predict the order of the items during the subsequent recognition phase. While voxels in a number of brain regions represented positional information during either the presentation or the recognition phase, only voxels in the lateral prefrontal cortex (PFC) and the anterior temporal lobe (ATL) represented position consistently across task phases. A shared positional code in the ATL might reflect verbal recoding of visual sequences to facilitate the maintenance of order information over several seconds.
Divided attention disrupts perceptual encoding during speech recognition.
Mattys, Sven L; Palmer, Shekeila D
2015-03-01
Performing a secondary task while listening to speech has a detrimental effect on speech processing, but the locus of the disruption within the speech system is poorly understood. Recent research has shown that cognitive load imposed by a concurrent visual task increases dependency on lexical knowledge during speech processing, but it does not affect lexical activation per se. This suggests that "lexical drift" under cognitive load occurs either as a post-lexical bias at the decisional level or as a secondary consequence of reduced perceptual sensitivity. This study aimed to adjudicate between these alternatives using a forced-choice task that required listeners to identify noise-degraded spoken words with or without the addition of a concurrent visual task. Adding cognitive load increased the likelihood that listeners would select a word acoustically similar to the target even though its frequency was lower than that of the target. Thus, there was no evidence that cognitive load led to a high-frequency response bias. Rather, cognitive load seems to disrupt sublexical encoding, possibly by impairing perceptual acuity at the auditory periphery.
Visual Search in ASD: Instructed Versus Spontaneous Local and Global Processing.
Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2016-09-01
Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual processing in ASD versus TD children. Experiment 1 revealed both groups to be equally sensitive to the absence or presence of a distractor, regardless of the type of target or type of distractor. Experiment 2 revealed a differential effect of task instruction for ASD compared to TD, regardless of the type of target. Taken together, these results stress the importance of task factors in the study of local-global visual processing in ASD.
Benefits of interhemispheric integration on the Japanese Kana script-matching tasks.
Yoshizaki, K; Tsuji, Y
2000-02-01
We tested Banich's hypothesis that the benefits of bihemispheric processing are enhanced as task complexity increases, overcoming some procedural shortcomings of previous studies by using Japanese Kana script-matching tasks. In Exp. 1, 20 right-handed subjects were given the physical-identity task (Katakana-Katakana script matching) and the name-identity task (Katakana-Hiragana script matching). On both tasks, a pair of Kana scripts was tachistoscopically presented in the left, right, or bilateral visual fields. Distractor stimuli were also presented with the target Kana scripts on both tasks to equate the processing load between the hemispheres. Analysis showed that, while a bilateral visual-field advantage was found on the name-identity task, a unilateral visual-field advantage was found on the physical-identity task, suggesting that, as the computational complexity of the encoding stage increased, the benefits of bilateral hemispheric processing increased. In Exp. 2, 16 right-handed subjects were given the same physical-identity task as in Exp. 1, except that Hiragana scripts were used as distractors instead of digits to enhance task difficulty. Analysis showed no differences in performance between the unilateral and bilateral visual fields. Taken together, the results of the physical-identity tasks in Exps. 1 and 2 indicate that enhancing task demand at the stage of ignoring distractors made the unilateral visual-field advantage obtained in Exp. 1 disappear in Exp. 2. These results supported Banich's hypothesis.
Fengler, Ineke; Nava, Elena; Röder, Brigitte
2015-01-01
Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4-week follow-up) on an audio-visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio-visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination emerged merely as a consequence of exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks unrelated (i.e., tactile) to the later-tested abilities. The two visually deprived groups showed enhanced affective prosodic discrimination in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained at follow-up. By contrast, no changes were observed in affective facial expression discrimination or in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which may persist over longer durations.
Vision-related problems among the workers engaged in jewellery manufacturing.
Salve, Urmi Ravindra
2015-01-01
The American Optometric Association defines Computer Vision Syndrome (CVS) as a "complex of eye and vision problems related to near work which are experienced during or related to computer use." This happens when the visual demand of the task exceeds the visual ability of the user. Although these problems were initially attributed to computer-related activities, similar problems have subsequently been reported during any near-point task. Jewellery manufacturing involves precision designs and the setting of tiny metals and stones, which requires high visual attention and mental concentration and is often near-point work. It is therefore expected that workers engaged in jewellery manufacturing may also experience CVS-like symptoms. Keeping the above in mind, this study was taken up (1) to identify the prevalence of CVS-like symptoms among jewellery manufacturing workers and compare them with those of workers at computer workstations, and (2) to ascertain whether such symptoms involve any permanent vision-related problems. Case control study. The study was carried out in the Zaveri Bazaar region and at an IT-enabled organization in Mumbai. It involved the identification of CVS symptoms using a questionnaire from the Eye Strain Journal, ophthalmological check-ups, and measurement of spontaneous eye-blink rate. The data obtained from jewellery manufacturing were compared with the data of subjects engaged in computer work and with data available in the literature. Comparative inferential statistics were used. Results showed that the visual demands of the tasks carried out in jewellery manufacturing were much higher than those of computer-related work.
Parsa, Behnoosh; Terekhov, Alexander; Zatsiorsky, Vladimir M; Latash, Mark L
2017-02-01
We address the nature of unintentional changes in performance in two papers. This first paper tested a hypothesis that unintentional changes in performance variables during continuous tasks without visual feedback are due to two processes. First, there is a drift of the referent coordinate for the salient performance variable toward the actual coordinate of the effector. Second, there is a drift toward minimum of a cost function. We tested this hypothesis in four-finger isometric pressing tasks that required the accurate production of a combination of total moment and total force with natural and modified finger involvement. Subjects performed accurate force-moment production tasks under visual feedback, and then visual feedback was removed for some or all of the salient variables. Analytical inverse optimization was used to compute a cost function. Without visual feedback, both force and moment drifted slowly toward lower absolute magnitudes. Over 15 s, the force drop could reach 20% of its initial magnitude while moment drop could reach 30% of its initial magnitude. Individual finger forces could show drifts toward both higher and lower forces. The cost function estimated using the analytical inverse optimization reduced its value as a consequence of the drift. We interpret the results within the framework of hierarchical control with referent spatial coordinates for salient variables at each level of the hierarchy combined with synergic control of salient variables. The force drift is discussed as a natural relaxation process toward states with lower potential energy in the physical (physiological) system involved in the task.
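The reported drift of roughly 20% of initial force over 15 s is consistent with a simple exponential relaxation of the referent coordinate toward the actual effector coordinate; the sketch below illustrates this with assumed parameter values (stiffness, time constant), not the authors' fitted model:

```python
# Illustrative simulation (not the authors' model): unintentional force
# drift modeled as exponential relaxation of a referent coordinate toward
# the actual effector coordinate. Parameters are chosen so that total
# force drops roughly 20% over 15 s, as reported above.
import numpy as np

k = 1.0            # apparent stiffness (arbitrary units)
x_actual = 0.0     # effector coordinate (isometric task, fixed)
r0 = 20.0          # initial referent coordinate -> initial force = k*(r0 - x)
tau = 67.0         # relaxation time constant in seconds (assumed value)

t = np.linspace(0.0, 15.0, 151)
r = x_actual + (r0 - x_actual) * np.exp(-t / tau)   # referent drifts toward x
force = k * (r - x_actual)

drop = 1.0 - force[-1] / force[0]
print(f"force drop after 15 s: {drop:.0%}")
```

The relaxation interpretation matches the paper's framing of drift as movement toward lower potential energy in the physical system involved in the task.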
Psychoacoustical Measures in Individuals with Congenital Visual Impairment.
Kumar, Kaushlendra; Thomas, Teenu; Bhat, Jayashree S; Ranjan, Rajesh
2017-12-01
In individuals with congenital visual impairment, one modality (vision) is impaired, and this impairment is compensated for by the other sensory modalities. There is evidence that visually impaired individuals perform better than normally sighted individuals in auditory tasks such as localization, auditory memory, verbal memory, auditory attention, and other behavioural tasks. The current study aimed to compare temporal resolution, frequency resolution, and speech perception in noise between individuals with congenital visual impairment and normally sighted individuals. These abilities were measured using MDT, GDT, DDT, SRDT, and SNR50. Twelve participants with congenital visual impairment, aged 18 to 40 years, and an equal number of normally sighted participants took part. All participants had normal hearing sensitivity and normal middle-ear functioning. Individuals with visual impairment showed superior thresholds on MDT, SRDT, and SNR50 compared with normally sighted individuals. This may be due to the complexity of the tasks: MDT, SRDT, and SNR50 are more complex tasks than GDT and DDT. Individuals with visual impairment showed superior performance in auditory processing and speech perception on complex auditory perceptual tasks.
Semantic and Visual Memory After Alcohol Abuse.
ERIC Educational Resources Information Center
Donat, Dennis C.
1986-01-01
Compared the relative performance of 40 patients with a history of alcohol abuse on tasks of short-term semantic and visual memory. Performance on the visual memory tasks was impaired significantly relative to the semantic memory task in a within-subjects analysis of variance. Semantic memory was unimpaired. (Author/ABB)
3D Visual Tracking of an Articulated Robot in Precision Automated Tasks
Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.
2017-01-01
The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to achieve the two requirements together by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that can automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) together with the CHT algorithm to find the proper colour of the tracked target. The target was attached to the end-effector of a six-degree-of-freedom (DOF) robot performing a pick-and-place task. Two cooperating eye-to-hand cameras, each with an image-averaging filter, are used to obtain clear and steady images. The paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during object tracking. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix.
The results obtained show the applicability of the proposed approach, which tracks the moving robot with an overall tracking error of 0.25 mm, and the effectiveness of the CRCHT technique, which saves up to 60% of the overall time required for image processing.
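A toy version of the circular Hough transform at the core of this tracking pipeline can be sketched as follows; this minimal accumulator on a synthetic edge image is an illustration only and omits the paper's enhancements (colour selection, CRCHT windowing, depth updating):

```python
# Minimal circular Hough transform: each edge pixel votes for all centre
# candidates lying at a fixed radius from it; the accumulator peak gives
# the detected circle centre.
import numpy as np

def hough_circle(edge, radius):
    """Accumulate centre votes for circles of a fixed radius."""
    h, w = edge.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    ys, xs = np.nonzero(edge)
    for y, x in zip(ys, xs):
        # Candidate centres at distance `radius` from this edge pixel.
        ccy = np.round(y - radius * np.sin(thetas)).astype(int)
        ccx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (ccy >= 0) & (ccy < h) & (ccx >= 0) & (ccx < w)
        np.add.at(acc, (ccy[ok], ccx[ok]), 1)   # unbuffered accumulation
    return acc

# Synthetic edge image: a circle of radius 12 centred at (40, 30).
edge = np.zeros((80, 80), dtype=bool)
ang = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
edge[np.round(40 + 12 * np.sin(ang)).astype(int),
     np.round(30 + 12 * np.cos(ang)).astype(int)] = True

acc = hough_circle(edge, radius=12)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(f"detected centre: ({cy}, {cx})")
```

Restricting the accumulator to a small, dynamically placed search window, as the CRCHT technique does, is what buys the reported reduction in image-processing time.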
Effects of speech intelligibility level on concurrent visual task performance.
Payne, D G; Peters, L J; Birkmire, D P; Bonto, M A; Anastasi, J S; Wenger, M J
1994-09-01
Four experiments were performed to determine if changes in the level of speech intelligibility in an auditory task have an impact on performance in concurrent visual tasks. The auditory task used in each experiment was a memory search task in which subjects memorized a set of words and then decided whether auditorily presented probe items were members of the memorized set. The visual tasks used were an unstable tracking task, a spatial decision-making task, a mathematical reasoning task, and a probability monitoring task. Results showed that performance on the unstable tracking and probability monitoring tasks was unaffected by the level of speech intelligibility on the auditory task, whereas accuracy in the spatial decision-making and mathematical processing tasks was significantly worse at low speech intelligibility levels. The findings are interpreted within the framework of multiple resource theory.
A comparison of tracking with visual and kinesthetic-tactual displays
NASA Technical Reports Server (NTRS)
Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.
1981-01-01
Recent research on manual tracking with a kinesthetic-tactual (KT) display suggests that under appropriate conditions it may be an effective means of providing visual workload relief. In order to better understand how KT tracking differs from visual tracking, both a critical tracking task and stationary single-axis tracking tasks were conducted with and without velocity quickening. On the critical tracking task, the visual displays were superior; however, the KT quickened display was approximately equal to the visual unquickened display. Mean squared error scores in the stationary tracking tasks for the visual and KT displays were approximately equal in the quickened conditions, and the describing functions were very similar. In the unquickened conditions, the visual display was superior. Subjects using the unquickened KT display exhibited a low frequency lead-lag that may be related to sensory adaptation.
Abu-Akel, A; Reniers, R L E P; Wood, S J
2016-09-01
Patients with schizophrenia show impairments in working memory and visual-spatial processing, but little is known about the dynamic interplay between the two. To provide insight into this question, we examined the effect of positive and negative symptom expressions in healthy adults on perceptual processing while participants concurrently performed a working-memory task requiring the allocation of various degrees of cognitive resources. The effect of positive and negative symptom expressions in healthy adults (N = 91) on perceptual processing was examined in a dual-task paradigm of visual-spatial working memory (VSWM) under three conditions of cognitive load: a baseline condition (with no concurrent working-memory demand), a low VSWM load condition, and a high VSWM load condition. Participants overall performed more efficiently (i.e., faster) with increasing cognitive load. This facilitation in performance was unrelated to symptom expressions. However, participants with high-negative, low-positive symptom expressions were less accurate in the low VSWM load condition compared to the baseline and high VSWM load conditions. Attenuated, subclinical expressions of psychosis thus affect cognitive performance of the kind that is impaired in schizophrenia. The "resource limitations hypothesis" may explain the performance of the participants with high-negative symptom expressions. The dual-task of visual-spatial processing and working memory may be beneficial for assessing the cognitive phenotype of individuals at high risk for schizophrenia spectrum disorders.
Truppa, Valentina; Carducci, Paola; Trapanese, Cinzia; Hanus, Daniel
2015-01-01
Most experimental paradigms to study visual cognition in humans and non-human species are based on discrimination tasks involving the choice between two or more visual stimuli. To this end, different types of stimuli and procedures for stimuli presentation are used, which highlights the necessity to compare data obtained with different methods. The present study assessed whether, and to what extent, capuchin monkeys’ ability to solve a size discrimination problem is influenced by the type of procedure used to present the problem. Capuchins’ ability to generalise knowledge across different tasks was also evaluated. We trained eight adult tufted capuchin monkeys to select the larger of two stimuli of the same shape and different sizes by using pairs of food items (Experiment 1), computer images (Experiment 1) and objects (Experiment 2). Our results indicated that monkeys achieved the learning criterion faster with food stimuli compared to both images and objects. They also required consistently fewer trials with objects than with images. Moreover, female capuchins had higher levels of acquisition accuracy with food stimuli than with images. Finally, capuchins did not immediately transfer the solution of the problem acquired in one task condition to the other conditions. Overall, these findings suggest that – even in relatively simple visual discrimination problems where a single perceptual dimension (i.e., size) has to be judged – learning speed strongly depends on the mode of presentation.
Miller, J
1991-03-01
When subjects must respond to a relevant center letter and ignore irrelevant flanking letters, the identities of the flankers produce a response compatibility effect, indicating that they are processed semantically at least to some extent. Because this effect decreases as the separation between target and flankers increases, the effect appears to result from imperfect early selection (attenuation). In the present experiments, several features of the focused attention paradigm were examined, in order to determine whether they might produce the flanker compatibility effect by interfering with the operation of an early selective mechanism. Specifically, the effect might be produced because the paradigm requires subjects to (1) attend exclusively to stimuli within a very small visual angle, (2) maintain a long-term attentional focus on a constant display location, (3) focus attention on an empty display location, (4) exclude onset-transient flankers from semantic processing, or (5) ignore some of the few stimuli in an impoverished visual field. The results indicate that none of these task features is required for semantic processing of unattended stimuli to occur. In fact, visual angle is the only one of the task features that clearly has a strong influence on the size of the flanker compatibility effect. The invariance of the flanker compatibility effect across these conditions suggests that the mechanism for early selection rarely, if ever, completely excludes unattended stimuli from semantic analysis. In addition, it shows that selective mechanisms are relatively insensitive to several factors that might be expected to influence them, thereby supporting the view that spatial separation has a special status for visual selective attention.
The functional neuroanatomy of multitasking: combining dual tasking with a short term memory task.
Deprez, Sabine; Vandenbulcke, Mathieu; Peeters, Ron; Emsell, Louise; Amant, Frederic; Sunaert, Stefan
2013-09-01
Insight into the neural architecture of multitasking is crucial when investigating the pathophysiology of multitasking deficits in clinical populations. Presently, little is known about how the brain combines dual-tasking with a concurrent short-term memory task, despite the relevance of this mental operation in daily life and the frequency of complaints related to this process in disease. In this study we aimed to examine how the brain responds when a memory task is added to dual-tasking. Thirty-three right-handed healthy volunteers (20 females, mean age 39.9 ± 5.8) were examined with functional brain imaging (fMRI). The paradigm consisted of two cross-modal single tasks (a visual and an auditory temporal same-different task with short delay), a dual-task combining both single tasks simultaneously, and a multi-task condition combining the dual-task with an additional short-term memory task (a temporal same-different visual task with long delay). Dual-tasking compared to both individual visual and auditory single tasks activated a predominantly right-sided fronto-parietal network and the cerebellum. When the additional short-term memory task was added, a larger and more bilateral fronto-parietal network was recruited. We found enhanced activity during multitasking in components of the network that were already involved in dual-tasking, suggesting increased working memory demands, as well as recruitment of multitask-specific components, including areas likely to be involved in the online holding of visual stimuli in short-term memory such as occipito-temporal cortex. These results confirm concurrent neural processing of a visual short-term memory task during dual-tasking and provide evidence for an effective fMRI multitasking paradigm.
Griffis, Joseph C.; Elkhetali, Abdurahman S.; Burge, Wesley K.; Chen, Richard H.; Visscher, Kristina M.
2015-01-01
Attention facilitates the processing of task-relevant visual information and suppresses interference from task-irrelevant information. Modulations of neural activity in visual cortex depend on attention, and likely result from signals originating in fronto-parietal and cingulo-opercular regions of cortex. Here, we tested the hypothesis that attentional facilitation of visual processing is accomplished in part by changes in how brain networks involved in attentional control interact with sectors of V1 that represent different retinal eccentricities. We measured the strength of background connectivity between fronto-parietal and cingulo-opercular regions and different eccentricity sectors in V1 using functional MRI data that were collected while participants performed tasks involving attention to either a centrally presented visual stimulus or a simultaneously presented auditory stimulus. We found that when the visual stimulus was attended, background connectivity between V1 and the left frontal eye fields (FEF), left intraparietal sulcus (IPS), and right IPS varied strongly across eccentricity sectors in V1, such that foveal sectors were more strongly connected than peripheral sectors. This retinotopic gradient was weaker when the visual stimulus was ignored, indicating that it was driven by attentional effects. Greater task-driven differences between foveal and peripheral sectors in background connectivity to these regions were associated with better performance on the visual task and faster response times on correct trials. These findings are consistent with the notion that attention drives the configuration of task-specific functional pathways that enable the prioritized processing of task-relevant visual information, and show that the prioritization of visual information by attentional processes may be encoded in the retinotopic gradient of connectivity between V1 and fronto-parietal regions.
Owsley, Cynthia
2013-09-20
Older adults commonly report difficulties in visual tasks of everyday living that involve visual clutter, secondary task demands, and time sensitive responses. These difficulties often cannot be attributed to visual sensory impairment. Techniques for measuring visual processing speed under divided attention conditions and among visual distractors have been developed and have established construct validity in that those older adults performing poorly in these tests are more likely to exhibit daily visual task performance problems. Research suggests that computer-based training exercises can increase visual processing speed in older adults and that these gains transfer to enhancement of health and functioning and a slowing in functional and health decline as people grow older.
Advanced Multimodal Solutions for Information Presentation
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Godfroy-Cooper, Martine
2018-01-01
High-workload, fast-paced, and degraded sensory environments are the likeliest candidates to benefit from multimodal information presentation. For example, during EVA (Extra-Vehicular Activity) and telerobotic operations, the sensory restrictions associated with a space environment provide a major challenge to maintaining the situation awareness (SA) required for safe operations. Multimodal displays hold promise to enhance situation awareness and task performance by utilizing different sensory modalities and maximizing their effectiveness based on appropriate interaction between modalities. During EVA, the visual and auditory channels are likely to be the most utilized with tasks such as monitoring the visual environment, attending visual and auditory displays, and maintaining multichannel auditory communications. Previous studies have shown that compared to unimodal displays (spatial auditory or 2D visual), bimodal presentation of information can improve operator performance during simulated extravehicular activity on planetary surfaces for tasks as diverse as orientation, localization or docking, particularly when the visual environment is degraded or workload is increased. Tactile displays offer a third sensory channel that may both offload information processing effort and provide a means to capture attention when urgently required. For example, recent studies suggest that including tactile cues may result in increased orientation and alerting accuracy, improved task response time and decreased workload, as well as provide self-orientation cues in microgravity on the ISS (International Space Station). An important overall issue is that context-dependent factors like task complexity, sensory degradation, peripersonal vs. extrapersonal space operations, workload, experience level, and operator fatigue tend to vary greatly in complex real-world environments and it will be difficult to design a multimodal interface that performs well under all conditions. 
As a possible solution, adaptive systems have been proposed in which the information presented to the user changes as a function of task/context-dependent factors. However, this presupposes that adequate methods for detecting and/or predicting such factors are developed. Further, research in adaptive systems for aviation suggests that they can sometimes serve to increase workload and reduce situational awareness. It will be critical to develop multimodal display guidelines that include consideration of smart systems that can select the best display method for a particular context/situation. The scope of the current work is an analysis of potential multimodal display technologies for long duration missions and, in particular, will focus on their potential role in EVA activities. The review will address multimodal (combined visual, auditory and/or tactile) displays investigated by NASA, industry, and DoD (Dept. of Defense). It also considers the need for adaptive information systems to accommodate a variety of operational contexts such as crew status (e.g., fatigue, workload level) and task environment (e.g., EVA, habitat, rover, spacecraft). Current approaches to guidelines and best practices for combining modalities for the most effective information displays are also reviewed. Potential issues in developing interface guidelines for the Exploration Information System (EIS) are briefly considered.
Sata, Yoshimi; Inagaki, Masumi; Shirane, Seiko; Kaga, Makiko
2002-07-01
In order to evaluate developmental change in visual perception, P300 event-related potentials (ERPs) in a visual oddball task were recorded in 34 healthy volunteers ranging from 7 to 37 years of age. The latency and amplitude of the visual P300 in response to Japanese ideogram stimuli (a pair of familiar Kanji characters or unfamiliar Kanji characters) and a pair of meaningless complicated figures were measured. The visual P300 was dominant over the parietal area in almost all subjects. There was a significant difference in P300 latency among the three tasks. Reaction times to both kinds of Kanji tasks were significantly shorter than those to the complicated-figure task. P300 latencies to the familiar Kanji, unfamiliar Kanji, and figure stimuli decreased until 25.8, 26.9, and 29.4 years of age, respectively, and regression analysis revealed that a positive quadratic function could be fitted to the data. Around 9 years of age, the P300 latency/age slope was largest in the unfamiliar Kanji task. These findings suggest that visual P300 development depends on both the complexity of the tasks and the specificity of the stimuli, which might reflect the variety in visual information processing.
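The quadratic regression of P300 latency on age described above can be illustrated with a short sketch; the latency values here are synthetic, chosen only to show how the age of minimum latency falls out of the fitted coefficients:

```python
# Hedged sketch of fitting a positive quadratic function to P300 latency
# as a function of age and locating the age of minimum latency. The data
# are made up for illustration, not the study's measurements.
import numpy as np

age = np.array([7, 9, 12, 15, 18, 22, 26, 30, 34, 37], dtype=float)
latency = 600.0 - 20.0 * age + 0.37 * age**2   # synthetic latencies (ms)

c2, c1, c0 = np.polyfit(age, latency, deg=2)   # latency ~ c2*age^2 + c1*age + c0
age_min = -c1 / (2.0 * c2)                     # vertex of the parabola
print(f"estimated age of minimum latency: {age_min:.1f} years")
```

A positive `c2` (upward-opening parabola) is what allows a well-defined age of minimum latency, which the study reports as falling between roughly 26 and 29 years depending on the stimulus.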
Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica
2016-01-01
Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high–low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM.
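The between-task prediction logic can be sketched as follows; the simulated data and the simple difference-of-means linear decoder are assumptions for illustration, not the authors' classifier or fMRI features:

```python
# Illustrative sketch: a linear decoder of high vs. low load is fit on a
# simulated "visual WM" dataset and evaluated, without refitting, on a
# simulated "verbal WM" dataset that shares the same load-related pattern.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_voxels = 100, 60

load_axis = rng.normal(size=n_voxels)            # shared load-related pattern
labels_vis = rng.integers(0, 2, size=n_trials)   # 0 = low load, 1 = high load
labels_verb = rng.integers(0, 2, size=n_trials)
visual = np.outer(labels_vis, load_axis) + rng.normal(scale=2.0, size=(n_trials, n_voxels))
verbal = np.outer(labels_verb, load_axis) + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# Fit a difference-of-means linear decoder on the visual WM task ...
mu_hi = visual[labels_vis == 1].mean(axis=0)
mu_lo = visual[labels_vis == 0].mean(axis=0)
w = mu_hi - mu_lo
b = -0.5 * (mu_hi + mu_lo) @ w                   # boundary at the midpoint

# ... and test it on the verbal WM task (between-task prediction).
acc = ((verbal @ w + b > 0).astype(int) == labels_verb).mean()
print(f"between-task load-decoding accuracy: {acc:.2f}")
```

Successful transfer of the decoder across tasks is the operational test for a common, shared neural load code.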
Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E
2008-10-21
Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have examined conjunctions within one sensory modality, whereas many real-world objects have multimodal features. It is not known whether the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex; when the visual but not the auditory stimulus was a target there was an SN over visual cortex; and when both auditory and visual stimuli were targets (i.e., conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality.
In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.
Brain Connectivity and Visual Attention
Parks, Emily L.
2013-01-01
Emerging hypotheses suggest that efficient cognitive functioning requires the integration of separate, but interconnected cortical networks in the brain. Although task-related measures of brain activity suggest that a frontoparietal network is associated with the control of attention, little is known regarding how components within this distributed network act together or with other networks to achieve various attentional functions. This review considers both functional and structural studies of brain connectivity, as complemented by behavioral and task-related neuroimaging data. These studies show converging results: The frontal and parietal cortical regions are active together, over time, and identifiable frontoparietal networks are active in relation to specific task demands. However, the spontaneous, low-frequency fluctuations of brain activity that occur in the resting state, without specific task demands, also exhibit patterns of connectivity that closely resemble the task-related, frontoparietal attention networks. Both task-related and resting-state networks exhibit consistent relations to behavioral measures of attention. Further, anatomical structure, particularly white matter pathways as defined by diffusion tensor imaging, places constraints on intrinsic functional connectivity. Lastly, connectivity analyses applied to investigate cognitive differences across individuals in both healthy and diseased states suggest that disconnection of attentional networks is linked to deficits in cognitive functioning, and in extreme cases, to disorders of attention. Thus, comprehensive theories of visual attention and their clinical translation depend on the continued integration of behavioral, task-related neuroimaging, and brain connectivity measures. PMID:23597177
Visual selective attention and reading efficiency are related in children.
Casco, C; Tressoldi, P E; Dellantonio, A
1998-09-01
We investigated the relationship between visual selective attention and linguistic performance. Subjects were classified into four categories according to their accuracy in a letter cancellation task involving selective attention. The task consisted of searching for a target letter among a set of background letters, and accuracy was measured as a function of set size. We found that children with the lowest performance in the cancellation task presented a significantly slower reading rate and a higher number of visual reading errors than children with the highest performance. Results also show that these groups of searchers differed significantly in a lexical search task, whereas their performance did not differ in lexical decision and syllable control tasks. The relationship between letter search and reading, together with the finding that poor reader-searchers also performed poorly on lexical search tasks involving selective attention, suggests that the relationship between letter search and reading difficulty may reflect a deficit in a visual selective attention mechanism involved in all these tasks. A deficit in visual attention can be linked to the problems that disabled readers present in the function of the magnocellular stream, which culminates in the posterior parietal cortex, an area that plays an important role in guiding visual attention.
Robot-assisted laparoscopic ultrasonography for hepatic surgery.
Schneider, Caitlin M; Peng, Peter D; Taylor, Russell H; Dachs, Gregory W; Hasser, Christopher J; DiMaio, Simon P; Choti, Michael A
2012-05-01
This study describes and evaluates a novel, robot-assisted laparoscopic ultrasonographic device for hepatic surgery. Laparoscopic liver surgery is being performed with increasing frequency. One major drawback of this approach is the limited capability of intraoperative ultrasonography (IOUS) using standard laparoscopic devices. Robotic surgery systems offer the opportunity to develop new tools to improve techniques in minimally invasive surgery. This study evaluates a new integrated ultrasonography (US) device with the da Vinci Surgical System for laparoscopic visualization, comparing it with conventional handheld laparoscopic IOUS for performing key tasks in hepatic surgery. A prototype laparoscopic IOUS instrument was developed for the da Vinci Surgical System and compared with a conventional laparoscopic US device in simulation tasks: (1) In vivo porcine hepatic visualization and probe manipulation, (2) lesion detection accuracy, and (3) biopsy precision. Usability was queried by poststudy questionnaire. The robotic US proved better than conventional laparoscopic US in liver surface exploration (85% success vs 73%; P = .030) and tool manipulation (79% vs 57%; P = .028), whereas no difference was detected in lesion identification (63 vs 58; P = .41) and needle biopsy tasks (57 vs 48; P = .11). Subjects found the robotic US to facilitate better probe positioning (80%), decrease fatigue (90%), and be more useful overall (90%) on the post-task questionnaire. We found this robot-assisted IOUS system to be practical and useful in the performance of important tasks required for hepatic surgery, outperforming free-hand laparoscopic IOUS for certain tasks, and was more subjectively usable to the surgeon. Systems such as this may expand the use of robotic surgery for complex operative procedures requiring IOUS. Copyright © 2012 Mosby, Inc. All rights reserved.
Detecting gradual visual changes in colour and brightness agnosia: a double dissociation.
Nijboer, Tanja C W; te Pas, Susan F; van der Smagt, Maarten J
2011-03-09
Two patients, one with colour agnosia and one with brightness agnosia, performed a task that required the detection of gradual temporal changes in colour and brightness. The results for these patients, who showed an average or above-average performance on several tasks designed to test low-level colour and luminance (contrast) perception in the spatial domain, yielded a double dissociation: the brightness agnosic patient was within the normal range for the coloured stimuli, but much slower to detect brightness differences, whereas the colour agnosic patient was within the normal range for the achromatic stimuli, but much slower for the coloured stimuli. These results suggest that a modality-specific impairment in the detection of gradual temporal changes might be related to, if not underlie, the phenomenon of visual agnosia.
Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.
Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe
2015-07-01
Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored by current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) is sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, in terms of both accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high-level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprostheses. We suggest that this method, that is, localization of targets of interest in the scene, may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
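As an illustration of how little information a nine-electrode display carries, a scene can be reduced to a 3 × 3 grid of mean-luminance "phosphenes". This is a minimal sketch under assumed parameters, not the authors' actual SPV rendering or their object-localization pipeline.

```python
import numpy as np

def phosphene_render(image, grid=(3, 3)):
    """Downsample a grayscale image to a coarse 'electrode' grid, as in
    simulated prosthetic vision with very few electrodes (here 9).
    Illustrative sketch only; the grid size and averaging scheme are
    assumptions, not the study's rendering method."""
    h, w = image.shape
    gh, gw = grid
    # Average luminance within each electrode's receptive field.
    cropped = image[:h - h % gh, :w - w % gw]
    return cropped.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))

# Hypothetical scene: a bright object in the center of the field.
img = np.zeros((90, 90))
img[30:60, 30:60] = 1.0
out = phosphene_render(img)
print(out)  # only the center electrode is fully active
```

Object localization in the scene, as in the study, would place the target's activation onto this grid regardless of image resolution, which is why so few electrodes can still guide reaching.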
Integrating visual learning within a model-based ATR system
NASA Astrophysics Data System (ADS)
Carlotto, Mark; Nebrich, Mark
2017-05-01
Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.
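The chip-normalization step, extracting image chips at candidate target locations and rotating and scaling them to a canonical frame, can be sketched as below. The function name, parameters, and use of `scipy.ndimage` are assumptions for illustration; the IDC internals are not described in the abstract.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def extract_chip(image, center, size, angle_deg, scale):
    """Cut a chip around a candidate detection, then rotate and rescale
    it to a canonical frame so a downstream classifier sees targets
    independent of location, orientation, and scale (illustrative
    sketch; not the actual IDC implementation)."""
    r, c = center
    half = size // 2
    chip = image[r - half:r + half, c - half:c + half]
    chip = rotate(chip, angle_deg, reshape=False, mode="nearest")
    chip = zoom(chip, scale, order=1)
    return chip

# Hypothetical aerial image with a bright "target" blob.
img = np.zeros((128, 128))
img[60:68, 60:68] = 1.0
chip = extract_chip(img, center=(64, 64), size=32, angle_deg=15, scale=2.0)
print(chip.shape)
```

Normalizing chips this way is what lets a fixed-input classifier, such as the Serre-Wolf-Poggio visual cortex model mentioned above, be applied uniformly across candidate locations.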
Warbrick, Tracy; Reske, Martina; Shah, N Jon
2014-09-22
As cognitive neuroscience methods develop, established experimental tasks are used with emerging brain imaging modalities. Here we consider transferring a paradigm with a long history of behavioral and electroencephalography (EEG) experiments, the visual oddball task, to a functional magnetic resonance imaging (fMRI) experiment. The aims of this paper are to briefly describe fMRI and when its use is appropriate in cognitive neuroscience; to illustrate how task design can influence the results of an fMRI experiment, particularly when that task is borrowed from another imaging modality; and to explain the practical aspects of performing an fMRI experiment. It is demonstrated that manipulating the task demands in the visual oddball task results in different patterns of blood oxygen level dependent (BOLD) activation. The nature of the fMRI BOLD measure means that many brain regions are found to be active in a particular task. Determining the functions of these areas of activation depends heavily on task design and analysis. The complex nature of many fMRI tasks means that the details of the task and its requirements need careful consideration when interpreting data. The data show that this is particularly important in tasks relying on a motor response as well as cognitive elements, and that covert and overt responses should be considered where possible. Furthermore, the data show that transferring an EEG paradigm to an fMRI experiment needs careful consideration; it cannot be assumed that the same paradigm will work equally well across imaging modalities. It is therefore recommended that the design of an fMRI study be pilot tested behaviorally to establish the effects of interest, and then pilot tested in the fMRI environment to ensure appropriate design, implementation, and analysis for the effects of interest.
Goddard, Erin; Clifford, Colin W G
2013-04-22
Attending selectively to changes in our visual environment may help filter less important, unchanging information within a scene. Here, we demonstrate that color changes can go unnoticed even when they occur throughout an otherwise static image. The novelty of this demonstration is that it does not rely upon masking by a visual disruption or stimulus motion, nor does it require the change to be very gradual and restricted to a small section of the image. Using a two-interval, forced-choice change-detection task and an odd-one-out localization task, we showed that subjects were slowest to respond and least accurate (implying that change was hardest to detect) when the color changes were isoluminant, smoothly varying, and asynchronous with one another. This profound change blindness offers new constraints for theories of visual change detection, implying that, in the absence of transient signals, changes in color are typically monitored at a coarse spatial scale.
Lanzilotto, Marco; Livi, Alessandro; Maranesi, Monica; Gerbella, Marzio; Barz, Falk; Ruther, Patrick; Fogassi, Leonardo; Rizzolatti, Giacomo; Bonini, Luca
2016-01-01
Grasping relies on a network of parieto-frontal areas lying on the dorsolateral and dorsomedial parts of the hemispheres. However, the initiation and sequencing of voluntary actions also requires the contribution of mesial premotor regions, particularly the pre-supplementary motor area F6. We recorded 233 F6 neurons from 2 monkeys with chronic linear multishank neural probes during reaching–grasping visuomotor tasks. We showed that F6 neurons play a role in the control of forelimb movements and some of them (26%) exhibit visual and/or motor specificity for the target object. Interestingly, area F6 neurons form 2 functionally distinct populations, showing either visually-triggered or movement-related bursts of activity, in contrast to the sustained visual-to-motor activity displayed by ventral premotor area F5 neurons recorded in the same animals and with the same task during previous studies. These findings suggest that F6 plays a role in object grasping and extend existing models of the cortical grasping network. PMID:27733538
Oculomotor Evidence for Top-Down Control following the Initial Saccade
Siebold, Alisha; van Zoest, Wieske; Donk, Mieke
2011-01-01
The goal of the current study was to investigate how salience-driven and goal-driven processes unfold during visual search over multiple eye movements. Eye movements were recorded while observers searched for a target, which was located on (Experiment 1) or defined as (Experiment 2) a specific orientation singleton. This singleton could either be the most, medium, or least salient element in the display. Results were analyzed as a function of response time separately for initial and second eye movements. Irrespective of the search task, initial saccades elicited shortly after the onset of the search display were primarily salience-driven, whereas initial saccades elicited after approximately 250 ms were completely unaffected by salience. Initial saccades were increasingly guided in line with task requirements with increasing response times. Second saccades were completely unaffected by salience and were consistently goal-driven, irrespective of response time. These results suggest that stimulus salience affects the visual system only briefly after a visual image enters the brain and has no effect thereafter. PMID:21931603
How do visual and postural cues combine for self-tilt perception during slow pitch rotations?
Scotto Di Cesare, C; Buloup, F; Mestre, D R; Bringoux, L
2014-11-01
Self-orientation perception relies on the integration of multiple sensory inputs which convey spatially-related visual and postural cues. In the present study, an experimental set-up was used to tilt the body and/or the visual scene to investigate how these postural and visual cues are integrated for self-tilt perception (the subjective sensation of being tilted). Participants were required to repeatedly rate a confidence level for self-tilt perception during slow (0.05°·s⁻¹) body and/or visual scene pitch tilts up to 19° relative to vertical. Concurrently, subjects also had to perform arm reaching movements toward a body-fixed target at certain specific angles of tilt. While performance of a concurrent motor task did not influence the main perceptual task, self-tilt detection did vary according to the visuo-postural stimuli. Slow forward or backward tilts of the visual scene alone did not induce a marked sensation of self-tilt contrary to actual body tilt. However, combined body and visual scene tilt influenced self-tilt perception more strongly, although this effect was dependent on the direction of visual scene tilt: only a forward visual scene tilt combined with a forward body tilt facilitated self-tilt detection. In such a case, visual scene tilt did not seem to induce vection but rather may have produced a deviation of the perceived orientation of the longitudinal body axis in the forward direction, which may have lowered the self-tilt detection threshold during actual forward body tilt. Copyright © 2014 Elsevier B.V. All rights reserved.
Scientific Visualization and Computational Science: Natural Partners
NASA Technical Reports Server (NTRS)
Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)
1995-01-01
Scientific visualization is developing rapidly, stimulated by computational science, which is gaining acceptance as a third mode of inquiry alongside theory and experiment. Computational science is based on numerical simulations of mathematical models derived from theory. But each individual simulation is like a hypothetical experiment: initial conditions are specified, and the result is a record of the observed conditions. Experiments can be simulated for situations that cannot really be created or controlled. Results impossible to measure can be computed. Even for observable values, computed samples are typically much denser. Numerical simulations also extend scientific exploration where the mathematics is analytically intractable. Numerical simulations are used to study phenomena from subatomic to intergalactic scales and from abstract mathematical structures to pragmatic engineering of everyday objects. But computational science methods would be almost useless without visualization. The obvious reason is that the huge amounts of data produced require the high bandwidth of the human visual system, and interactivity adds to the power. Visualization systems also provide a single context for all the activities involved, from debugging the simulations, to exploring the data, to communicating the results. Most of the presentations today have their roots in image processing, where the fundamental task is: given an image, extract information about the scene. Visualization has developed from computer graphics, and the inverse task: given a scene description, make an image. Visualization extends the graphics paradigm by expanding the possible input. The goal is still to produce images; the difficulty is that the input is not a scene description displayable by standard graphics methods. Visualization techniques must either transform the data into a scene description or extend graphics techniques to display this odd input.
Computational science is a fertile field for visualization research because the results vary so widely and include things that have no known appearance. The amount of data creates additional challenges for both hardware and software systems. Evaluations of visualization should ultimately reflect the insight gained into the scientific phenomena. So making good visualizations requires consideration of characteristics of the user and the purpose of the visualization. Knowledge about human perception and graphic design is also relevant. It is this breadth of knowledge that stimulates proposals for multidisciplinary visualization teams and intelligent visualization assistant software. Visualization is an immature field, but computational science is stimulating research on a broad front.