Sample records for tasks involving visual

  1. The involvement of central attention in visual search is determined by task demands.

    PubMed

    Han, Suk Won

    2017-04-01

    Attention, the mechanism by which a subset of sensory inputs is prioritized over others, operates at multiple processing stages. Specifically, attention enhances weak sensory signals at the perceptual stage, while it serves to select appropriate responses or consolidate sensory representations into short-term memory at the central stage. This study investigated the independence and interaction between perceptual and central attention. To do so, I used a dual-task paradigm, pairing a four-alternative choice task with a visual search task. The results showed that central attention for response selection was engaged in perceptual processing for visual search when the number of search items increased, thereby increasing the demand for serial allocation of focal attention. By contrast, central attention and perceptual attention remained independent as long as the demand for serial shifting of focal attention remained constant; decreasing stimulus contrast or increasing the set size of a parallel search did not evoke the involvement of central attention in visual search. These results suggest that the nature of the concurrent visual search process plays a crucial role in the functional interaction between the two different types of attention.

  2. Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.

    PubMed

    Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina

    2018-05-14

    The cerebral cortex modulates early sensory processing via feedback connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual-speech recognition is used to enhance or even enable understanding of what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages plays an important role in communication: together with similar findings in the auditory modality, they imply that task-dependent modulation of the sensory thalami is a general mechanism for optimizing speech recognition. Copyright © 2018. Published by Elsevier Inc.

  3. Interference with olfactory memory by visual and verbal tasks.

    PubMed

    Annett, J M; Cook, N M; Leslie, J C

    1995-06-01

    It has been claimed that olfactory memory is distinct from memory in other modalities. This study investigated the effectiveness of visual and verbal tasks in interfering with olfactory memory and included methodological changes from other recent studies. Subjects were allocated to one of four experimental conditions involving interference tasks [no interference task; visual task; verbal task; visual-plus-verbal task] and were presented with 15 target odours. Either recognition of the odours or free recall of the odour names was tested on one occasion, either within 15 minutes of presentation or one week later. Recognition and recall performance both showed effects of interference from the visual and verbal tasks, but there was no effect of time of testing. While the results may be accommodated within a dual coding framework, further work is indicated to resolve theoretical issues relating to task complexity.

  4. Classification of visual and linguistic tasks using eye-movement features.

    PubMed

    Coco, Moreno I; Keller, Frank

    2014-03-07

    The role of the task has received special attention in visual-cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al. as well as additional features from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrate that eye-movement responses make it possible to characterize the goals of these tasks. Then, we train three different types of classifiers and predict the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, i.e., initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigated. Overall, the best task classification performance was obtained with a set of seven features that included both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description).
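
    As a rough illustration of the analysis style described in this record (training classifiers on eye-movement features such as initiation time and comparing accuracy to chance), the following Python sketch uses simulated data and scikit-learn; the feature values, class structure, and classifier choice are placeholder assumptions, not the authors' dataset or pipeline.

```python
# Illustrative sketch only: classifying which task (visual search, object naming,
# scene description) generated a trial from eye-movement features, in the spirit
# of the record above. Features and data are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials_per_task = 60
tasks = ["visual_search", "object_naming", "scene_description"]

# Simulated features: initiation time, attention-allocation entropy,
# total fixation time on objects (arbitrary units).
X, y = [], []
for label, task in enumerate(tasks):
    init_time = rng.normal(loc=200 + 40 * label, scale=30, size=n_trials_per_task)
    entropy = rng.normal(loc=2.0 + 0.3 * label, scale=0.4, size=n_trials_per_task)
    fix_on_obj = rng.normal(loc=1.5 + 0.5 * label, scale=0.5, size=n_trials_per_task)
    X.append(np.column_stack([init_time, entropy, fix_on_obj]))
    y.append(np.full(n_trials_per_task, label))
X, y = np.vstack(X), np.concatenate(y)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f} (chance = 0.33)")
```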

  5. Surgical simulation tasks challenge visual working memory and visual-spatial ability differently.

    PubMed

    Schlickum, Marcus; Hedman, Leif; Enochsson, Lars; Henningsohn, Lars; Kjellin, Ann; Felländer-Tsai, Li

    2011-04-01

    New strategies for selection and training of physicians are emerging. Previous studies have demonstrated correlations of visual-spatial ability and visual working memory with surgical simulator performance. The aim of this study was to perform a detailed analysis of how these abilities are associated with metrics in simulator performance with different task content. The hypothesis is that the importance of visual-spatial ability and visual working memory varies with different task contents. Twenty-five medical students participated in the study, which involved testing visual-spatial ability using the MRT-A test and visual working memory using the RoboMemo computer program. Subjects were also trained and tested for performance in three different surgical simulators. The scores from the psychometric tests and the performance metrics were then correlated using multivariate analysis. MRT-A score correlated significantly with the performance metrics Efficiency of screening (p = 0.006) and Total time (p = 0.01) in the GI Mentor II task and Total score (p = 0.02) in the MIST-VR simulator task. In the Uro Mentor task, both the MRT-A score and the visual working memory 3-D cube test score as presented in the RoboMemo program (p = 0.02) correlated with Total score (p = 0.004). In this study we have shown that some differences exist regarding the impact of visual abilities and task content on simulator performance. When designing future cognitive training programs and testing regimes, one might therefore have to adjust the design to the specific surgical task to be trained.

  6. A task-dependent causal role for low-level visual processes in spoken word comprehension.

    PubMed

    Ostarek, Markus; Huettig, Falk

    2017-08-01

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Brain functional network connectivity based on a visual task: visual information processing-related brain regions are significantly activated in the task state.

    PubMed

    Yang, Yan-Li; Deng, Hong-Xia; Xing, Gui-Yang; Xia, Xiao-Luan; Li, Hai-Fang

    2015-02-01

    It is not clear whether the method used in functional brain-network related research can be applied to explore the feature binding mechanism of visual perception. In this study, we investigated feature binding of color and shape in visual perception. Functional magnetic resonance imaging data were collected from 38 healthy volunteers at rest and while performing a visual perception task to construct brain networks active during resting and task states. Results showed that brain regions involved in visual information processing were obviously activated during the task. The components were partitioned using a greedy algorithm, indicating the visual network existed during the resting state. Z-values in the vision-related brain regions were calculated, confirming the dynamic balance of the brain network. Connectivity between brain regions was determined, and the result showed that occipital and lingual gyri were stable brain regions in the visual system network, the parietal lobe played a very important role in the binding process of color features and shape features, and the fusiform and inferior temporal gyri were crucial for processing color and shape information. Experimental findings indicate that understanding visual feature binding and cognitive processes will help establish computational models of vision, improve image recognition technology, and provide a new theoretical mechanism for feature binding in visual perception.
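
    The record above mentions partitioning the network components with a greedy algorithm. As a hedged illustration of one common greedy approach (greedy modularity maximization; the abstract does not specify which variant the authors used), here is a minimal NetworkX sketch run on a placeholder random graph rather than real connectivity data.

```python
# Illustrative sketch only: partitioning a functional network into modules with
# greedy modularity maximization, one common "greedy algorithm" for community
# detection. The graph is a random placeholder, not the study's connectivity data.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.gnm_random_graph(30, 90, seed=0)          # placeholder "brain network"
communities = greedy_modularity_communities(G)   # greedy modularity maximization
for i, nodes in enumerate(communities):
    print(f"Module {i}: {sorted(nodes)}")
```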

  8. Investigating the visual span in comparative search: the effects of task difficulty and divided attention.

    PubMed

    Pomplun, M; Reingold, E M; Shen, J

    2001-09-01

    In three experiments, participants' visual span was measured in a comparative visual search task in which they had to detect a local match or mismatch between two displays presented side by side. Experiment 1 manipulated the difficulty of the comparative visual search task by contrasting a mismatch detection task with a substantially more difficult match detection task. In Experiment 2, participants were tested in a single-task condition involving only the visual task and a dual-task condition in which they concurrently performed an auditory task. Finally, in Experiment 3, participants performed two dual-task conditions, which differed in the difficulty of the concurrent auditory task. Both the comparative search task difficulty (Experiment 1) and the divided attention manipulation (Experiments 2 and 3) produced strong effects on visual span size.

  9. Task-related modulation of visual neglect in cancellation tasks

    PubMed Central

    Sarri, Margarita; Greenwood, Richard; Kalra, Lalit; Driver, Jon

    2008-01-01

    Unilateral neglect involves deficits of spatial exploration and awareness that do not always affect a fixed portion of extrapersonal space, but may vary with current stimulation and possibly with task demands. Here, we assessed any ‘top-down’, task-related influences on visual neglect, with novel experimental variants of the cancellation test. Many different versions of the cancellation test are used clinically, and can differ in the extent of neglect revealed, though the exact factors determining this are not fully understood. Few cancellation studies have isolated the influence of top-down factors, as typically the stimuli are changed also when comparing different tests. Within each of three cancellation studies here, we manipulated task factors, while keeping visual displays identical across conditions to equate purely bottom-up factors. Our results show that top-down task-demands can significantly modulate neglect as revealed by cancellation on the same displays. Varying the target/non-target discrimination required for identical displays has a significant impact. Varying the judgement required can also have an impact on neglect even when all items are targets, so that non-targets no longer need filtering out. Requiring local versus global aspects of shape to be judged for the same displays also has a substantial impact, but the nature of discrimination required by the task still matters even when local/global level is held constant (e.g. for different colour discriminations on the same stimuli). Finally, an exploratory analysis of lesions among our neglect patients suggested that top-down task-related influences on neglect, as revealed by the new cancellation experiments here, might potentially depend on right superior temporal gyrus surviving the lesion. PMID:18790703

  10. Task-related modulation of visual neglect in cancellation tasks.

    PubMed

    Sarri, Margarita; Greenwood, Richard; Kalra, Lalit; Driver, Jon

    2009-01-01

    Unilateral neglect involves deficits of spatial exploration and awareness that do not always affect a fixed portion of extrapersonal space, but may vary with current stimulation and possibly with task demands. Here, we assessed any 'top-down', task-related influences on visual neglect, with novel experimental variants of the cancellation test. Many different versions of the cancellation test are used clinically, and can differ in the extent of neglect revealed, though the exact factors determining this are not fully understood. Few cancellation studies have isolated the influence of top-down factors, as typically the stimuli are changed also when comparing different tests. Within each of three cancellation studies here, we manipulated task factors, while keeping visual displays identical across conditions to equate purely bottom-up factors. Our results show that top-down task demands can significantly modulate neglect as revealed by cancellation on the same displays. Varying the target/non-target discrimination required for identical displays has a significant impact. Varying the judgement required can also have an impact on neglect even when all items are targets, so that non-targets no longer need filtering out. Requiring local versus global aspects of shape to be judged for the same displays also has a substantial impact, but the nature of discrimination required by the task still matters even when local/global level is held constant (e.g. for different colour discriminations on the same stimuli). Finally, an exploratory analysis of lesions among our neglect patients suggested that top-down task-related influences on neglect, as revealed by the new cancellation experiments here, might potentially depend on right superior temporal gyrus surviving the lesion.

  11. Dynamic functional brain networks involved in simple visual discrimination learning.

    PubMed

    Fidalgo, Camino; Conejo, Nélida María; González-Pardo, Héctor; Arias, Jorge Luis

    2014-10-01

    Visual discrimination tasks have been widely used to evaluate many types of learning and memory processes. However, little is known about the brain regions involved at different stages of visual discrimination learning. We used cytochrome c oxidase histochemistry to evaluate changes in regional brain oxidative metabolism during visual discrimination learning in a water-T maze at different time points during training. As compared with control groups, the results of the present study reveal the gradual activation of cortical (prefrontal and temporal cortices) and subcortical brain regions (including the striatum and the hippocampus) associated with the mastery of a simple visual discrimination task. On the other hand, the brain regions involved and their functional interactions changed progressively over days of training. Regions associated with novelty, emotion, visuo-spatial orientation and motor aspects of the behavioral task seem to be relevant during the earlier phase of training, whereas a brain network comprising the prefrontal cortex was found throughout the whole learning process. This study highlights the relevance of functional interactions among brain regions for investigating learning and memory processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Mixed Initiative Visual Analytics Using Task-Driven Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Kristin A.; Cramer, Nicholas O.; Israel, David

    2015-12-07

    Visual data analysis is composed of a collection of cognitive actions and tasks to decompose, internalize, and recombine data to produce knowledge and insight. Visual analytic tools provide interactive visual interfaces to data to support tasks involved in discovery and sensemaking, including forming hypotheses, asking questions, and evaluating and organizing evidence. Myriad analytic models can be incorporated into visual analytic systems, at the cost of increasing complexity in the analytic discourse between user and system. Techniques exist to increase the usability of interacting with such analytic models, such as inferring data models from user interactions to steer the underlying models of the system via semantic interaction, shielding users from having to do so explicitly. Such approaches are often also referred to as mixed-initiative systems. Researchers studying the sensemaking process have called for development of tools that facilitate analytic sensemaking through a combination of human and automated activities. However, design guidelines do not exist for mixed-initiative visual analytic systems to support iterative sensemaking. In this paper, we present a candidate set of design guidelines and introduce the Active Data Environment (ADE) prototype, a spatial workspace supporting the analytic process via task recommendations invoked by inferences on user interactions within the workspace. ADE recommends data and relationships based on a task model, enabling users to co-reason with the system about their data in a single, spatial workspace. This paper provides an illustrative use case, a technical description of ADE, and a discussion of the strengths and limitations of the approach.
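
    To make the recommendation idea concrete, here is a deliberately toy sketch of the general mixed-initiative pattern of inferring interest from user interactions and ranking related data items; the similarity measure, feature vectors, and item indices are invented for illustration and do not describe the actual ADE task model.

```python
# Toy sketch only: infer an interest profile from items the analyst has interacted
# with, then recommend the most similar remaining items. NOT the ADE implementation;
# the feature vectors and cosine-similarity ranking are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(6)
n_items, n_features = 50, 8
items = rng.normal(size=(n_items, n_features))   # feature vectors for data items
interacted = [3, 17, 25]                         # items the analyst has touched

# Average the interacted items into a profile, rank others by cosine similarity.
profile = items[interacted].mean(axis=0)
sims = items @ profile / (np.linalg.norm(items, axis=1) * np.linalg.norm(profile))
candidates = [i for i in np.argsort(-sims) if i not in interacted]
print("Recommended items:", candidates[:5])
```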

  13. Attention improves encoding of task-relevant features in the human visual cortex.

    PubMed

    Jehee, Janneke F M; Brady, Devin K; Tong, Frank

    2011-06-01

    When spatial attention is directed toward a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer's task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in visual cortical areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature and not when the contrast of the grating had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks, but color-selective responses were enhanced only when color was task relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information.
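
    The decoding logic referred to here (quantifying feature-selective information with multivariate pattern analysis and comparing it across attention conditions) can be sketched roughly as below. The voxel patterns are simulated and the classifier choice is an assumption; this is not the authors' preprocessing or analysis code.

```python
# Illustrative sketch only: the general MVPA logic of training a classifier on
# voxel patterns and comparing orientation-decoding accuracy across attention
# conditions. Voxel data are simulated, not real fMRI measurements.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 100
orientation = rng.integers(0, 2, n_trials)        # 0 = one grating orientation, 1 = the other

def simulate_patterns(signal_strength):
    """Voxel patterns carrying an orientation signal of a given strength plus noise."""
    weights = rng.normal(size=n_voxels)
    return (signal_strength * np.outer(orientation * 2 - 1, weights)
            + rng.normal(size=(n_trials, n_voxels)))

for condition, strength in [("orientation attended", 0.5),
                            ("contrast attended", 0.1)]:
    X = simulate_patterns(strength)
    acc = cross_val_score(LinearSVC(max_iter=5000), X, orientation, cv=5).mean()
    print(f"{condition}: decoding accuracy = {acc:.2f}")
```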

  14. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common to all three tasks within each modality, or to an interaction between processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by the auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of the processing of irrelevant speech that would presumably distract from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. A design space of visualization tasks.

    PubMed

    Schulz, Hans-Jörg; Nocke, Thomas; Heitzler, Magnus; Schumann, Heidrun

    2013-12-01

    Knowledge about visualization tasks plays an important role in choosing or building suitable visual representations to pursue them. Yet, tasks are a multi-faceted concept and it is thus not surprising that the many existing task taxonomies and models all describe different aspects of tasks, depending on what these task descriptions aim to capture. This results in a clear need to bring these different aspects together under the common hood of a general design space of visualization tasks, which we propose in this paper. Our design space consists of five design dimensions that characterize the main aspects of tasks and that have so far been distributed across different task descriptions. We exemplify its concrete use by applying our design space in the domain of climate impact research. To this end, we propose interfaces to our design space for different user roles (developers, authors, and end users) that allow users of different levels of expertise to work with it.

  16. The impact of task demand on visual word recognition.

    PubMed

    Yang, J; Zevin, J

    2014-07-11

    The left occipitotemporal cortex has been found sensitive to the hierarchy of increasingly complex features in visually presented words, from individual letters to bigrams and morphemes. However, whether this sensitivity is a stable property of the brain regions engaged by word recognition is still unclear. To address the issue, the current study investigated whether different task demands modify this sensitivity. Participants viewed real English words and stimuli with hierarchical word-likeness while performing a lexical decision task (i.e., to decide whether each presented stimulus is a real word) and a symbol detection task. General linear model and independent component analysis indicated strong activation in the fronto-parietal and temporal regions during the two tasks. Furthermore, the bilateral inferior frontal gyrus and insula showed significant interaction effects between task demand and stimulus type in the pseudoword condition. The occipitotemporal cortex showed strong main effects for task demand and stimulus type, but no sensitivity to the hierarchical word-likeness was found. These results suggest that different task demands on semantic, phonological and orthographic processes can influence the involvement of the relevant regions during visual word recognition. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. Attention improves encoding of task-relevant features in the human visual cortex

    PubMed Central

    Jehee, Janneke F.M.; Brady, Devin K.; Tong, Frank

    2011-01-01

    When spatial attention is directed towards a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer’s task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature, and not when the grating’s contrast had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks but color-selective responses were enhanced only when color was task-relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location, but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features, and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information. PMID:21632942

  18. Task-dependent recurrent dynamics in visual cortex

    PubMed Central

    Tajima, Satohiro; Koida, Kowa; Tajima, Chihiro I; Suzuki, Hideyuki; Aihara, Kazuyuki; Komatsu, Hidehiko

    2017-01-01

    The capacity for flexible sensory-action association in animals has been related to context-dependent attractor dynamics outside the sensory cortices. Here, we report a line of evidence that flexibly modulated attractor dynamics during task switching are already present in the higher visual cortex in macaque monkeys. With a nonlinear decoding approach, we can extract the particular aspect of the neural population response that reflects the task-induced emergence of bistable attractor dynamics in a neural population, which could be obscured by standard unsupervised dimensionality reductions such as PCA. The dynamical modulation selectively increases the information relevant to task demands, indicating that such modulation is beneficial for perceptual decisions. A computational model that features nonlinear recurrent interaction among neurons with a task-dependent background input replicates the key properties observed in the experimental data. These results suggest that the context-dependent attractor dynamics involving the sensory cortex can underlie flexible perceptual abilities. DOI: http://dx.doi.org/10.7554/eLife.26868.001 PMID:28737487

  19. Neural Substrates of Visual Spatial Coding and Visual Feedback Control for Hand Movements in Allocentric and Target-Directed Tasks

    PubMed Central

    Thaler, Lore; Goodale, Melvyn A.

    2011-01-01

    Neuropsychological evidence suggests that different brain areas may be involved in movements that are directed at visual targets (e.g., pointing or reaching), and movements that are based on allocentric visual information (e.g., drawing or copying). Here we used fMRI to investigate the neural correlates of these two types of movements in healthy volunteers. Subjects (n = 14) performed right hand movements in either a target-directed task (moving a cursor to a target dot) or an allocentric task (moving a cursor to reproduce the distance and direction between two distal target dots) with or without visual feedback about their hand movement. Movements were monitored with an MR compatible touch panel. A whole brain analysis revealed that movements in allocentric conditions led to an increase in activity in the fundus of the left intra-parietal sulcus (IPS), in posterior IPS, in bilateral dorsal premotor cortex (PMd), and in the lateral occipital complex (LOC). Visual feedback in both target-directed and allocentric conditions led to an increase in activity in area MT+, superior parietal–occipital cortex (SPOC), and posterior IPS (all bilateral). In addition, we found that visual feedback affected brain activity differently in target-directed as compared to allocentric conditions, particularly in the pre-supplementary motor area, PMd, IPS, and parieto-occipital cortex. Our results, in combination with previous findings, suggest that the LOC is essential for allocentric visual coding and that SPOC is involved in visual feedback control. The differences in brain activity between target-directed and allocentric visual feedback conditions may be related to behavioral differences in visual feedback control. Our results advance the understanding of the visual coordinate frame used by the LOC. In addition, because of the nature of the allocentric task, our results have relevance for the understanding of neural substrates of magnitude estimation and vector coding of

  20. Pretraining Cortical Thickness Predicts Subsequent Perceptual Learning Rate in a Visual Search Task.

    PubMed

    Frank, Sebastian M; Reavis, Eric A; Greenlee, Mark W; Tse, Peter U

    2016-03-01

    We report that preexisting individual differences in the cortical thickness of brain areas involved in a perceptual learning task predict the subsequent perceptual learning rate. Participants trained in a motion-discrimination task involving visual search for a "V"-shaped target motion trajectory among inverted "V"-shaped distractor trajectories. Motion-sensitive area MT+ (V5) was functionally identified as critical to the task: after 3 weeks of training, activity increased in MT+ during task performance, as measured by functional magnetic resonance imaging. We computed the cortical thickness of MT+ from anatomical magnetic resonance imaging volumes collected before training started, and found that it significantly predicted subsequent perceptual learning rates in the visual search task. Participants with thicker neocortex in MT+ before training learned faster than those with thinner neocortex in that area. A similar association between cortical thickness and training success was also found in posterior parietal cortex (PPC). © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
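
    The core statistical claim here, that a pretraining structural measure predicts a subsequent learning rate, amounts to a simple regression across participants. A minimal sketch follows, with simulated values standing in for the study's cortical-thickness and learning-rate measurements.

```python
# Illustrative sketch only: relating a pretraining structural measure (e.g. cortical
# thickness of a region such as MT+) to a subsequent learning rate via linear
# regression. Values are simulated; the study's surface reconstruction, ROI
# definition, and learning-curve fitting are not shown.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_participants = 20
thickness = rng.normal(loc=2.5, scale=0.2, size=n_participants)            # mm
learning_rate = 0.4 * (thickness - 2.5) + rng.normal(0, 0.05, n_participants)

result = stats.linregress(thickness, learning_rate)
print(f"slope = {result.slope:.2f}, r = {result.rvalue:.2f}, p = {result.pvalue:.3f}")
```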

  21. Task-Dependent Masked Priming Effects in Visual Word Recognition

    PubMed Central

    Kinoshita, Sachiko; Norris, Dennis

    2012-01-01

    A method used widely to study the first 250 ms of visual word recognition is masked priming: These studies have yielded a rich set of data concerning the processes involved in recognizing letters and words. In these studies, there is an implicit assumption that the early processes in word recognition tapped by masked priming are automatic, and masked priming effects should therefore be invariant across tasks. Contrary to this assumption, masked priming effects are modulated by the task goal: For example, only word targets show priming in the lexical decision task, but both words and non-words do in the same-different task; semantic priming effects are generally weak in the lexical decision task but are robust in the semantic categorization task. We explain how such task dependence arises within the Bayesian Reader account of masked priming (Norris and Kinoshita, 2008), and how the task dissociations can be used to understand the early processes in lexical access. PMID:22675316

  22. Visual mental image generation does not overlap with visual short-term memory: a dual-task interference study.

    PubMed

    Borst, Gregoire; Niven, Elaine; Logie, Robert H

    2012-04-01

    Visual mental imagery and working memory are often assumed to play similar roles in high-order functions, but little is known of their functional relationship. In this study, we investigated whether similar cognitive processes are involved in the generation of visual mental images, in short-term retention of those mental images, and in short-term retention of visual information. Participants encoded and recalled visually or aurally presented sequences of letters under two interference conditions: spatial tapping or irrelevant visual input (IVI). In Experiment 1, spatial tapping selectively interfered with the retention of sequences of letters when participants generated visual mental images from aural presentation of the letter names and when the letters were presented visually. In Experiment 2, encoding of the sequences was disrupted by both interference tasks. However, in Experiment 3, IVI interfered with the generation of the mental images, but not with their retention, whereas spatial tapping was more disruptive during retention than during encoding. Results suggest that the temporary retention of visual mental images and of visual information may be supported by the same visual short-term memory store but that this store is not involved in image generation.

  23. Effect of a concurrent auditory task on visual search performance in a driving-related image-flicker task.

    PubMed

    Richard, Christian M; Wright, Richard D; Ee, Cheryl; Prime, Steven L; Shimizu, Yujiro; Vavrik, John

    2002-01-01

    The effect of a concurrent auditory task on visual search was investigated using an image-flicker technique. Participants were undergraduate university students with normal or corrected-to-normal vision who searched for changes in images of driving scenes that involved either driving-related (e.g., traffic light) or driving-unrelated (e.g., mailbox) scene elements. The results indicated that response times were significantly slower if the search was accompanied by a concurrent auditory task. In addition, slower overall responses to scenes involving driving-unrelated changes suggest that the underlying process affected by the concurrent auditory task is strategic in nature. These results were interpreted in terms of their implications for using a cellular telephone while driving. Actual or potential applications of this research include the development of safer in-vehicle communication devices.

  24. Correction of Refractive Errors in Rhesus Macaques (Macaca mulatta) Involved in Visual Research

    PubMed Central

    Mitchell, Jude F; Boisvert, Chantal J; Reuter, Jon D; Reynolds, John H; Leblanc, Mathias

    2014-01-01

    Macaques are the most common animal model for studies in vision research, and due to their high value as research subjects, often continue to participate in studies well into old age. As is true in humans, visual acuity in macaques is susceptible to refractive errors. Here we report a case study in which an aged macaque demonstrated clear impairment in visual acuity according to performance on a demanding behavioral task. Refraction demonstrated bilateral myopia that significantly affected behavioral and visual tasks. Using corrective lenses, we were able to restore visual acuity. After correction of myopia, the macaque's performance on behavioral tasks was comparable to that of a healthy control. We screened 20 other male macaques to assess the incidence of refractive errors and ocular pathologies in a larger population. Hyperopia was the most frequent ametropia but was mild in all cases. A second macaque had mild myopia and astigmatism in one eye. There were no other pathologies observed on ocular examination. We developed a simple behavioral task that visual research laboratories could use to test visual acuity in macaques. The test was reliable and easily learned by the animals in 1 d. This case study stresses the importance of screening macaques involved in visual science for refractive errors and ocular pathologies to ensure the quality of research; we also provide simple methodology for screening visual acuity in these animals. PMID:25427343

  25. Correction of refractive errors in rhesus macaques (Macaca mulatta) involved in visual research.

    PubMed

    Mitchell, Jude F; Boisvert, Chantal J; Reuter, Jon D; Reynolds, John H; Leblanc, Mathias

    2014-08-01

    Macaques are the most common animal model for studies in vision research, and due to their high value as research subjects, often continue to participate in studies well into old age. As is true in humans, visual acuity in macaques is susceptible to refractive errors. Here we report a case study in which an aged macaque demonstrated clear impairment in visual acuity according to performance on a demanding behavioral task. Refraction demonstrated bilateral myopia that significantly affected behavioral and visual tasks. Using corrective lenses, we were able to restore visual acuity. After correction of myopia, the macaque's performance on behavioral tasks was comparable to that of a healthy control. We screened 20 other male macaques to assess the incidence of refractive errors and ocular pathologies in a larger population. Hyperopia was the most frequent ametropia but was mild in all cases. A second macaque had mild myopia and astigmatism in one eye. There were no other pathologies observed on ocular examination. We developed a simple behavioral task that visual research laboratories could use to test visual acuity in macaques. The test was reliable and easily learned by the animals in 1 d. This case study stresses the importance of screening macaques involved in visual science for refractive errors and ocular pathologies to ensure the quality of research; we also provide simple methodology for screening visual acuity in these animals.

  26. Effects of visual and verbal interference tasks on olfactory memory: the role of task complexity.

    PubMed

    Annett, J M; Leslie, J C

    1996-08-01

    Recent studies have demonstrated that visual and verbal suppression tasks interfere with olfactory memory in a manner which is partially consistent with a dual coding interpretation. However, it has been suggested that total task complexity rather than modality specificity of the suppression tasks might account for the observed pattern of results. This study addressed the issue of whether or not the level of difficulty and complexity of suppression tasks could explain the apparent modality effects noted in earlier experiments. A total of 608 participants were each allocated to one of 19 experimental conditions involving interference tasks which varied suppression type (visual or verbal), nature of complexity (single, double or mixed) and level of difficulty (easy, optimal or difficult), and were presented with 13 target odours. Either recognition of the odours or free recall of the odour names was tested on one occasion, either within 15 minutes of presentation or one week later. Both recognition and recall performance showed an overall effect of suppression nature, suppression level and time of testing, with no effect of suppression type. The results lend only limited support to Paivio's (1986) dual coding theory, but have a number of characteristics which suggest that an adequate account of olfactory memory may be broadly similar to current theories of face and object recognition. All of these phenomena might be dealt with by an appropriately modified version of dual coding theory.

  27. Perceptual learning of basic visual features remains task specific with Training-Plus-Exposure (TPE) training.

    PubMed

    Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun

    2016-01-01

    Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer.

  28. Motor-cognitive dual-task performance: effects of a concurrent motor task on distinct components of visual processing capacity.

    PubMed

    Künstler, E C S; Finke, K; Günther, A; Klingner, C; Witte, O; Bublak, P

    2018-01-01

    Dual tasking, or the simultaneous execution of two continuous tasks, is frequently associated with a performance decline that can be explained within a capacity sharing framework. In this study, we assessed the effects of a concurrent motor task on the efficiency of visual information uptake based on the 'theory of visual attention' (TVA). TVA provides parameter estimates reflecting distinct components of visual processing capacity: perceptual threshold, visual processing speed, and visual short-term memory (VSTM) storage capacity. Moreover, goodness-of-fit values and bootstrapping estimates were derived to test whether the TVA-model is validly applicable also under dual-task conditions, and whether the robustness of parameter estimates is comparable in single- and dual-task conditions. Twenty-four subjects of middle to higher age performed a continuous tapping task and a visual processing task (whole report of briefly presented letter arrays) under both single- and dual-task conditions. Results suggest a decline of both visual processing capacity and VSTM storage capacity under dual-task conditions, while the perceptual threshold remained unaffected by a concurrent motor task. In addition, goodness-of-fit values and bootstrapping estimates support the notion that participants processed the visual task in a qualitatively comparable, although quantitatively less efficient way under dual-task conditions. The results support a capacity sharing account of motor-cognitive dual tasking and suggest that even performing a relatively simple motor task relies on central attentional capacity that is necessary for efficient visual information uptake.
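
    For readers unfamiliar with TVA, the parameters mentioned here (perceptual threshold t0, processing speed C, and VSTM capacity K) can be illustrated with a simplified whole-report simulation. The sketch below assumes equal attentional weights across letters and uses made-up parameter values; it is not the fitting procedure used in the study.

```python
# Illustrative sketch only: a simplified simulation of TVA whole report with
# perceptual threshold t0 (s), processing speed C (items/s), and VSTM capacity K.
# Equal attentional weights are assumed and parameter values are made up.
import numpy as np

def simulate_whole_report(t0=0.020, C=25.0, K=3.5, n_letters=6,
                          exposure=0.100, n_trials=10000, seed=0):
    """Return the mean number of letters reported per trial."""
    rng = np.random.default_rng(seed)
    effective_time = max(exposure - t0, 0.0)        # time above perceptual threshold
    v = C / n_letters                               # encoding rate per letter
    p_encoded = 1.0 - np.exp(-v * effective_time)   # exponential race into VSTM
    encoded = rng.binomial(n_letters, p_encoded, size=n_trials)
    # A fractional K is treated as a mixture of capacities floor(K) and floor(K)+1.
    k_low = int(np.floor(K))
    capacity = k_low + (rng.random(n_trials) < (K - k_low))
    return np.minimum(encoded, capacity).mean()

for exposure in (0.020, 0.050, 0.100, 0.200):
    print(f"exposure {exposure*1000:.0f} ms: "
          f"mean letters reported = {simulate_whole_report(exposure=exposure):.2f}")
```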

  29. A Task-Dependent Causal Role for Low-Level Visual Processes in Spoken Word Comprehension

    ERIC Educational Resources Information Center

    Ostarek, Markus; Huettig, Falk

    2017-01-01

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual…

  30. Visual perceptual training reconfigures post-task resting-state functional connectivity with a feature-representation region.

    PubMed

    Sarabi, Mitra Taghizadeh; Aoki, Ryuta; Tsumura, Kaho; Keerativittayayut, Ruedeerat; Jimura, Koji; Nakahara, Kiyoshi

    2018-01-01

    The neural mechanisms underlying visual perceptual learning (VPL) have typically been studied by examining changes in task-related brain activation after training. However, the relationship between post-task "offline" processes and VPL remains unclear. The present study examined this question by obtaining resting-state functional magnetic resonance imaging (fMRI) scans of human brains before and after a task-fMRI session involving visual perceptual training. During the task-fMRI session, participants performed a motion coherence discrimination task in which they judged the direction of moving dots with a coherence level that varied between trials (20, 40, and 80%). We found that stimulus-induced activation increased with motion coherence in the middle temporal cortex (MT+), a feature-specific region representing visual motion. On the other hand, stimulus-induced activation decreased with motion coherence in the dorsal anterior cingulate cortex (dACC) and bilateral insula, regions involved in decision making under perceptual ambiguity. Moreover, by comparing pre-task and post-task rest periods, we revealed that resting-state functional connectivity (rs-FC) with the MT+ was significantly increased after training in widespread cortical regions including the bilateral sensorimotor and temporal cortices. In contrast, rs-FC with the MT+ was significantly decreased in subcortical regions including the thalamus and putamen. Importantly, the training-induced change in rs-FC was observed only with the MT+, but not with the dACC or insula. Thus, our findings suggest that perceptual training induces plastic changes in offline functional connectivity specifically in brain regions representing the trained visual feature, emphasising the distinct roles of feature-representation regions and decision-related regions in VPL.

  31. Task-Driven Evaluation of Aggregation in Time Series Visualization

    PubMed Central

    Albers, Danielle; Correll, Michael; Gleicher, Michael

    2014-01-01

    Many visualization tasks require the viewer to make judgments about aggregate properties of data. Recent work has shown that viewers can perform such tasks effectively, for example to efficiently compare the maximums or means over ranges of data. However, this work also shows that such effectiveness depends on the designs of the displays. In this paper, we explore this relationship between aggregation task and visualization design to provide guidance on matching tasks with designs. We combine prior results from perceptual science and graphical perception to suggest a set of design variables that influence performance on various aggregate comparison tasks. We describe how choices in these variables can lead to designs that are matched to particular tasks. We use these variables to assess a set of eight different designs, predicting how they will support a set of six aggregate time series comparison tasks. A crowd-sourced evaluation confirms these predictions. These results not only provide evidence for how the specific visualizations support various tasks, but also suggest using the identified design variables as a tool for designing visualizations well suited for various types of tasks. PMID:25343147
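
    The aggregate judgments these tasks require (for example, deciding which of two ranges of a series has the larger maximum or mean) reduce to simple range aggregations. A minimal sketch on a synthetic series is shown below; the series and ranges are arbitrary placeholders.

```python
# Illustrative sketch only: the kind of aggregate comparison the visualization
# tasks above ask of viewers, computed directly on a synthetic time series.
import numpy as np

rng = np.random.default_rng(3)
series = np.cumsum(rng.normal(size=365))          # synthetic daily time series
range_a, range_b = slice(0, 90), slice(180, 270)  # two ranges to compare

for name, agg in [("maximum", np.max), ("mean", np.mean)]:
    a, b = agg(series[range_a]), agg(series[range_b])
    winner = "A" if a > b else "B"
    print(f"{name}: range A = {a:.2f}, range B = {b:.2f} -> larger in range {winner}")
```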

  32. Perceptual learning of basic visual features remains task specific with Training-Plus-Exposure (TPE) training

    PubMed Central

    Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun

    2016-01-01

    Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer. PMID:26873777

  33. Non-visual spatial tasks reveal increased interactions with stance postural control.

    PubMed

    Woollacott, Marjorie; Vander Velde, Timothy

    2008-05-07

    The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. Experiments determined the extent to which modality (visual vs. auditory) and code (non-spatial vs. spatial) specific cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object and auditory-spatial) would display contrasting levels of interactions with tandem Romberg stance postural control, and that interactions within the spatial domain would be revealed as most vulnerable to dual-task interactions. Across all cognitive tasks employed, including auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced correlated increases in verbal reaction time measures. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments) the performance of challenging auditory-spatial tasks produced significantly greater levels of postural sway than either the auditory-object or visual-object based tasks. These results suggest that it is the employment of limited non-visual spatially based coding resources that may underlie previously observed visual dual-task interference effects with stance postural control in healthy young adults.

  34. Validating a visual version of the metronome response task.

    PubMed

    Laflamme, Patrick; Seli, Paul; Smilek, Daniel

    2018-02-12

    The metronome response task (MRT), a sustained-attention task that requires participants to produce a response in synchrony with an audible metronome, was recently developed to index response variability in the context of studies on mind wandering. In the present studies, we report on the development and validation of a visual version of the MRT (the visual metronome response task; vMRT), which uses the rhythmic presentation of visual, rather than auditory, stimuli. Participants completed the vMRT (Studies 1 and 2) and the original (auditory-based) MRT (Study 2) while also responding to intermittent thought probes asking them to report the depth of their mind wandering. The results showed that (1) individual differences in response variability during the vMRT are highly reliable; (2) prior to thought probes, response variability increases with increasing depth of mind wandering; (3) response variability is highly consistent between the vMRT and the original MRT; and (4) both response variability and depth of mind wandering increase with increasing time on task. Our results indicate that the original MRT findings are consistent across the visual and auditory modalities, and that the response variability measured in both tasks indexes a non-modality-specific tendency toward behavioral variability. The vMRT will be useful in the place of the MRT in experimental contexts in which researchers' designs require a visual-based primary task.
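
    A rough sketch of how response variability in a metronome-style task can be quantified is given below: the standard deviation of response-onset asynchronies relative to the metronome, computed over a sliding window of trials. The timings are simulated, and the window size and metronome interval are assumptions, not the authors' exact scoring procedure.

```python
# Illustrative sketch only: response variability as the sliding-window standard
# deviation of asynchronies between responses and metronome onsets. Timings are
# simulated; the interval and window length are assumed values.
import numpy as np

rng = np.random.default_rng(4)
n_trials, interval = 300, 1.3                      # metronome every 1.3 s (assumed)
onsets = np.arange(n_trials) * interval
noise_sd = np.linspace(0.04, 0.12, n_trials)       # timing noise that grows with time on task
responses = onsets + rng.normal(0, noise_sd)

asynchrony = responses - onsets                    # signed offset from each metronome onset
window = 5
variability = np.array([asynchrony[i - window:i].std()
                        for i in range(window, n_trials)])
print(f"Early-task variability: {variability[:50].mean()*1000:.0f} ms")
print(f"Late-task variability:  {variability[-50:].mean()*1000:.0f} ms")
```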

  35. Temporal attention is involved in the enhancement of attentional capture with task difficulty: an event-related brain potential study.

    PubMed

    Sugimoto, Fumie; Kimura, Motohiro; Takeda, Yuji; Katayama, Jun'ichi

    2017-08-16

    In a three-stimulus oddball task, the amplitude of P3a elicited by deviant stimuli increases with an increase in the difficulty of discriminating between standard and target stimuli (i.e. task-difficulty effect on P3a), indicating that attentional capture by deviant stimuli is enhanced with an increase in task difficulty. This enhancement of attentional capture may be explained in terms of the modulation of modality-nonspecific temporal attention; that is, the participant's attention directed to the predicted timing of stimulus presentation is stronger when the task difficulty increases, which results in enhanced attentional capture. The present study examined this possibility with a modified three-stimulus oddball task consisting of a visual standard, a visual target, and four types of deviant stimuli defined by a combination of two modalities (visual and auditory) and two presentation timings (predicted and unpredicted). We expected that if the modulation of temporal attention is involved in enhanced attentional capture, then the task-difficulty effect on P3a should be reduced for unpredicted compared with predicted deviant stimuli irrespective of their modality; this is because the influence of temporal attention should be markedly weaker for unpredicted compared with predicted deviant stimuli. The results showed that the task-difficulty effect on P3a was significantly reduced for unpredicted compared with predicted deviant stimuli in both the visual and the auditory modalities. This result suggests that the modulation of modality-nonspecific temporal attention induced by the increase in task difficulty is at least partly involved in the enhancement of attentional capture by deviant stimuli.

  36. The Use of Computer-Generated Fading Materials to Teach Visual-Visual Non-Identity Matching Tasks

    ERIC Educational Resources Information Center

    Murphy, Colleen; Figueroa, Maria; Martin, Garry L.; Yu, C. T.; Figueroa, Josue

    2008-01-01

    Many everyday matching tasks taught to persons with developmental disabilities are visual-visual non-identity matching (VVNM) tasks, such as matching the printed word DOG to a picture of a dog, or matching a sock to a shoe. Research has shown that, for participants who have failed a VVNM prototype task, it is very difficult to teach them various…

  17. More visual mind wandering occurrence during visual task performance: Modality of the concurrent task affects how the mind wanders.

    PubMed

    Choi, HeeSun; Geden, Michael; Feng, Jing

    2017-01-01

    Mind wandering has been considered as a mental process that is either independent from the concurrent task or regulated like a secondary task. These accounts predict that the form of mind wandering (i.e., images or words) should be either unaffected by or different from the modality form (i.e., visual or auditory) of the concurrent task. Findings from this study challenge these accounts. We measured the rate and the form of mind wandering in three task conditions: fixation, visual 2-back, and auditory 2-back. Contrary to the general expectation, we found that mind wandering was more likely in the same form as the task. This result can be interpreted in light of recent findings on overlapping brain activations during internally- and externally-oriented processes. Our result highlights the importance of considering the unique interplay between internal and external mental processes and of measuring mind wandering as a multifaceted rather than a unitary construct.

  18. More visual mind wandering occurrence during visual task performance: Modality of the concurrent task affects how the mind wanders

    PubMed Central

    Choi, HeeSun; Geden, Michael

    2017-01-01

    Mind wandering has been considered as a mental process that is either independent from the concurrent task or regulated like a secondary task. These accounts predict that the form of mind wandering (i.e., images or words) should be either unaffected by or different from the modality form (i.e., visual or auditory) of the concurrent task. Findings from this study challenge these accounts. We measured the rate and the form of mind wandering in three task conditions: fixation, visual 2-back, and auditory 2-back. Contrary to the general expectation, we found that mind wandering was more likely in the same form as the task. This result can be interpreted in light of recent findings on overlapping brain activations during internally- and externally-oriented processes. Our result highlights the importance of considering the unique interplay between internal and external mental processes and of measuring mind wandering as a multifaceted rather than a unitary construct. PMID:29240817

  19. Visual task performance using a monocular see-through head-mounted display (HMD) while walking.

    PubMed

    Mustonen, Terhi; Berg, Mikko; Kaistinen, Jyrki; Kawai, Takashi; Häkkinen, Jukka

    2013-12-01

    A monocular see-through head-mounted display (HMD) allows the user to view displayed information while simultaneously interacting with the surrounding environment. This configuration lets people use HMDs while they are moving, such as while walking. However, sharing attention between the display and environment can compromise a person's performance in any ongoing task, and controlling one's gait may add further challenges. In this study, the authors investigated how the requirements of HMD-administered visual tasks altered users' performance while they were walking. Twenty-four university students completed 3 cognitive tasks (high- and low-working memory load, visual vigilance) on an HMD while seated and while simultaneously performing a paced walking task in a controlled environment. The results show that paced walking worsened performance (d', reaction time) in all HMD-administered tasks, but visual vigilance deteriorated more than memory performance. The HMD-administered tasks also worsened walking performance (speed, path overruns) in a manner that varied according to the overall demands of the task. These results suggest that people's ability to process information displayed on an HMD may worsen while they are in motion. Furthermore, the use of an HMD can critically alter a person's natural performance, such as their ability to guide and control their gait. In particular, visual tasks that involve constant monitoring of the HMD should be avoided. These findings highlight the need for careful consideration of the type and difficulty of information that can be presented through HMDs while still letting the user achieve an acceptable overall level of performance in various contexts of use. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  20. Multiple Electrophysiological Markers of Visual-Attentional Processing in a Novel Task Directed toward Clinical Use

    PubMed Central

    Bolduc-Teasdale, Julie; Jolicoeur, Pierre; McKerral, Michelle

    2012-01-01

    Individuals who have sustained a mild brain injury (e.g., mild traumatic brain injury or mild cerebrovascular stroke) are at risk of showing persistent cognitive symptoms (attention and memory) after the acute postinjury phase. Although studies have shown that those patients perform normally on neuropsychological tests, cognitive symptoms remain present, and there is a need for more precise diagnostic tools. The aim of this study was to develop precise and sensitive markers for the diagnosis of post-brain-injury deficits in visual and attentional functions which could be easily translated to a clinical setting. Using electrophysiology, we have developed a task that allows the tracking of the processes involved in the deployment of visual spatial attention from early stages of visual processing (N1, P1, N2, and P2) to higher levels of cognitive processing (no-go N2, P3a, P3b, N2pc, SPCN). This study presents a description of this protocol and its validation in 19 normal participants. Results indicated the statistically significant presence of all ERPs that this novel task was designed to elicit. This task could allow clinicians to track the recovery of the mechanisms involved in the deployment of visual-attentional processing, contributing to better diagnosis and treatment management for persons who suffer a brain injury. PMID:23227309

  1. A neural mechanism of dynamic gating of task-relevant information by top-down influence in primary visual cortex.

    PubMed

    Kamiyama, Akikazu; Fujita, Kazuhisa; Kashimori, Yoshiki

    2016-12-01

    Visual recognition involves bidirectional information flow, which consists of bottom-up information coding from the retina and top-down information coding from higher visual areas. Recent studies have demonstrated the involvement of early visual areas such as primary visual area (V1) in recognition and memory formation. V1 neurons are not passive transformers of sensory inputs but work as adaptive processors, changing their function according to behavioral context. Top-down signals affect the tuning properties of V1 neurons and contribute to the gating of sensory information relevant to behavior. However, little is known about the neuronal mechanism underlying the gating of task-relevant information in V1. To address this issue, we focus on task-dependent tuning modulations of V1 neurons in two tasks of perceptual learning. We develop a model of V1, which receives feedforward input from the lateral geniculate nucleus and top-down input from a higher visual area. We show here that a change in the balance between excitation and inhibition in V1 connectivity is necessary for gating task-relevant information in V1. The balance change accounts well for the modulations of tuning characteristics and temporal properties of V1 neuronal responses. We also show that the balance change of V1 connectivity is shaped by top-down signals with temporal correlations reflecting the perceptual strategies of the two tasks. We propose a learning mechanism by which synaptic balance is modulated. To conclude, the top-down signal changes the synaptic balance between excitation and inhibition in V1 connectivity, enabling early visual areas such as V1 to gate context-dependent information under multiple task performances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
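
    The gating-by-balance idea in this abstract can be illustrated with a toy firing-rate unit in which a top-down signal scales a broadly tuned inhibitory pool against a narrowly tuned excitatory drive. This is only a sketch of the general principle, not the authors' network model; the tuning functions, gain values, and parameters are arbitrary assumptions.

    ```python
    import numpy as np

    def v1_response(stimulus_ori, preferred_ori, top_down_gain):
        """Toy V1 rate unit: narrowly tuned feedforward excitation minus a
        broadly tuned inhibitory pool whose weight is scaled by a top-down
        signal. Larger top_down_gain strengthens inhibition relative to
        excitation, sharpening tuning around the task-relevant orientation.
        Illustrative only; parameters are arbitrary.
        """
        delta = np.deg2rad(stimulus_ori - preferred_ori)
        excitation = np.exp(np.cos(2 * delta))                        # narrow drive
        inhibition = top_down_gain * np.exp(0.3 * np.cos(2 * delta))  # broad pool
        return max(excitation - inhibition, 0.0)                      # rectified rate

    orientations = np.arange(0, 180, 15)
    weak = [v1_response(o, 90, 0.5) for o in orientations]
    strong = [v1_response(o, 90, 1.5) for o in orientations]
    print([round(r, 2) for r in weak])
    print([round(r, 2) for r in strong])  # narrower tuning with stronger inhibition
    ```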

  2. Accommodation in Astigmatic Children During Visual Task Performance

    PubMed Central

    Harvey, Erin M.; Miller, Joseph M.; Apple, Howard P.; Parashar, Pavan; Twelker, J. Daniel; Crescioni, Mabel; Davis, Amy L.; Leonard-Green, Tina K.; Campus, Irene; Sherrill, Duane L.

    2014-01-01

    Purpose. To determine the accuracy and stability of accommodation in uncorrected children during visual task performance. Methods. Subjects were second- to seventh-grade children from a highly astigmatic population. Measurements of noncycloplegic right eye spherical equivalent (Mnc) were obtained while uncorrected subjects performed three visual tasks at near (40 cm) and distance (2 m). Tasks included reading sentences with stimulus letter size near acuity threshold and an age-appropriate letter size (high task demands) and viewing a video (low task demand). Repeated measures ANOVA assessed the influence of astigmatism, task demand, and accommodative demand on accuracy (mean Mnc) and variability (mean SD of Mnc) of accommodation. Results. For near and distance analyses, respectively, sample size was 321 and 247, mean age was 10.37 (SD 1.77) and 10.30 (SD 1.74) years, mean cycloplegic M was 0.48 (SD 1.10) and 0.79 diopters (D) (SD 1.00), and mean astigmatism was 0.99 (SD 1.15) and 0.75 D (SD 0.96). Poor accommodative accuracy was associated with high astigmatism, low task demand (video viewing), and high accommodative demand. The negative effect of accommodative demand on accuracy increased with increasing astigmatism, with the poorest accommodative accuracy observed in high astigmats (≥3.00 D) with high accommodative demand/high hyperopia (1.53 D and 2.05 D of underaccommodation for near and distant stimuli, respectively). Accommodative variability was greatest in high astigmats and was uniformly high across task condition. No/low and moderate astigmats showed higher variability for the video task than the reading tasks. Conclusions. Accuracy of accommodation is reduced in uncorrected children with high astigmatism and high accommodative demand/high hyperopia, but improves with increased visual task demand (reading). High astigmats showed the greatest variability in accommodation. PMID:25103265

  3. Influence of social presence on eye movements in visual search tasks.

    PubMed

    Liu, Na; Yu, Ruifeng

    2017-12-01

    This study employed an eye-tracking technique to investigate the influence of social presence on eye movements in visual search tasks. A total of 20 male subjects performed visual search tasks in a 2 (target presence: present vs. absent) × 2 (task complexity: complex vs. simple) × 2 (social presence: alone vs. a human audience) within-subject experiment. Results indicated that the presence of an audience could evoke a social facilitation effect on response time in visual search tasks. Compared with working alone, the participants made fewer and shorter fixations, larger saccades and shorter scan path in simple search tasks and more and longer fixations, smaller saccades and longer scan path in complex search tasks when working with an audience. The saccade velocity and pupil diameter in the audience-present condition were larger than those in the working-alone condition. No significant change in target fixation number was observed between two social presence conditions. Practitioner Summary: This study employed an eye-tracking technique to examine the influence of social presence on eye movements in visual search tasks. Results clarified the variation mechanism and characteristics of oculomotor scanning induced by social presence in visual search.
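
    For readers less familiar with the oculomotor measures reported here (fixation count and duration, saccade amplitude, scan-path length), the sketch below shows how they are commonly derived once fixations have been parsed from raw gaze data. The fixation-detection step itself, and measures such as saccade velocity and pupil diameter, are not shown; the input format is an assumption.

    ```python
    import numpy as np

    def scanpath_metrics(fix_x, fix_y, fix_dur):
        """Summaries of a fixation sequence: count, mean duration, mean saccade
        amplitude (distance between successive fixations), and scan-path length.
        Assumes fixations have already been detected; units are up to the caller.
        """
        x, y, dur = map(np.asarray, (fix_x, fix_y, fix_dur))
        steps = np.hypot(np.diff(x), np.diff(y))  # saccade amplitudes
        return {
            "n_fixations": x.size,
            "mean_fix_duration": dur.mean(),
            "mean_saccade_amplitude": steps.mean() if steps.size else 0.0,
            "scanpath_length": steps.sum(),
        }

    print(scanpath_metrics([0, 3, 3, 8], [0, 4, 1, 1], [220, 180, 250, 300]))
    ```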

  4. Task relevance induces momentary changes in the functional visual field during reading.

    PubMed

    Kaakinen, Johanna K; Hyönä, Jukka

    2014-02-01

    In the research reported here, we examined whether task demands can induce momentary tunnel vision during reading. More specifically, we examined whether the size of the functional visual field depends on task relevance. Forty participants read an expository text with a specific task in mind while their eye movements were recorded. A display-change paradigm with random-letter strings as preview masks was used to study the size of the functional visual field within sentences that contained task-relevant and task-irrelevant information. The results showed that orthographic parafoveal-on-foveal effects and preview benefits were observed for words within task-irrelevant but not task-relevant sentences. The results indicate that the size of the functional visual field is flexible and depends on the momentary processing demands of a reading task. The higher cognitive processing requirements experienced when reading task-relevant text rather than task-irrelevant text induce momentary tunnel vision, which narrows the functional visual field.

  5. Task-relevant perceptual features can define categories in visual memory too.

    PubMed

    Antonelli, Karla B; Williams, Carrick C

    2017-11-01

    Although Konkle, Brady, Alvarez, and Oliva (2010, Journal of Experimental Psychology: General, 139(3), 558) claim that visual long-term memory (VLTM) is organized on underlying conceptual, not perceptual, information, visual memory results from visual search tasks are not well explained by this theory. We hypothesized that when viewing an object, any task-relevant visual information is critical to the organizational structure of VLTM. In two experiments, we examined the organization of VLTM by measuring the amount of retroactive interference created by objects possessing different combinations of task-relevant features. Based on task instructions, only the conceptual category was task relevant or both the conceptual category and a perceptual object feature were task relevant. Findings indicated that when made task relevant, perceptual object feature information, along with conceptual category information, could affect memory organization for objects in VLTM. However, when perceptual object feature information was task irrelevant, it did not contribute to memory organization; instead, memory defaulted to being organized around conceptual category information. These findings support the theory that a task-defined organizational structure is created in VLTM based on the relevance of particular object features and information.

  6. Task-Appropriate Visualizations: Can the Very Same Visualization Format Either Promote or Hinder Learning Depending on the Task Requirements?

    ERIC Educational Resources Information Center

    Soemer, Alexander; Schwan, Stephan

    2016-01-01

    In a series of experiments, we tested a recently proposed hypothesis stating that the degree of alignment between the form of a mental representation resulting from learning with a particular visualization format and the specific requirements of a learning task determines learning performance (task-appropriateness). Groups of participants were…

  7. Effects of speech intelligibility level on concurrent visual task performance.

    PubMed

    Payne, D G; Peters, L J; Birkmire, D P; Bonto, M A; Anastasi, J S; Wenger, M J

    1994-09-01

    Four experiments were performed to determine if changes in the level of speech intelligibility in an auditory task have an impact on performance in concurrent visual tasks. The auditory task used in each experiment was a memory search task in which subjects memorized a set of words and then decided whether auditorily presented probe items were members of the memorized set. The visual tasks used were an unstable tracking task, a spatial decision-making task, a mathematical reasoning task, and a probability monitoring task. Results showed that performance on the unstable tracking and probability monitoring tasks was unaffected by the level of speech intelligibility on the auditory task, whereas accuracy in the spatial decision-making and mathematical processing tasks was significantly worse at low speech intelligibility levels. The findings are interpreted within the framework of multiple resource theory.

  8. Sex differences in verbal and visual-spatial tasks under different hemispheric visual-field presentation conditions.

    PubMed

    Boyle, Gregory J; Neumann, David L; Furedy, John J; Westbury, H Rae

    2010-04-01

    This paper reports sex differences in cognitive task performance that emerged when 39 Australian university undergraduates (19 men, 20 women) were asked to solve verbal (lexical) and visual-spatial cognitive matching tasks which varied in difficulty and visual field of presentation. Sex significantly interacted with task type, task difficulty, laterality, and changes in performance across trials. The results revealed that the significant individual-differences variable of sex does not always emerge as a significant main effect, but instead emerges in terms of significant interactions with other variables manipulated experimentally. Our results show that sex differences must be taken into account when conducting experiments into human cognitive-task performance.

  9. Visual-search models for location-known detection tasks

    NASA Astrophysics Data System (ADS)

    Gifford, H. C.; Karbaschi, Z.; Banerjee, K.; Das, M.

    2017-03-01

    Lesion-detection studies that analyze a fixed target position are generally considered predictive of studies involving lesion search, but the extent of the correlation often goes untested. The purpose of this work was to develop a visual-search (VS) model observer for location-known tasks that, coupled with previous work on localization tasks, would allow efficient same-observer assessments of how search and other task variations can alter study outcomes. The model observer featured adjustable parameters to control the search radius around the fixed lesion location and the minimum separation between suspicious locations. Comparisons were made against human observers, a channelized Hotelling observer and a nonprewhitening observer with eye filter in a two-alternative forced-choice study with simulated lumpy background images containing stationary anatomical and quantum noise. These images modeled single-pinhole nuclear medicine scans with different pinhole sizes. When the VS observer's search radius was optimized with training images, close agreement was obtained with human-observer results. Some performance differences between the humans could be explained by varying the model observer's separation parameter. The range of optimal pinhole sizes identified by the VS observer was in agreement with the range determined with the channelized Hotelling observer.
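
    A heavily reduced sketch of a visual-search-style model observer for a location-known task follows: candidate locations near the known lesion site are ranked by a precomputed test-statistic map, a minimum separation between suspicious locations is enforced, and the largest surviving value serves as the rating (which, in a two-alternative forced-choice trial, would be compared between the two images). This is not the authors' observer; the greedy selection rule and the generic statistic map are assumptions for illustration.

    ```python
    import numpy as np

    def vs_observer_rating(stat_map, lesion_xy, search_radius, min_separation):
        """Reduced sketch of a visual-search style model observer for a
        location-known task: consider candidate locations near the known lesion
        site, enforce a minimum separation between suspicious locations, and
        return the largest remaining test-statistic value as the rating.
        The test-statistic map (e.g., a matched-filter output) is assumed given.
        """
        ys, xs = np.indices(stat_map.shape)
        near = np.hypot(xs - lesion_xy[0], ys - lesion_xy[1]) <= search_radius
        # candidates ordered from strongest to weakest response
        order = np.argsort(stat_map[near])[::-1]
        coords = np.column_stack((xs[near], ys[near]))[order]
        values = stat_map[near][order]

        kept, rating = [], -np.inf
        for (x, y), v in zip(coords, values):
            if all(np.hypot(x - kx, y - ky) >= min_separation for kx, ky in kept):
                kept.append((x, y))
                rating = max(rating, v)
        return rating

    rng = np.random.default_rng(1)
    rating = vs_observer_rating(rng.normal(size=(64, 64)), (32, 32), 6, 3)
    print(round(float(rating), 2))
    ```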

  10. Crack Damage Detection Method via Multiple Visual Features and Efficient Multi-Task Learning Model.

    PubMed

    Wang, Baoxian; Zhao, Weigang; Gao, Po; Zhang, Yufeng; Wang, Zhe

    2018-06-02

    This paper proposes an effective and efficient model for concrete crack detection. The presented work consists of two modules: multi-view image feature extraction and multi-task crack region detection. Specifically, multiple visual features (such as texture, edge, etc.) of image regions are calculated, which can suppress various background noises (such as illumination, pockmark, stripe, blurring, etc.). With the computed multiple visual features, a novel crack region detector is advocated using a multi-task learning framework, which involves restraining the variability for different crack region features and emphasizing the separability between crack region features and complex background ones. Furthermore, the extreme learning machine is utilized to construct this multi-task learning model, thereby leading to high computing efficiency and good generalization. Experimental results of the practical concrete images demonstrate that the developed algorithm can achieve favorable crack detection performance compared with traditional crack detectors.
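
    Because the detector is built on an extreme learning machine, a minimal ELM sketch may help: random input-to-hidden weights, a fixed nonlinear hidden layer, and a closed-form ridge readout. The multi-task regularization terms described in the abstract are not reproduced here; the feature dimensionality and synthetic data are assumptions.

    ```python
    import numpy as np

    def elm_train(X, Y, n_hidden=200, ridge=1e-2, seed=0):
        """Basic extreme learning machine: random input-to-hidden weights, a
        sigmoid hidden layer, and a ridge-regularized least-squares readout.
        Only the ELM building block is shown, not the paper's multi-task model.
        """
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden activations
        beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
        return W, b, beta

    def elm_predict(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return H @ beta

    # Toy example: 8-dimensional region features (e.g., texture/edge statistics)
    # with binary crack vs. background labels. Data are synthetic.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 8))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)
    W, b, beta = elm_train(X, y)
    pred = (elm_predict(X, W, b, beta) > 0.5).astype(float)
    print("training accuracy:", float((pred == y).mean()))
    ```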

  11. The role of early visual cortex in visual short-term memory and visual attention.

    PubMed

    Offen, Shani; Schluppeck, Denis; Heeger, David J

    2009-06-01

    We measured cortical activity with functional magnetic resonance imaging to probe the involvement of early visual cortex in visual short-term memory and visual attention. In four experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. Early visual cortex exhibited sustained responses throughout the delay when subjects performed attention-demanding tasks, but delay-period activity was not distinguishable from zero when subjects performed a task that required short-term memory. This dissociation reveals different computational mechanisms underlying the two processes.

  12. Task set induces dynamic reallocation of resources in visual short-term memory.

    PubMed

    Sheremata, Summer L; Shomstein, Sarah

    2017-08-01

    Successful interaction with the environment requires the ability to flexibly allocate resources to different locations in the visual field. Recent evidence suggests that visual short-term memory (VSTM) resources are distributed asymmetrically across the visual field based upon task demands. Here, we propose that context, rather than the stimulus itself, determines asymmetrical distribution of VSTM resources. To test whether context modulates the reallocation of resources to the right visual field, task set, defined by memory-load, was manipulated to influence visual short-term memory performance. Performance was measured for single-feature objects embedded within predominantly single- or two-feature memory blocks. Therefore, context was varied to determine whether task set directly predicts changes in visual field biases. In accord with the dynamic reallocation of resources hypothesis, task set, rather than aspects of the physical stimulus, drove improvements in performance in the right- visual field. Our results show, for the first time, that preparation for upcoming memory demands directly determines how resources are allocated across the visual field.

  13. When Kinesthesia Becomes Visual: A Theoretical Justification for Executing Motor Tasks in Visual Space

    PubMed Central

    Tagliabue, Michele; McIntyre, Joseph

    2013-01-01

    Several experimental studies in the literature have shown that even when performing purely kinesthetic tasks, such as reaching for a kinesthetically felt target with a hidden hand, the brain reconstructs a visual representation of the movement. In our previous studies, however, we did not observe any role of a visual representation of the movement in a purely kinesthetic task. This apparent contradiction could be related to a fundamental difference between the studied tasks. In our study subjects used the same hand to both feel the target and to perform the movement, whereas in most other studies, pointing to a kinesthetic target consisted of pointing with one hand to the finger of the other, or to some other body part. We hypothesize, therefore, that it is the necessity of performing inter-limb transformations that induces a visual representation of purely kinesthetic tasks. To test this hypothesis we asked subjects to perform the same purely kinesthetic task in two conditions: INTRA and INTER. In the former they used the right hand to both perceive the target and to reproduce its orientation. In the latter, subjects perceived the target with the left hand and responded with the right. To quantify the use of a visual representation of the movement we measured deviations induced by an imperceptible conflict that was generated between visual and kinesthetic reference frames. Our hypothesis was confirmed by the observed deviations of responses due to the conflict in the INTER, but not in the INTRA, condition. To reconcile these observations with recent theories of sensori-motor integration based on maximum likelihood estimation, we propose here a new model formulation that explicitly considers the effects of covariance between sensory signals that are directly available and internal representations that are ‘reconstructed’ from those inputs through sensori-motor transformations. PMID:23861903

  14. Comparing capacity coefficient and dual task assessment of visual multitasking workload

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaha, Leslie M.

    Capacity coefficient analysis could offer a theoretically grounded alternative approach to subjective measures and dual task assessment of cognitive workload. Workload capacity or workload efficiency is a human information processing modeling construct defined as the amount of information that can be processed by the visual cognitive system given a specified amount of time. In this paper, I explore the relationship between capacity coefficient analysis of workload efficiency and dual task response time measures. To capture multitasking performance, I examine how the relatively simple assumptions underlying the capacity construct generalize beyond single visual decision making tasks. The fundamental tools for measuring workload efficiency are the integrated hazard and reverse hazard functions of response times, which are defined by log transforms of the response time distribution. These functions are used in the capacity coefficient analysis to provide a functional assessment of the amount of work completed by the cognitive system over the entire range of response times. For the study of visual multitasking, capacity coefficient analysis enables a comparison of visual information throughput as the number of tasks increases from one to two to any number of simultaneous tasks. I illustrate the use of capacity coefficients for visual multitasking on sample data from dynamic multitasking in the modified Multi-attribute Task Battery.
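
    The capacity coefficient referred to above is conventionally built from integrated hazard functions of the response-time distributions, C(t) = H_dual(t) / (H_A(t) + H_B(t)) in the standard OR (redundant-target) formulation. The sketch below estimates it from empirical survivor functions; the paper's specific multitasking data and any extension beyond two tasks are not modeled, and the synthetic RTs are assumptions.

    ```python
    import numpy as np

    def integrated_hazard(rts, t):
        """Estimate H(t) = -log S(t) from a sample of response times, where
        S(t) is the empirical survivor function (proportion of RTs greater
        than t). A small floor keeps the log finite at large t.
        """
        rts = np.asarray(rts, dtype=float)
        survivor = np.array([(rts > ti).mean() for ti in np.atleast_1d(t)])
        return -np.log(np.clip(survivor, 1e-6, 1.0))

    def capacity_coefficient(rt_dual, rt_single_a, rt_single_b, t):
        """Standard OR-task capacity coefficient,
        C(t) = H_dual(t) / (H_a(t) + H_b(t)).
        Values near 1 suggest unlimited-capacity processing; values below 1
        suggest limited capacity when the two tasks are performed together.
        """
        return integrated_hazard(rt_dual, t) / (
            integrated_hazard(rt_single_a, t) + integrated_hazard(rt_single_b, t)
        )

    rng = np.random.default_rng(3)
    t = np.linspace(0.4, 1.2, 5)
    single_a = rng.gamma(4, 0.15, 500)  # synthetic RTs in seconds
    single_b = rng.gamma(4, 0.16, 500)
    dual = rng.gamma(5, 0.16, 500)      # slower under dual-tasking
    print(np.round(capacity_coefficient(dual, single_a, single_b, t), 2))
    ```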

  15. Memory under pressure: secondary-task effects on contextual cueing of visual search.

    PubMed

    Annac, Efsun; Manginelli, Angela A; Pollmann, Stefan; Shi, Zhuanghua; Müller, Hermann J; Geyer, Thomas

    2013-11-04

    Repeated display configurations improve visual search. Recently, the question has arisen whether this contextual cueing effect (Chun & Jiang, 1998) is itself mediated by attention, both in terms of selectivity and processing resources deployed. While it is accepted that selective attention modulates contextual cueing (Jiang & Leung, 2005), there is an ongoing debate whether the cueing effect is affected by a secondary working memory (WM) task, specifically at which stage WM influences the cueing effect: the acquisition of configural associations (e.g., Travis, Mattingley, & Dux, 2013) versus the expression of learned associations (e.g., Manginelli, Langer, Klose, & Pollmann, 2013). The present study re-investigated this issue. Observers performed a visual search in combination with a spatial WM task. The latter was applied on either early or late search trials--so as to examine whether WM load hampers the acquisition of or retrieval from contextual memory. Additionally, the WM and search tasks were performed either temporally in parallel or in succession--so as to permit the effects of spatial WM load to be dissociated from those of executive load. The secondary WM task was found to affect cueing in late, but not early, experimental trials--though only when the search and WM tasks were performed in parallel. This pattern suggests that contextual cueing involves a spatial WM resource, with spatial WM providing a workspace linking the current search array with configural long-term memory; as a result, occupying this workspace by a secondary WM task hampers the expression of learned configural associations.

  16. Visual Experience Enhances Infants' Use of Task-Relevant Information in an Action Task

    ERIC Educational Resources Information Center

    Wang, Su-hua; Kohne, Lisa

    2007-01-01

    Four experiments examined whether infants' use of task-relevant information in an action task could be facilitated by visual experience in the laboratory. Twelve- but not 9-month-old infants spontaneously used height information and chose an appropriate (taller) cover in search of a hidden tall toy. After watching examples of covering events in a…

  17. Global Statistical Learning in a Visual Search Task

    ERIC Educational Resources Information Center

    Jones, John L.; Kaschak, Michael P.

    2012-01-01

    Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…

  18. Experimental system for measurement of radiologists' performance by visual search task.

    PubMed

    Maeda, Eriko; Yoshikawa, Takeharu; Nakashima, Ryoichi; Kobayashi, Kazufumi; Yokosawa, Kazuhiko; Hayashi, Naoto; Masutani, Yoshitaka; Yoshioka, Naoki; Akahane, Masaaki; Ohtomo, Kuni

    2013-01-01

    Detection performance of radiologists for "obvious" targets should be evaluated with a visual search task instead of ROC analysis, but visual search tasks have not been applied to radiology studies. The aim of this study was to set up an environment that allows visual search tasks in radiology, to evaluate its feasibility, and to preliminarily investigate the effect of career on performance. In a darkroom, ten radiologists were asked to indicate the type of lesion by pressing buttons while images containing no lesion, a bulla, a ground-glass nodule, or a solid nodule were randomly presented on a display. Differences in accuracy and reaction times depending on board certification were investigated. The visual search task was performed successfully and feasibly. Radiologists were found to have high sensitivity, specificity, and positive and negative predictive values in both the non-board and board groups. Reaction time was under 1 second for all target types in both groups. Board-certified radiologists were significantly faster in responding to bulla, but there were no significant differences for the other targets or measures. We developed an experimental system that allows visual search experiments in radiology. Reaction time for detection of bulla was shortened with experience.

  19. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    PubMed

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
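
    One ingredient of the approach, grouping visually similar object classes before training group-specific models, can be approximated very crudely by clustering per-class mean feature vectors, as sketched below. This is a stand-in for intuition only; the paper's visual tree, learning-complexity criterion, deep-CNN features, and inter-level regularization are not reproduced, and the choice of KMeans is an assumption.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def group_classes(class_features, n_groups, seed=0):
        """Crude stand-in for visual-tree construction: represent each object
        class by the mean of its features and cluster these class means so that
        visually similar classes land in the same group.
        """
        means = np.stack([feats.mean(axis=0) for feats in class_features])
        km = KMeans(n_clusters=n_groups, n_init=10, random_state=seed)
        return km.fit_predict(means)

    # Synthetic stand-in for per-class feature matrices (n_images x n_dims)
    rng = np.random.default_rng(4)
    classes = [rng.normal(loc=c % 3, size=(50, 16)) for c in range(9)]
    print(group_classes(classes, n_groups=3))
    ```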

  20. Dynamic Integration of Task-Relevant Visual Features in Posterior Parietal Cortex

    PubMed Central

    Freedman, David J.

    2014-01-01

    The primate visual system consists of multiple hierarchically organized cortical areas, each specialized for processing distinct aspects of the visual scene. For example, color and form are encoded in ventral pathway areas such as V4 and inferior temporal cortex, while motion is preferentially processed in dorsal pathway areas such as the middle temporal area. Such representations often need to be integrated perceptually to solve tasks which depend on multiple features. We tested the hypothesis that the lateral intraparietal area (LIP) integrates disparate task-relevant visual features by recording from LIP neurons in monkeys trained to identify target stimuli composed of conjunctions of color and motion features. We show that LIP neurons exhibit integrative representations of both color and motion features when they are task relevant, and task-dependent shifts of both direction and color tuning. This suggests that LIP plays a role in flexibly integrating task-relevant sensory signals. PMID:25199703

  1. Dementia alters standing postural adaptation during a visual search task in older adult men.

    PubMed

    Jor'dan, Azizah J; McCarten, J Riley; Rottunda, Susan; Stoffregen, Thomas A; Manor, Brad; Wade, Michael G

    2015-04-23

    This study investigated the effects of dementia on standing postural adaptation during performance of a visual search task. We recruited 16 older adults with dementia and 15 without dementia. Postural sway was assessed by recording medial-lateral (ML) and anterior-posterior (AP) center-of-pressure when standing with and without a visual search task; i.e., counting target letter frequency within a block of displayed randomized letters. ML sway variability was significantly higher in those with dementia during visual search as compared to those without dementia and compared to both groups during the control condition. AP sway variability was significantly greater in those with dementia as compared to those without dementia, irrespective of task condition. In the ML direction, the absolute and percent change in sway variability between the control condition and visual search (i.e., postural adaptation) was greater in those with dementia as compared to those without. In contrast, postural adaptation to visual search was similar between groups in the AP direction. As compared to those without dementia, those with dementia identified fewer letters on the visual task. In the non-dementia group only, greater increases in postural adaptation in both the ML and AP directions correlated with lower performance on the visual task. The observed relationship between postural adaptation during the visual search task and visual search task performance--in the non-dementia group only--suggests a critical link between perception and action. Dementia reduces the capacity to perform a visual-based task while standing and thus appears to disrupt this perception-action synergy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. Prism adaptation and generalization during visually guided locomotor tasks.

    PubMed

    Alexander, M Scott; Flodin, Brent W G; Marigold, Daniel S

    2011-08-01

    The ability of individuals to adapt locomotion to constraints associated with the complex environments normally encountered in everyday life is paramount for survival. Here, we tested the ability of 24 healthy young adults to adapt to a rightward prism shift (∼11.3°) while either walking and stepping to targets (i.e., precision stepping task) or stepping over an obstacle (i.e., obstacle avoidance task). We subsequently tested for generalization to the other locomotor task. In the precision stepping task, we determined the lateral end-point error of foot placement from the targets. In the obstacle avoidance task, we determined toe clearance and lateral foot placement distance from the obstacle before and after stepping over the obstacle. We found large, rightward deviations in foot placement on initial exposure to prisms in both tasks. The majority of measures demonstrated adaptation over repeated trials, and adaptation rates were dependent mainly on the task. On removal of the prisms, we observed negative aftereffects for measures of both tasks. Additionally, we found a unilateral symmetric generalization pattern in that the left, but not the right, lower limb indicated generalization across the 2 locomotor tasks. These results indicate that the nervous system is capable of rapidly adapting to a visuomotor mismatch during visually demanding locomotor tasks and that the prism-induced adaptation can, at least partially, generalize across these tasks. The results also support the notion that the nervous system utilizes an internal model for the control of visually guided locomotion.

  3. Task- and age-dependent effects of visual stimulus properties on children's explicit numerosity judgments.

    PubMed

    Defever, Emmy; Reynvoet, Bert; Gebuis, Titia

    2013-10-01

    Researchers investigating numerosity processing manipulate the visual stimulus properties (e.g., surface). This is done to control for the confound between numerosity and its visual properties and should allow the examination of pure number processes. Nevertheless, several studies have shown that, despite different visual controls, visual cues remained to exert their influence on numerosity judgments. This study, therefore, investigated whether the impact of the visual stimulus manipulations on numerosity judgments is dependent on the task at hand (comparison task vs. same-different task) and whether this impact changes throughout development. In addition, we examined whether the influence of visual stimulus manipulations on numerosity judgments plays a role in the relation between performance on numerosity tasks and mathematics achievement. Our findings confirmed that the visual stimulus manipulations affect numerosity judgments; more important, we found that these influences changed with increasing age and differed between the comparison and the same-different tasks. Consequently, direct comparisons between numerosity studies using different tasks and age groups are difficult. No meaningful relationship between the performance on the comparison and same-different tasks and mathematics achievement was found in typically developing children, nor did we find consistent differences between children with and without mathematical learning disability (MLD). Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Slushy weightings for the optimal pilot model. [considering visual tracking task

    NASA Technical Reports Server (NTRS)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described which accounts for the effect of motion cues in a well-defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Secondly, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.

  5. Distinct Effects of Trial-Driven and Task Set-Related Control in Primary Visual Cortex

    PubMed Central

    Vaden, Ryan J.; Visscher, Kristina M.

    2015-01-01

    Task sets are task-specific configurations of cognitive processes that facilitate task-appropriate reactions to stimuli. While it is established that the trial-by-trial deployment of visual attention to expected stimuli influences neural responses in primary visual cortex (V1) in a retinotopically specific manner, it is not clear whether the mechanisms that help maintain a task set over many trials also operate with similar retinotopic specificity. Here, we address this question by using BOLD fMRI to characterize how portions of V1 that are specialized for different eccentricities respond during distinct components of an attention-demanding discrimination task: cue-driven preparation for a trial, trial-driven processing, task-initiation at the beginning of a block of trials, and task-maintenance throughout a block of trials. Tasks required either unimodal attention to an auditory or a visual stimulus or selective intermodal attention to the visual or auditory component of simultaneously presented visual and auditory stimuli. We found that while the retinotopic patterns of trial-driven and cue-driven activity depended on the attended stimulus, the retinotopic patterns of task-initiation and task-maintenance activity did not. Further, only the retinotopic patterns of trial-driven activity were found to depend on the presence of intermodal distraction. Participants who performed well on the intermodal selective attention tasks showed strong task-specific modulations of both trial-driven and task-maintenance activity. Importantly, task-related modulations of trial-driven and task-maintenance activity were in opposite directions. Together, these results confirm that there are (at least) two different processes for top-down control of V1: One, working trial-by-trial, differently modulates activity across different eccentricity sectors—portions of V1 corresponding to different visual eccentricities. The second process works across longer epochs of task performance, and

  6. Hierarchical organization of brain functional networks during visual tasks.

    PubMed

    Zhuo, Zhao; Cai, Shi-Min; Fu, Zhong-Qian; Zhang, Jie

    2011-09-01

    The functional network of the brain is known to demonstrate modular structure over different hierarchical scales. In this paper, we systematically investigated the hierarchical modular organizations of the brain functional networks that are derived from the extent of phase synchronization among high-resolution EEG time series during a visual task. In particular, we compare the modular structure of the functional network from EEG channels with that of the anatomical parcellation of the brain cortex. Our results show that the modular architectures of brain functional networks correspond well to those from the anatomical structures over different levels of hierarchy. Most importantly, we find that the consistency between the modular structures of the functional network and the anatomical network becomes more pronounced in terms of vision, sensory, vision-temporal, motor cortices during the visual task, which implies that the strong modularity in these areas forms the functional basis for the visual task. The structure-function relationship further reveals that the phase synchronization of EEG time series in the same anatomical group is much stronger than that of EEG time series from different anatomical groups during the task and that the hierarchical organization of functional brain network may be a consequence of functional segmentation of the brain cortex.

  7. The functional neuroanatomy of multitasking: combining dual tasking with a short term memory task.

    PubMed

    Deprez, Sabine; Vandenbulcke, Mathieu; Peeters, Ron; Emsell, Louise; Amant, Frederic; Sunaert, Stefan

    2013-09-01

    Insight into the neural architecture of multitasking is crucial when investigating the pathophysiology of multitasking deficits in clinical populations. Presently, little is known about how the brain combines dual-tasking with a concurrent short-term memory task, despite the relevance of this mental operation in daily life and the frequency of complaints related to this process in disease. In this study we aimed to examine how the brain responds when a memory task is added to dual-tasking. Thirty-three right-handed healthy volunteers (20 females, mean age 39.9 ± 5.8) were examined with functional brain imaging (fMRI). The paradigm consisted of two cross-modal single tasks (a visual and an auditory temporal same-different task with short delay), a dual-task combining both single tasks simultaneously, and a multi-task condition combining the dual-task with an additional short-term memory task (a temporal same-different visual task with long delay). Dual-tasking compared to both individual visual and auditory single tasks activated a predominantly right-sided fronto-parietal network and the cerebellum. When adding the additional short-term memory task, a larger and more bilateral frontoparietal network was recruited. We found enhanced activity during multitasking in components of the network that were already involved in dual-tasking, suggesting increased working memory demands, as well as recruitment of multitask-specific components including areas that are likely to be involved in online holding of visual stimuli in short-term memory, such as occipito-temporal cortex. These results confirm concurrent neural processing of a visual short-term memory task during dual-tasking and provide evidence for an effective fMRI multitasking paradigm. © 2013 Elsevier Ltd. All rights reserved.

  8. Transcranial direct current stimulation (tDCS) facilitates overall visual search response times but does not interact with visual search task factors

    PubMed Central

    Sung, Kyongje; Gordon, Barry

    2018-01-01

    Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a diffuse, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages. PMID:29558513

  9. Transcranial direct current stimulation (tDCS) facilitates overall visual search response times but does not interact with visual search task factors.

    PubMed

    Sung, Kyongje; Gordon, Barry

    2018-01-01

    Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a diffuse, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages.
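
    The additive-factors reasoning used in these experiments boils down to asking whether the tDCS effect and a task-factor effect interact. The sketch below computes the 2 x 2 interaction contrast on cell-mean RTs for synthetic data; it is meant only to make the logic concrete and is not the authors' analysis (which would use ANOVA on the real data), so the variable names and effect sizes are assumptions.

    ```python
    import numpy as np

    def interaction_contrast(rt, tdcs, difficulty):
        """Additive-factors check on a 2 x 2 design: the interaction contrast
        (difference of differences) of cell-mean RTs. A value near zero is
        consistent with the tDCS effect and the task factor loading on
        different processing stages; in practice this is tested with an
        ANOVA rather than inspected by eye.
        """
        rt, tdcs, difficulty = map(np.asarray, (rt, tdcs, difficulty))
        cell = lambda a, b: rt[(tdcs == a) & (difficulty == b)].mean()
        return (cell(1, 1) - cell(1, 0)) - (cell(0, 1) - cell(0, 0))

    # Synthetic example: anodal tDCS speeds RTs by ~20 ms at both difficulty
    # levels (an additive pattern, i.e., no interaction).
    rng = np.random.default_rng(5)
    tdcs = np.repeat([0, 1], 200)
    difficulty = np.tile(np.repeat([0, 1], 100), 2)
    rt = 500 + 60 * difficulty - 20 * tdcs + rng.normal(0, 30, 400)
    print(round(float(interaction_contrast(rt, tdcs, difficulty)), 1), "ms")
    ```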

  10. Transfer of perceptual learning between different visual tasks

    PubMed Central

    McGovern, David P.; Webb, Ben S.; Peirce, Jonathan W.

    2012-01-01

    Practice in most sensory tasks substantially improves perceptual performance. A hallmark of this ‘perceptual learning' is its specificity for the basic attributes of the trained stimulus and task. Recent studies have challenged the specificity of learned improvements, although transfer between substantially different tasks has yet to be demonstrated. Here, we measure the degree of transfer between three distinct perceptual tasks. Participants trained on an orientation discrimination, a curvature discrimination, or a ‘global form' task, all using stimuli comprised of multiple oriented elements. Before and after training they were tested on all three and a contrast discrimination control task. A clear transfer of learning was observed, in a pattern predicted by the relative complexity of the stimuli in the training and test tasks. Our results suggest that sensory improvements derived from perceptual learning can transfer between very different visual tasks. PMID:23048211

  11. Transfer of perceptual learning between different visual tasks.

    PubMed

    McGovern, David P; Webb, Ben S; Peirce, Jonathan W

    2012-10-09

    Practice in most sensory tasks substantially improves perceptual performance. A hallmark of this 'perceptual learning' is its specificity for the basic attributes of the trained stimulus and task. Recent studies have challenged the specificity of learned improvements, although transfer between substantially different tasks has yet to be demonstrated. Here, we measure the degree of transfer between three distinct perceptual tasks. Participants trained on an orientation discrimination, a curvature discrimination, or a 'global form' task, all using stimuli comprised of multiple oriented elements. Before and after training they were tested on all three and a contrast discrimination control task. A clear transfer of learning was observed, in a pattern predicted by the relative complexity of the stimuli in the training and test tasks. Our results suggest that sensory improvements derived from perceptual learning can transfer between very different visual tasks.

  12. Body sway at sea for two visual tasks and three stance widths.

    PubMed

    Stoffregen, Thomas A; Villard, Sebastien; Yu, Yawen

    2009-12-01

    On land, body sway is influenced by stance width (the distance between the feet) and by visual tasks engaged in during stance. While wider stance can be used to stabilize the body against ship motion and crewmembers are obliged to carry out many visual tasks while standing, the influence of these factors on the kinematics of body sway has not been studied at sea. Crewmembers of the RN Atlantis stood on a force plate from which we obtained data on the positional variability of the center of pressure (COP). The sea state was 2 on the Beaufort scale. We varied stance width (5 cm, 17 cm, and 30 cm) and the nature of the visual tasks. In the Inspection task, participants viewed a plain piece of white paper, while in the Search task they counted the number of target letters that appeared in a block of text. Search task performance was similar to reports from terrestrial studies. Variability of the COP position was reduced during the Search task relative to the Inspection task. Variability was also reduced during wide stance relative to narrow stance. The influence of stance width was greater than has been observed in terrestrial studies. These results suggest that two factors that influence postural sway on land (variations in stance width and in the nature of visual tasks) also influence sway at sea. We conclude that--in mild sea states--the influence of these factors is not suppressed by ship motion.

  13. Task alters category representations in prefrontal but not high-level visual cortex.

    PubMed

    Bugatus, Lior; Weiner, Kevin S; Grill-Spector, Kalanit

    2017-07-15

    A central question in neuroscience is how cognitive tasks affect category representations across the human brain. Regions in lateral occipito-temporal cortex (LOTC), ventral temporal cortex (VTC), and ventro-lateral prefrontal cortex (VLPFC) constitute the extended "what" pathway, which is considered instrumental for visual category processing. However, it is unknown (1) whether distributed responses across LOTC, VTC, and VLPFC explicitly represent category, task, or some combination of both, and (2) in what way representations across these subdivisions of the extended 'what' pathway may differ. To fill these gaps in knowledge, we scanned 12 participants using fMRI to test the effect of category and task on distributed responses across LOTC, VTC, and VLPFC. Results reveal that task and category modulate responses in both high-level visual regions, as well as prefrontal cortex. However, we found fundamentally different types of representations across the brain. Distributed responses in high-level visual regions are more strongly driven by category than task, and exhibit task-independent category representations. In contrast, distributed responses in prefrontal cortex are more strongly driven by task than category, and contain task-dependent category representations. Together, these findings of differential representations across the brain support a new idea that LOTC and VTC maintain stable category representations allowing efficient processing of visual information, while prefrontal cortex contains flexible representations in which category information may emerge only when relevant to the task. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. The contributions of visual and central attention to visual working memory.

    PubMed

    Souza, Alessandra S; Oberauer, Klaus

    2017-10-01

    We investigated the role of two kinds of attention-visual and central attention-for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention-visual or central-was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: Whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.

  15. Frequency modulation of neural oscillations according to visual task demands.

    PubMed

    Wutz, Andreas; Melcher, David; Samaha, Jason

    2018-02-06

    Temporal integration in visual perception is thought to occur within cycles of occipital alpha-band (8-12 Hz) oscillations. Successive stimuli may be integrated when they fall within the same alpha cycle and segregated for different alpha cycles. Consequently, the speed of alpha oscillations correlates with the temporal resolution of perception, such that lower alpha frequencies provide longer time windows for perceptual integration and higher alpha frequencies correspond to faster sampling and segregation. Can the brain's rhythmic activity be dynamically controlled to adjust its processing speed according to different visual task demands? We recorded magnetoencephalography (MEG) while participants switched between task instructions for temporal integration and segregation, holding stimuli and task difficulty constant. We found that the peak frequency of alpha oscillations decreased when visual task demands required temporal integration compared with segregation. Alpha frequency was strategically modulated immediately before and during stimulus processing, suggesting a preparatory top-down source of modulation. Its neural generators were located in occipital and inferotemporal cortex. The frequency modulation was specific to alpha oscillations and did not occur in the delta (1-3 Hz), theta (3-7 Hz), beta (15-30 Hz), or gamma (30-50 Hz) frequency range. These results show that alpha frequency is under top-down control to increase or decrease the temporal resolution of visual perception.
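
    The finding above rests on estimating each participant's peak alpha frequency. As a hedged illustration (not the paper's MEG pipeline), the sketch below estimates a peak frequency in the 8-12 Hz band from a single channel using Welch's method; the sampling rate, window length, and band limits are assumptions.

```python
# Minimal sketch: estimating the peak alpha frequency (8-12 Hz) of a single
# MEG/EEG channel with Welch's method. Sampling rate, window length, and
# band limits are illustrative assumptions, not the study's pipeline.
import numpy as np
from scipy.signal import welch

def peak_alpha_frequency(signal, fs=1000.0, band=(8.0, 12.0)):
    freqs, psd = welch(signal, fs=fs, nperseg=int(4 * fs))   # ~0.25 Hz resolution
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[in_band][np.argmax(psd[in_band])]

# Example: a noisy 10.4 Hz oscillation should peak near 10.4 Hz
fs = 1000.0
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * 10.4 * t) + np.random.default_rng(0).normal(size=t.size)
print(peak_alpha_frequency(x, fs=fs))
```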

  16. Human performance in a multiple-task environment: effects of automation reliability on visual attention allocation.

    PubMed

    Cullen, Ralph H; Rogers, Wendy A; Fisk, Arthur D

    2013-11-01

    Diagnostic automation has been posited to alleviate the high demands of multiple-task environments; however, mixed findings have been reported regarding the success of such performance aids. To better understand these effects, attention allocation must be studied directly. We developed a multiple-task environment to study the effects of automation on visual attention. Participants interacted with a system providing varying levels of automation and automation reliability and then were transferred to a system with no support. Attention allocation was measured by tracking the number of times each task was viewed. We found that participants receiving automation allocated their time according to task frequency and that the tasks that benefited most from automation were harmed most when it was removed. The results suggest that the degree to which automation affects multiple-task performance depends on the relative attributes of the tasks involved. Moreover, there is an inverse relationship between support and cost when automation fails. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  17. Automated Visual Cognitive Tasks for Recording Neural Activity Using a Floor Projection Maze

    PubMed Central

    Kent, Brendon W.; Yang, Fang-Chi; Burwell, Rebecca D.

    2014-01-01

    Neuropsychological tasks used in primates to investigate mechanisms of learning and memory are typically visually guided cognitive tasks. We have developed visual cognitive tasks for rats using the Floor Projection Maze that are optimized for the visual abilities of rats, permitting stronger comparisons of experimental findings with other species. In order to investigate neural correlates of learning and memory, we have integrated electrophysiological recordings into fully automated cognitive tasks on the Floor Projection Maze. Behavioral software interfaced with an animal tracking system allows monitoring of the animal's behavior with precise control of image presentation and reward contingencies for better trained animals. Integration with an in vivo electrophysiological recording system enables examination of behavioral correlates of neural activity at selected epochs of a given cognitive task. We describe protocols for a model system that combines automated visual presentation of information to rodents and intracranial reward with electrophysiological approaches. Our model system offers a sophisticated set of tools as a framework for other cognitive tasks to better isolate and identify specific mechanisms contributing to particular cognitive processes. PMID:24638057

  18. Task Demands Control Acquisition and Storage of Visual Information

    ERIC Educational Resources Information Center

    Droll, Jason A.; Hayhoe, Mary M.; Triesch, Jochen; Sullivan, Brian T.

    2005-01-01

    Attention and working memory limitations set strict limits on visual representations, yet researchers have little appreciation of how these limits constrain the acquisition of information in ongoing visually guided behavior. Subjects performed a brick sorting task in a virtual environment. A change was made to 1 of the features of the brick being…

  19. Eye Movements Reveal How Task Difficulty Moulds Visual Search

    ERIC Educational Resources Information Center

    Young, Angela H.; Hulleman, Johan

    2013-01-01

    In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…

  20. A taxonomy of visualization tasks for the analysis of biological pathway data.

    PubMed

    Murray, Paul; McGee, Fintan; Forbes, Angus G

    2017-02-15

    Understanding complicated networks of interactions and chemical components is essential to solving contemporary problems in modern biology, especially in domains such as cancer and systems research. In these domains, biological pathway data is used to represent chains of interactions that occur within a given biological process. Visual representations can help researchers understand, interact with, and reason about these complex pathways in a number of ways. At the same time, these datasets offer unique challenges for visualization, due to their complexity and heterogeneity. Here, we present a taxonomy of tasks that are regularly performed by researchers who work with biological pathway data. These tasks were identified in conjunction with interviews with several domain experts in biology, and they require a finer classification than existing taxonomies provide. We also examine existing visualization techniques that support each task, and we discuss gaps in the existing visualization space revealed by our taxonomy. Our taxonomy is designed to support the development and design of future biological pathway visualization applications. We conclude by suggesting future research directions based on our taxonomy and motivated by the comments received from our domain experts.

  1. Early auditory change detection implicitly facilitated by ignored concurrent visual change during a Braille reading task.

    PubMed

    Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya

    2013-09-01

    Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated in either the auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of the visual field were switched to deviant reversed patterns. The check patterns were set to be faint enough that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can implicitly facilitate early sensory change detection across modalities.
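
    Mismatch responses such as the MMN/MMF described above are, at their core, deviant-minus-standard difference waves computed from averaged epochs. The sketch below illustrates that computation for one channel; the array shapes, sampling rate, and quantification window are assumptions for illustration and do not reproduce the study's MEG analysis.

```python
# Minimal sketch: a mismatch response as the deviant-minus-standard
# difference wave of averaged, baseline-corrected epochs (one channel).
# Shapes, sampling rate, and the 100-250 ms window are illustrative.
import numpy as np

def mismatch_wave(epochs, labels, fs=1000.0, window=(0.100, 0.250)):
    """epochs: (n_trials, n_samples); labels: 'standard'/'deviant' per trial."""
    epochs, labels = np.asarray(epochs, float), np.asarray(labels)
    diff = epochs[labels == "deviant"].mean(axis=0) - epochs[labels == "standard"].mean(axis=0)
    t = np.arange(epochs.shape[1]) / fs
    in_win = (t >= window[0]) & (t <= window[1])
    return diff, diff[in_win].mean()        # full difference wave + mean amplitude

# Example with noise-only epochs (mean amplitude should be near zero)
rng = np.random.default_rng(2)
fake_epochs = rng.normal(0.0, 1e-6, size=(300, 400))
fake_labels = np.array(["standard"] * 250 + ["deviant"] * 50)
print(mismatch_wave(fake_epochs, fake_labels)[1])
```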

  2. Beyond a mask and against the bottleneck: retroactive dual-task interference during working memory consolidation of a masked visual target.

    PubMed

    Nieuwenstein, Mark; Wyble, Brad

    2014-06-01

    While studies on visual memory commonly assume that the consolidation of a visual stimulus into working memory is interrupted by a trailing mask, studies on dual-task interference suggest that the consolidation of a stimulus can continue for several hundred milliseconds after a mask. As a result, estimates of the time course of working memory consolidation differ by more than an order of magnitude. Here, we contrasted these opposing views by examining whether and for how long the processing of a masked display of visual stimuli can be disturbed by a trailing 2-alternative forced-choice task (2-AFC; a color discrimination task or a visual or auditory parity judgment task). The results showed that the presence of the 2-AFC task produced a pronounced retroactive interference effect that dissipated across stimulus onset asynchronies of 250-1,000 ms, indicating that the processing elicited by the 2-AFC task interfered with the gradual consolidation of the earlier shown stimuli. Furthermore, this interference effect occurred regardless of whether the to-be-remembered stimuli comprised a string of letters or an unfamiliar complex visual shape, and it occurred regardless of whether these stimuli were masked. Conversely, the interference effect was reduced when the memory load for the 1st task was reduced, or when the 2nd task was a color detection task that did not require decision making. Taken together, these findings show that the formation of a durable and consciously accessible working memory trace for a briefly shown visual stimulus can be disturbed by a trailing 2-AFC task for up to several hundred milliseconds after the stimulus has been masked. By implication, the current findings challenge the common view that working memory consolidation involves an immutable central processing bottleneck, and they also make clear that consolidation does not stop when a stimulus is masked. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  3. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study.

    PubMed

    Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico

    2012-07-24

    Robot-mediated therapy and virtual reality are becoming increasingly important in neurorehabilitation. However, there is limited neuroimaging information on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions, activations were elicited in cerebral areas involved in the visual processing of movement.

  4. Attainment of Developmental Tasks by Adolescents with Visual Impairments and Sighted Adolescents

    ERIC Educational Resources Information Center

    Pfeiffer, Jens P.; Pinquart, Martin

    2011-01-01

    This study compared the achievement of developmental tasks by 158 adolescents with visual impairments to that of 158 sighted adolescents. The groups did not differ in the fulfillment of 9 of 11 tasks. However, those with visual impairments were less successful in peer-group integration and forming intimate relationships. (Contains 4 tables.)

  5. Evaluation of several secondary tasks in the determination of permissible time delays in simulator visual and motion cues

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Riley, D. R.

    1978-01-01

    The effect of secondary tasks in determining permissible time delays in visual-motion simulation of a pursuit tracking task was examined. A single subject, a single set of aircraft handling qualities, and a single motion condition in tracking a target aircraft that oscillates sinusoidally in altitude were used. The results indicate that, in addition to the basic simulator delays, the permissible time delay is about 250 msec for the tapping, adding, and audio secondary tasks, which is approximately 125 msec less than when no secondary task is involved. The magnitudes of the primary task performance measures, however, differ only for the tapping task. A power spectral-density analysis essentially confirms the results obtained by comparing the root-mean-square performance measures. For all three secondary tasks, the total pilot workload was quite high.

  6. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

    A motion cue investigation program is reported that deals with human-factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.

  7. Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception

    PubMed Central

    Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.

    2011-01-01

    Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one: participants' performance is consistent with an optimal model in which environmental, within-category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344
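
    For the continuous case that the abstract contrasts with categorical judgments, the normative reliability-weighted rule can be written compactly; the sketch below implements that standard rule with illustrative numbers. Extending it to categorical tasks, as the study argues, additionally requires within-category (environmental) variance terms, which are omitted here.

```python
# Minimal sketch of the standard reliability-weighted cue-combination rule:
# each cue is weighted by its reliability (1 / variance), as estimated from
# single-cue performance. Values below are illustrative only.
def fuse_cues(x_a, var_a, x_v, var_v):
    """Optimal fused estimate and variance for two Gaussian-noise cues."""
    r_a, r_v = 1.0 / var_a, 1.0 / var_v      # reliabilities
    w_a = r_a / (r_a + r_v)                  # weight on the auditory cue
    fused = w_a * x_a + (1.0 - w_a) * x_v
    fused_var = 1.0 / (r_a + r_v)            # fused estimate is more reliable
    return fused, fused_var

# Example: a noisy auditory cue (variance 4) and a sharper visual cue (variance 1)
print(fuse_cues(x_a=2.0, var_a=4.0, x_v=0.0, var_v=1.0))   # -> (0.4, 0.8)
```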

  8. Visual processing speed.

    PubMed

    Owsley, Cynthia

    2013-09-20

    Older adults commonly report difficulties in visual tasks of everyday living that involve visual clutter, secondary task demands, and time-sensitive responses. These difficulties often cannot be attributed to visual sensory impairment. Techniques for measuring visual processing speed under divided-attention conditions and among visual distractors have been developed and have established construct validity: older adults who perform poorly on these tests are more likely to have problems performing everyday visual tasks. Research suggests that computer-based training exercises can increase visual processing speed in older adults and that these gains transfer to enhancement of health and functioning and a slowing of functional and health decline as people grow older. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Virtual reality in neurosurgical education: part-task ventriculostomy simulation with dynamic visual and haptic feedback.

    PubMed

    Lemole, G Michael; Banerjee, P Pat; Luciano, Cristian; Neckrysh, Sergey; Charbel, Fady T

    2007-07-01

    Mastery of the neurosurgical skill set involves many hours of supervised intraoperative training. Convergence of political, economic, and social forces has limited neurosurgical resident operative exposure. There is a need to develop realistic neurosurgical simulations that reproduce the operative experience, unrestricted by time and patient safety constraints. Computer-based, virtual reality platforms offer just such a possibility. The combination of virtual reality with dynamic three-dimensional stereoscopic visualization and haptic feedback technologies makes realistic procedural simulation possible. Most neurosurgical procedures can be conceptualized and segmented into critical task components, which can be simulated independently or in conjunction with other modules to recreate the experience of a complex neurosurgical procedure. We use the ImmersiveTouch (ImmersiveTouch, Inc., Chicago, IL) virtual reality platform, developed at the University of Illinois at Chicago, to simulate the task of ventriculostomy catheter placement as a proof-of-concept. Computed tomographic data are used to create a virtual anatomic volume. Haptic feedback offers simulated resistance and relaxation with passage of a virtual three-dimensional ventriculostomy catheter through the brain parenchyma into the ventricle. A dynamic three-dimensional graphical interface renders changing visual perspective as the user's head moves. The simulation platform was found to have realistic visual, tactile, and handling characteristics, as assessed by neurosurgical faculty, residents, and medical students. We have developed a realistic, haptics-based virtual reality simulator for neurosurgical education. Our first module recreates a critical component of the ventriculostomy placement task. This approach to task simulation can be assembled in a modular manner to reproduce entire neurosurgical procedures.

  10. Comparing two types of engineering visualizations: task-related manipulations matter.

    PubMed

    Cölln, Martin C; Kusch, Kerstin; Helmert, Jens R; Kohler, Petra; Velichkovsky, Boris M; Pannasch, Sebastian

    2012-01-01

    This study focuses on the comparison of traditional engineering drawings with a CAD (computer-aided design) visualization in terms of user performance and eye movements in an applied context. Twenty-five students of mechanical engineering completed search tasks for measures in two distinct depictions of a car engine component (engineering drawing vs. CAD model). Besides spatial dimensionality, the display types most notably differed in terms of information layout, access, and interaction options. The CAD visualization yielded better performance when users directly manipulated the object, but was inferior when employed in a conventional static manner, i.e. when only predefined views were inspected. An additional eye movement analysis revealed longer fixation durations and a stronger increase of task-relevant fixations over time when interacting with the CAD visualization. This suggests a more focused extraction and filtering of information. We conclude that the three-dimensional CAD visualization can be advantageous if its capacity for direct manipulation is exploited. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  11. The effect of haptic guidance and visual feedback on learning a complex tennis task.

    PubMed

    Marchal-Crespo, Laura; van Raai, Mark; Rauter, Georg; Wolf, Peter; Riener, Robert

    2013-11-01

    While haptic guidance can improve ongoing performance of a motor task, several studies have found that it ultimately impairs motor learning. However, some recent studies suggest that the haptic demonstration of optimal timing, rather than movement magnitude, enhances learning in subjects trained with haptic guidance. Timing of an action plays a crucial role in the proper accomplishment of many motor skills, such as hitting a moving object (discrete timing task) or learning a velocity profile (time-critical tracking task). The aim of the present study is to evaluate which feedback conditions-visual or haptic guidance-optimize learning of the discrete and continuous elements of a timing task. The experiment consisted of performing a fast tennis forehand stroke in a virtual environment. A tendon-based parallel robot connected to the end of a racket was used to apply haptic guidance during training. In two different experiments, we evaluated which feedback condition was more adequate for learning: (1) a time-dependent discrete task-learning to start a tennis stroke, and (2) a tracking task-learning to follow a velocity profile. The effect that task difficulty and subjects' initial skill level have on the selection of the optimal training condition was further evaluated. Results showed that the training condition that maximizes learning of the discrete time-dependent motor task depends on the subjects' initial skill level. Haptic guidance was especially suitable for less-skilled subjects and for especially difficult discrete tasks, while visual feedback seems to benefit more skilled subjects. Additionally, haptic guidance seemed to promote learning in the time-critical tracking task, while visual feedback tended to deteriorate performance independently of the task difficulty and subjects' initial skill level. Haptic guidance outperformed visual feedback, although additional studies are needed to further analyze the effect of other types of feedback visualization on learning.

  12. Effect of visual feedback on brain activation during motor tasks: an FMRI study.

    PubMed

    Noble, Jeremy W; Eng, Janice J; Boyd, Lara A

    2013-07-01

    This study examined the effect of visual feedback and force level on the neural mechanisms responsible for the performance of a motor task. We used a voxel-wise fMRI approach to determine the effect of visual feedback (with and without) during a grip force task at 35% and 70% of maximum voluntary contraction. Two areas (contralateral rostral premotor cortex and putamen) displayed an interaction between force and feedback conditions. When the main effect of feedback condition was analyzed, higher activation when visual feedback was available was found in 22 of the 24 active brain areas, while the two other regions (contralateral lingual gyrus and ipsilateral precuneus) showed greater levels of activity when no visual feedback was available. The results suggest that there is a potentially confounding influence of visual feedback on brain activation during a motor task, and for some regions, this is dependent on the level of force applied.

  13. Concurrent deployment of visual attention and response selection bottleneck in a dual-task: Electrophysiological and behavioural evidence.

    PubMed

    Reimer, Christina B; Strobach, Tilo; Schubert, Torsten

    2017-12-01

    Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features, resulting in a serial search process around the items in the search display (i.e., set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP) that reflects lateralized visual attention processes. If the response selection processes in Task 1 influenced the visual attention processes in Task 2, N2pc latency and amplitude should be delayed and attenuated at short SOA compared to long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was deployed concurrently with response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.

  14. Visual Task Demands and the Auditory Mismatch Negativity: An Empirical Study and a Meta-Analysis

    PubMed Central

    Wiens, Stefan; Szychowska, Malina; Nilsson, Mats E.

    2016-01-01

    Because the auditory system is particularly useful in monitoring the environment, previous research has examined whether task-irrelevant, auditory distracters are processed even if subjects focus their attention on visual stimuli. This research suggests that attentionally demanding visual tasks decrease the auditory mismatch negativity (MMN) to simultaneously presented auditory distractors. Because a recent behavioral study found that high visual perceptual load decreased detection sensitivity of simultaneous tones, we used a similar task (n = 28) to determine if high visual perceptual load would reduce the auditory MMN. Results suggested that perceptual load did not decrease the MMN. At face value, these nonsignificant findings may suggest that effects of perceptual load on the MMN are smaller than those of other demanding visual tasks. If so, effect sizes should differ systematically between the present and previous studies. We conducted a selective meta-analysis of published studies in which the MMN was derived from the EEG, the visual task demands were continuous and varied between high and low within the same task, and the task-irrelevant tones were presented in a typical oddball paradigm simultaneously with the visual stimuli. Because the meta-analysis suggested that the present (null) findings did not differ systematically from previous findings, the available evidence was combined. Results of this meta-analysis confirmed that demanding visual tasks reduce the MMN to auditory distracters. However, because the meta-analysis was based on small studies and because of the risk for publication biases, future studies should be preregistered with large samples (n > 150) to provide confirmatory evidence for the results of the present meta-analysis. These future studies should also use control conditions that reduce confounding effects of neural adaptation, and use load manipulations that are defined independently from their effects on the MMN. PMID:26741815
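
    The abstract does not spell out the meta-analytic model used to combine the present result with earlier studies, so the sketch below shows only a generic fixed-effect, inverse-variance pooling of per-study effect sizes; the effect sizes and variances are made-up numbers for illustration.

```python
# Minimal sketch: fixed-effect inverse-variance pooling of per-study effect
# sizes. The study's exact meta-analytic model is not specified in the
# abstract; all numbers below are made up for illustration.
import numpy as np

def fixed_effect_pool(effects, variances):
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    weights = 1.0 / variances                         # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se

# Example: three hypothetical MMN-attenuation effect sizes and their variances
print(fixed_effect_pool([0.45, 0.30, 0.05], [0.04, 0.06, 0.09]))
```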

  15. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    PubMed

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.

  16. How task demands shape brain responses to visual food cues.

    PubMed

    Pohl, Tanja Maria; Tempelmann, Claus; Noesselt, Toemme

    2017-06-01

    Several previous imaging studies have aimed at identifying the neural basis of visual food cue processing in humans. However, there is little consistency of the functional magnetic resonance imaging (fMRI) results across studies. Here, we tested the hypothesis that this variability across studies might - at least in part - be caused by the different tasks employed. In particular, we assessed directly the influence of task set on brain responses to food stimuli with fMRI using two tasks (colour vs. edibility judgement, between-subjects design). When participants judged colour, the left insula, the left inferior parietal lobule, occipital areas, the left orbitofrontal cortex and other frontal areas expressed enhanced fMRI responses to food relative to non-food pictures. However, when judging edibility, enhanced fMRI responses to food pictures were observed in the superior and middle frontal gyrus and in medial frontal areas including the pregenual anterior cingulate cortex and ventromedial prefrontal cortex. This pattern of results indicates that task sets can significantly alter the neural underpinnings of food cue processing. We propose that judging low-level visual stimulus characteristics - such as colour - triggers stimulus-related representations in the visual and even in gustatory cortex (insula), whereas discriminating abstract stimulus categories activates higher order representations in both the anterior cingulate and prefrontal cortex. Hum Brain Mapp 38:2897-2912, 2017. © 2017 Wiley Periodicals, Inc.

  17. Executive function deficits in team sport athletes with a history of concussion revealed by a visual-auditory dual task paradigm.

    PubMed

    Tapper, Anthony; Gonzalez, Dave; Roy, Eric; Niechwiej-Szwedo, Ewa

    2017-02-01

    The purpose of this study was to examine executive functions in team sport athletes with and without a history of concussion. Executive functions comprise many cognitive processes, including working memory, attention, and multi-tasking. Past research has shown that concussions cause difficulties in vestibular-visual and vestibular-auditory dual-tasking; however, visual-auditory tasks have rarely been examined. Twenty-nine intercollegiate varsity ice hockey athletes (age = 19.13, SD = 1.56; 15 females) performed an experimental dual-task paradigm that required simultaneously processing visual and auditory information. A brief interview, event description, and self-report questionnaires were used to assign participants to each group (concussion, no-concussion). Eighteen athletes had a history of concussion and 11 had no concussion history. The two tests involved visuospatial working memory (i.e., the Corsi block test) and auditory tone discrimination. Participants completed both tasks individually, then simultaneously. Two outcome variables were measured: Corsi block memory span and auditory tone discrimination accuracy. No differences were found when each task was performed alone; however, athletes with a history of concussion performed significantly worse on the tone discrimination task in the dual-task condition. In conclusion, long-term deficits in executive functions were associated with a prior history of concussion when cognitive resources were stressed. Evaluations of executive functions and divided attention appear to be helpful in discriminating between athletes with and without a history of concussion.

  18. Stimulus-dependent modulation of visual neglect in a touch-screen cancellation task.

    PubMed

    Keller, Ingo; Volkening, Katharina; Garbacenkaite, Ruta

    2015-05-01

    Patients with left-sided neglect frequently show omissions and repetitive behavior on cancellation tests. Using a touch-screen-based cancellation task, we tested how visual feedback and distracters influence the number of omissions and perseverations. Eighteen patients with left-sided visual neglect and 18 healthy controls performed four different cancellation tasks on an iPad touch screen: no feedback (the display did not change during the task), visual feedback (touched targets changed their color from black to green), visual feedback with distracters (20 distracters were evenly embedded in the display; detected targets changed their color from black to green), and vanishing targets (touched targets disappeared from the screen). Except for the condition with vanishing targets, neglect patients had significantly more omissions and perseverations than healthy controls in the remaining three subtests. The two conditions providing feedback by changing the target color showed the highest number of omissions. Erasing targets eliminated omissions almost completely. The highest rate of perseverations was observed in the no-feedback condition. The implementation of distracters led to a moderate number of perseverations. Visual feedback without distracters and vanishing targets abolished perseverations almost completely. Visual feedback and the presence of distracters aggravated hemispatial neglect. This finding is compatible with impaired disengagement from the ipsilesional side as an important factor in visual neglect. Improvement of cancellation behavior with vanishing targets could have therapeutic implications. (c) 2015 APA, all rights reserved.

  19. Task modulates functional connectivity networks in free viewing behavior.

    PubMed

    Seidkhani, Hossein; Nikolaev, Andrey R; Meghanathan, Radha Nila; Pezeshk, Hamid; Masoudi-Nejad, Ali; van Leeuwen, Cees

    2017-10-01

    In free visual exploration, each eye movement is immediately followed by dynamic reconfiguration of brain functional connectivity. We studied the task-dependency of this process in a combined visual search-change detection experiment. Participants viewed two nearly identical displays in succession. The first time, they had to find and remember multiple targets among distractors, so the ongoing task involved memory encoding. The second time, they had to determine whether a target had changed in orientation, so the ongoing task involved memory retrieval. From multichannel EEG recorded during 200 ms intervals time-locked to fixation onsets, we estimated functional connectivity using the weighted phase lag index in the theta, alpha, and beta frequency bands, and derived global and local measures of the functional connectivity graphs. We found differences between the two memory task conditions for several network measures, such as mean path length, radius, diameter, closeness, and eccentricity, mainly in the alpha band. Both the local and the global measures indicated that encoding involved a more segregated mode of operation than retrieval. These differences arose immediately after fixation onset and persisted for the entire duration of the lambda complex, an evoked potential commonly associated with early visual perception. We concluded that encoding and retrieval differentially shape network configurations involved in early visual perception, affecting the way the visual input is processed at each fixation. These findings demonstrate that task requirements dynamically control the functional connectivity networks involved in early visual perception. Copyright © 2017 Elsevier Inc. All rights reserved.
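
    As a rough illustration of the graph measures named above (mean path length, radius, diameter, closeness, eccentricity), the sketch below derives them from a weighted connectivity matrix using NetworkX; converting connectivity weights to path distances via 1/weight is a common convention and an assumption here, not necessarily the study's recipe.

```python
# Minimal sketch: global and local graph measures from a weighted
# connectivity matrix (e.g., weighted phase lag index values). Converting
# weights to path "distances" via 1/weight is an assumed convention.
import numpy as np
import networkx as nx

def connectivity_graph_measures(conn):
    """conn: symmetric (n x n) connectivity matrix with values in (0, 1]."""
    conn = np.asarray(conn, float)
    n = conn.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if conn[i, j] > 0:
                G.add_edge(i, j, distance=1.0 / conn[i, j])   # stronger -> shorter
    sp = dict(nx.all_pairs_dijkstra_path_length(G, weight="distance"))
    ecc = {v: max(d.values()) for v, d in sp.items()}         # local: eccentricity
    lengths = [d for v, dists in sp.items() for u, d in dists.items() if u != v]
    return {
        "mean_path_length": sum(lengths) / len(lengths),      # global
        "radius": min(ecc.values()),
        "diameter": max(ecc.values()),
        "closeness": nx.closeness_centrality(G, distance="distance"),
        "eccentricity": ecc,
    }

# Example with a small random symmetric connectivity matrix
rng = np.random.default_rng(1)
m = rng.uniform(0.1, 1.0, size=(6, 6))
print(connectivity_graph_measures((m + m.T) / 2))
```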

  20. Effects of age and auditory and visual dual tasks on closed-road driving performance.

    PubMed

    Chaparro, Alex; Wood, Joanne M; Carberry, Trent

    2005-08-01

    This study investigated how the driving performance of young and older participants is affected by visual and auditory secondary tasks on a closed driving course. Twenty-eight participants comprising two age groups (younger, mean age = 27.3 years; older, mean age = 69.2 years) drove around a 5.1-km closed-road circuit under both single- and dual-task conditions. Measures of driving performance included detection and identification of road signs, detection and avoidance of large low-contrast road hazards, gap judgment, lane keeping, and time to complete the course. The dual task required participants to verbally report the sums of pairs of single-digit numbers presented through either a computer speaker (auditorily) or a dashboard-mounted monitor (visually) while driving. Participants also completed a vision and cognitive screening battery, including LogMAR visual acuity, Pelli-Robson letter contrast sensitivity, the Trails test, and the Digit Symbol Substitution (DSS) test. Drivers reported significantly fewer signs, hit more road hazards, misjudged more gaps, and increased their time to complete the course under the dual-task (visual and auditory) conditions compared with the single-task condition. The older participants also reported significantly fewer road signs and drove significantly more slowly than the younger participants, and this was exacerbated for the visual dual-task condition. The regression analysis revealed that cognitive aging (measured by the DSS and Trails tests), rather than chronological age, was the better predictor of the declines seen in driving performance under dual-task conditions. An overall z score, which took into account both driving and secondary-task (summing) performance, was calculated for the two dual-task conditions. Performance was significantly worse for the auditory dual task compared with the visual dual task, and the older participants performed significantly worse than the younger participants. These findings demonstrate that secondary tasks degrade driving performance and that the cost is greater for older drivers.

  1. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks

    PubMed Central

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention. PMID:25745395

  2. Brain activity during divided and selective attention to auditory and visual sentence comprehension tasks.

    PubMed

    Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo

    2015-01-01

    Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention.

  3. Monitoring supports performance in a dual-task paradigm involving a risky decision-making task and a working memory task

    PubMed Central

    Gathmann, Bettina; Schiebener, Johannes; Wolf, Oliver T.; Brand, Matthias

    2015-01-01

    Performing two cognitively demanding tasks at the same time is known to decrease performance. The current study investigates the underlying executive functions of a dual-tasking situation involving the simultaneous performance of decision making under explicit risk and a working memory task. It is suggested that making a decision and performing a working memory task at the same time should particularly require monitoring—an executive control process supervising behavior and the state of processing on two tasks. To test the role of a supervisory/monitoring function in such a dual-tasking situation we investigated 122 participants with the Game of Dice Task plus 2-back task (GDT plus 2-back task). This dual task requires participants to make decisions under risk and to perform a 2-back working memory task at the same time. Furthermore, a task measuring a set of several executive functions gathered in the term concept formation (Modified Card Sorting Test, MCST) and the newly developed Balanced Switching Task (BST), measuring monitoring in particular, were used. The results demonstrate that concept formation and monitoring are involved in the simultaneous performance of decision making under risk and a working memory task. In particular, the mediation analysis revealed that BST performance partially mediates the influence of MCST performance on the GDT plus 2-back task. These findings suggest that monitoring is one important subfunction for superior performance in a dual-tasking situation including decision making under risk and a working memory task. PMID:25741308

  4. Effects of Visual Feedback and Memory on Unintentional Drifts in Performance During Finger Pressing Tasks

    PubMed Central

    Solnik, Stanislaw; Qiao, Mu; Latash, Mark L.

    2017-01-01

    This study tested two hypotheses on the nature of unintentional force drifts elicited by removing visual feedback during accurate force production tasks. The role of working memory (memory hypothesis) was explored in tasks with continuous force production, intermittent force production, and rest intervals over the same time interval. The assumption of unintentional drifts in referent coordinate for the fingertips was tested using manipulations of visual feedback: Young healthy subjects performed accurate steady-state force production tasks by pressing with the two index fingers on individual force sensors with visual feedback on the total force, sharing ratio, both, or none. Predictions based on the memory hypothesis have been falsified. In particular, we observed consistent force drifts to lower force values during continuous force production trials only. No force drift or drifts to higher forces were observed during intermittent force production trials and following rest intervals. The hypotheses based on the idea of drifts in referent finger coordinates have been confirmed. In particular, we observed superposition of two drift processes: A drift of total force to lower magnitudes and a drift of the sharing ratio to 50:50. When visual feedback on total force only was provided, the two finger forces showed drifts in opposite directions. We interpret the findings as evidence for the control of motor actions with changes in referent coordinates for participating effectors. Unintentional drifts in performance are viewed as natural relaxation processes in the involved systems; their typical time reflects stability in the direction of the drift. The magnitude of the drift was higher in the right (dominant) hand, which is consistent with the dynamic dominance hypothesis. PMID:28168396

  5. The influence of time on task on mind wandering and visual working memory.

    PubMed

    Krimsky, Marissa; Forster, Daniel E; Llabre, Maria M; Jha, Amishi P

    2017-12-01

    Working memory relies on executive resources for successful task performance, with higher demands necessitating greater resource engagement. In addition to mnemonic demands, prior studies suggest that internal sources of distraction, such as mind wandering (i.e., having off-task thoughts) and greater time on task, may tax executive resources. Herein, the consequences of mnemonic demand, mind wandering, and time on task were investigated during a visual working memory task. Participants (N=143) completed a delayed-recognition visual working memory task, with mnemonic load for visual objects manipulated across trials (1 item=low load; 2 items=high load) and subjective mind wandering assessed intermittently throughout the experiment using a self-report Likert-type scale (1=on-task, 6=off-task). Task performance (correct/incorrect response) and self-reported mind wandering data were evaluated by hierarchical linear modeling to track trial-by-trial fluctuations. Performance declined with greater time on task, and the rate of decline was steeper for high vs. low load trials. Self-reported mind wandering increased over time, and significantly varied as a function of both load and time on task. Participants reported greater mind wandering at the beginning of the experiment for low vs. high load trials; however, with greater time on task, more mind wandering was reported during high vs. low load trials. These results suggest that the availability of executive resources in support of working memory maintenance processes fluctuates in a demand-sensitive manner with time on task, and may be commandeered by mind wandering. Copyright © 2017 Elsevier B.V. All rights reserved.
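
    The trial-by-trial analysis described above is a hierarchical (mixed-effects) model with trials nested within participants. The sketch below shows one way such a model could be specified; the file name, column names, and the linear link are illustrative assumptions (a logistic mixed model would suit a binary correct/incorrect outcome more directly).

```python
# Minimal sketch of a hierarchical model for trial-by-trial data, with
# trials nested within participants. File name, column names, and the
# linear link are illustrative assumptions, not the study's exact model.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: subject, correct (0/1), load ("low"/"high"),
# trial (trial index, i.e., time on task), mw (mind-wandering rating 1-6)
df = pd.read_csv("wm_mindwandering_trials.csv")   # hypothetical file

model = smf.mixedlm("correct ~ load * trial + mw", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```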

  6. Comparison of Congruence Judgment and Auditory Localization Tasks for Assessing the Spatial Limits of Visual Capture

    PubMed Central

    Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.

    2016-01-01

    Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur was narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
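
    The Bayesian model referenced above treats visual capture as inference about whether the two cues share a source, governed in part by a prior expectation of a common location. The sketch below is a simplified causal-inference-style illustration (Gaussian sensory noise, a flat spatial prior over a fixed range, model averaging); all parameter values are illustrative, not the fitted estimates from the study.

```python
# Simplified causal-inference-style sketch of visual capture: the posterior
# probability that the audio and visual cues share a source determines how
# strongly vision pulls the auditory estimate. Gaussian noise, a flat
# spatial prior of width `span`, and model averaging are assumptions here;
# parameter values are illustrative only.
import numpy as np

def auditory_estimate(x_a, x_v, sigma_a=6.0, sigma_v=2.0, p_common=0.5, span=90.0):
    """Predicted auditory location (deg) given noisy cue measurements x_a, x_v."""
    # Likelihood of the observed disparity under a common cause ...
    var_sum = sigma_a**2 + sigma_v**2
    like_c1 = np.exp(-(x_a - x_v) ** 2 / (2 * var_sum)) / np.sqrt(2 * np.pi * var_sum)
    # ... versus independent causes drawn from the flat spatial prior
    like_c2 = 1.0 / span
    post_c1 = p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)
    # Reliability-weighted fusion if the cues share a source
    fused = (x_a / sigma_a**2 + x_v / sigma_v**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
    # Model averaging: mix fused and audio-only estimates by the posterior
    return post_c1 * fused + (1 - post_c1) * x_a

print(auditory_estimate(x_a=10.0, x_v=0.0))   # small disparity: strong capture
print(auditory_estimate(x_a=45.0, x_v=0.0))   # large disparity: little capture
```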

  7. Cross-cultural differences for three visual memory tasks in Brazilian children.

    PubMed

    Santos, F H; Mello, C B; Bueno, O F A; Dellatolas, G

    2005-10-01

    Norms for three visual memory tasks, including Corsi's block tapping test and the BEM 144 complex figures and visual recognition tests, were developed for neuropsychological assessment in Brazilian children. The tasks were administered to 127 children aged 7 to 10 years from rural and urban areas of the States of São Paulo and Minas Gerais. Analysis indicated age-related but not sex-related differences. A cross-cultural effect was observed in relation to copying and recall of complex pictures. Different performances between rural and urban children were noted.

  8. Motivational Influences on Cognition: Task Involvement, Ego Involvement, and Depth of Information Processing.

    ERIC Educational Resources Information Center

    Graham, Sandra; Golan, Shari

    1991-01-01

    Task involvement and ego involvement were studied in relation to depth of information processing for 126 fifth and sixth graders in 2 experiments. Ego involvement resulted in poorer word recall at deep rather than shallow information processing levels. Implications for the study of motivation are discussed. (SLD)

  9. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task

    PubMed Central

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-01-01

    When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
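
    The model description above maps onto a small amount of array arithmetic: match a feature map against a target template, divisively normalize by pooled local activity, and take the maximum of the resulting priority map as the next fixation. The sketch below illustrates that computation; array shapes, the pooling window, and the semisaturation constant are assumptions, and it is not the authors' implementation.

```python
# Minimal sketch of a priority map with divisive normalization: top-down
# template matching divided by pooled local activity, with the maximum of
# the map selected as the locus of attention. Shapes, pooling window, and
# the constant sigma are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def select_fixation(feature_map, target_template, pool_size=7, sigma=1e-3):
    """feature_map: (H, W, C) feature activations; target_template: (C,)."""
    drive = feature_map @ target_template                 # top-down template match
    pooled = uniform_filter(feature_map.sum(axis=-1), size=pool_size)
    priority = drive / (sigma + pooled)                   # divisive normalization
    return np.unravel_index(np.argmax(priority), priority.shape), priority

# Example: a random feature map with a target-like patch embedded at (20, 30)
rng = np.random.default_rng(0)
fmap = rng.random((64, 64, 8))
template = rng.random(8)
fmap[20, 30] += 3 * template                              # boost the target match
loc, _ = select_fixation(fmap, template)
print(loc)                                                # most likely (20, 30)
```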

  10. The influence of visual and vestibular orientation cues in a clock reading task.

    PubMed

    Davidenko, Nicolas; Cheong, Yeram; Waterman, Amanda; Smith, Jacob; Anderson, Barrett; Harmon, Sarah

    2018-05-23

    We investigated how performance in the real-life perceptual task of analog clock reading is influenced by the clock's orientation with respect to egocentric, gravitational, and visual-environmental reference frames. In Experiment 1, we designed a simple clock-reading task and found that observers' reaction time to correctly tell the time depends systematically on the clock's orientation. In Experiment 2, we dissociated egocentric from environmental reference frames by having participants sit upright or lie sideways while performing the task. We found that both reference frames substantially contribute to response times in this task. In Experiment 3, we placed upright or rotated participants in an upright or rotated immersive virtual environment, which allowed us to further dissociate vestibular from visual cues to the environmental reference frame. We found evidence of environmental reference frame effects only when visual and vestibular cues were aligned. We discuss the implications for the design of remote and head-mounted displays. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. Short-term visual deprivation reduces interference effects of task-irrelevant facial expressions on affective prosody judgments

    PubMed Central

    Fengler, Ineke; Nava, Elena; Röder, Brigitte

    2015-01-01

    Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4 week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination may emerge as a consequence of the exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks (i.e., tactile tasks) unrelated to the abilities tested later. The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination and in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which may persist over longer durations. PMID:25954166

  12. Age-related differences in processing visual device and task characteristics when using technical devices.

    PubMed

    Oehl, M; Sutter, C

    2015-05-01

    With aging, visual feedback becomes increasingly relevant in action control. Consequently, visual device and task characteristics should more and more affect tool use. Focussing on late working age, the present study aims to investigate age-related differences in processing task-irrelevant (display size) and task-relevant visual information (task difficulty). Young and middle-aged participants (20-35 and 36-64 years of age, respectively) sat in front of a touch screen with differently sized active touch areas (4″ to 12″) and performed pointing tasks with differing task difficulties (1.8-5 bits). Both display size and age affected pointing performance, but the two variables did not interact and aiming duration moderated both effects. Furthermore, task difficulty affected the pointing durations of middle-aged adults more so than those of young adults. Again, aiming duration accounted for the variance in the data. The onset of an age-related decline in aiming duration can be clearly located in middle adulthood. Thus, the fine psychomotor ability "aiming" is a moderator and predictor for age-related differences in pointing tasks. The results support a user-specific design for small technical devices with touch interfaces. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  13. Task-dependent engagements of the primary visual cortex during kinesthetic and visual motor imagery.

    PubMed

    Mizuguchi, Nobuaki; Nakamura, Maiko; Kanosue, Kazuyuki

    2017-01-01

    Motor imagery can be divided into kinesthetic and visual aspects. In the present study, we investigated excitability in the corticospinal tract and primary visual cortex (V1) during kinesthetic and visual motor imagery. To accomplish this, we measured motor evoked potentials (MEPs) and probability of phosphene occurrence during the two types of motor imageries of finger tapping. The MEPs and phosphenes were induced by transcranial magnetic stimulation to the primary motor cortex and V1, respectively. The amplitudes of MEPs and probability of phosphene occurrence during motor imagery were normalized based on the values obtained at rest. Corticospinal excitability increased during both kinesthetic and visual motor imagery, while excitability in V1 was increased only during visual motor imagery. These results imply that modulation of cortical excitability during kinesthetic and visual motor imagery is task dependent. The present finding aids in the understanding of the neural mechanisms underlying motor imagery and provides useful information for the use of motor imagery in rehabilitation or motor imagery training. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Assessment of image quality in soft tissue and bone visualization tasks for a dedicated extremity cone-beam CT system.

    PubMed

    Demehri, S; Muhit, A; Zbijewski, W; Stayman, J W; Yorkston, J; Packard, N; Senn, R; Yang, D; Foos, D; Thawait, G K; Fayad, L M; Chhabra, A; Carrino, J A; Siewerdsen, J H

    2015-06-01

    To assess visualization tasks using cone-beam CT (CBCT) compared to multi-detector CT (MDCT) for musculoskeletal extremity imaging. Ten cadaveric hands and ten knees were examined using a dedicated CBCT prototype and a clinical multi-detector CT using nominal protocols (80 kVp, 108 mAs for CBCT; 120 kVp, 300 mAs for MDCT). Soft tissue and bone visualization tasks were assessed by four radiologists using five-point satisfaction (for CBCT and MDCT individually) and five-point preference (side-by-side CBCT versus MDCT image quality comparison) rating tests. Ratings were analyzed using Kruskal-Wallis and Wilcoxon signed-rank tests, and observer agreement was assessed using the kappa statistic. Knee CBCT images were rated "excellent" or "good" (median scores 5 and 4) for "bone" and "soft tissue" visualization tasks. Hand CBCT images were rated "excellent" or "adequate" (median scores 5 and 3) for "bone" and "soft tissue" visualization tasks. Preference tests rated CBCT equivalent or superior to MDCT for bone visualization and favoured the MDCT for soft tissue visualization tasks. Intraobserver agreement for CBCT satisfaction tests was fair to almost perfect (κ ~ 0.26-0.92), and interobserver agreement was fair to moderate (κ ~ 0.27-0.54). CBCT provided excellent image quality for bone visualization and adequate image quality for soft tissue visualization tasks. • CBCT provided adequate image quality for diagnostic tasks in extremity imaging. • CBCT images were "excellent" for "bone" and "good/adequate" for "soft tissue" visualization tasks. • CBCT image quality was equivalent/superior to MDCT for bone visualization tasks.
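
    For readers unfamiliar with the statistics named above, the following Python sketch applies the same tests (Kruskal-Wallis, Wilcoxon signed-rank, and a kappa agreement coefficient) to made-up five-point ratings from two hypothetical readers; it is illustrative only and not the study's analysis code.

```python
# Illustrative statistics on fake 5-point image-quality ratings (not study data).
import numpy as np
from scipy.stats import kruskal, wilcoxon
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
cbct_reader1 = rng.integers(3, 6, size=20)   # fake CBCT satisfaction ratings (1-5)
cbct_reader2 = np.clip(cbct_reader1 + rng.integers(-1, 2, size=20), 1, 5)
mdct_reader1 = rng.integers(3, 6, size=20)   # fake MDCT ratings for the same cases

# Kruskal-Wallis: do rating distributions differ between modalities?
print(kruskal(cbct_reader1, mdct_reader1))

# Wilcoxon signed-rank: paired comparison of CBCT vs MDCT ratings per case.
print(wilcoxon(cbct_reader1, mdct_reader1))

# Cohen's kappa: inter-reader agreement on the CBCT ratings.
print("kappa =", cohen_kappa_score(cbct_reader1, cbct_reader2))
```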

  15. Global/local processing of hierarchical visual stimuli in a conflict-choice task by capuchin monkeys (Sapajus spp.).

    PubMed

    Truppa, Valentina; Carducci, Paola; De Simone, Diego Antonio; Bisazza, Angelo; De Lillo, Carlo

    2017-03-01

    In the last two decades, comparative research has addressed the issue of how the global and local levels of structure of visual stimuli are processed by different species, using Navon-type hierarchical figures, i.e. smaller local elements that form larger global configurations. Determining whether or not the variety of procedures adopted to test different species with hierarchical figures are equivalent is of crucial importance to ensure comparability of results. Among non-human species, global/local processing has been extensively studied in tufted capuchin monkeys using matching-to-sample tasks with hierarchical patterns. Local dominance has emerged consistently in these New World primates. In the present study, we assessed capuchins' processing of hierarchical stimuli with a method frequently adopted in studies of global/local processing in non-primate species: the conflict-choice task. Different from the matching-to-sample procedure, this task involved processing local and global information retained in long-term memory. Capuchins were trained to discriminate between consistent hierarchical stimuli (similar global and local shape) and then tested with inconsistent hierarchical stimuli (different global and local shapes). We found that capuchins preferred the hierarchical stimuli featuring the correct local elements rather than those with the correct global configuration. This finding confirms that capuchins' local dominance, typically observed using matching-to-sample procedures, is also expressed as a local preference in the conflict-choice task. Our study adds to the growing body of comparative studies on visual grouping functions by demonstrating that the methods most frequently used in the literature on global/local processing produce analogous results irrespective of extent of the involvement of memory processes.

  16. Patterned-string tasks: relation between fine motor skills and visual-spatial abilities in parrots.

    PubMed

    Krasheninnikova, Anastasia

    2013-01-01

    String-pulling and patterned-string tasks are often used to analyse perceptual and cognitive abilities in animals. In addition, the paradigm can be used to test the interrelation between visual-spatial and motor performance. Two Australian parrot species, the galah (Eolophus roseicapilla) and the cockatiel (Nymphicus hollandicus), forage on the ground, but only the galah uses its feet to manipulate food. I used a set of string pulling and patterned-string tasks to test whether usage of the feet during foraging is a prerequisite for solving the vertical string pulling problem. Indeed, the two species used techniques that clearly differed in the extent of beak-foot coordination but did not differ in terms of their success in solving the string pulling task. However, when the visual-spatial skills of the subjects were tested, the galahs outperformed the cockatiels. This supports the hypothesis that the fine motor skills needed for advanced beak-foot coordination may be interrelated with certain visual-spatial abilities needed for solving patterned-string tasks. This pattern was also found within each of the two species on the individual level: higher motor abilities positively correlated with performance in patterned-string tasks. This is the first evidence of an interrelation between visual-spatial and motor abilities in non-mammalian animals.

  17. Metacognition in Monkeys during an Oculomotor Task

    ERIC Educational Resources Information Center

    Middlebrooks, Paul G.; Sommer, Marc A.

    2011-01-01

    This study investigated whether rhesus monkeys show evidence of metacognition in a reduced, visual oculomotor task that is particularly suitable for use in fMRI and electrophysiology. The 2-stage task involved punctate visual stimulation and saccadic eye movement responses. In each trial, monkeys made a decision and then made a bet. To earn…

  18. Inhibition in Dot Comparison Tasks

    ERIC Educational Resources Information Center

    Clayton, Sarah; Gilmore, Camilla

    2015-01-01

    Dot comparison tasks are commonly used to index an individual's Approximate Number System (ANS) acuity, but the cognitive processes involved in completing these tasks are poorly understood. Here, we investigated how factors including numerosity ratio, set size and visual cues influence task performance. Forty-four children aged 7-9 years completed…

  19. Walking with eyes closed is easier than walking with eyes open without visual cues: The Romberg task versus the goggle task.

    PubMed

    Yelnik, A P; Tasseel Ponche, S; Andriantsifanetra, C; Provost, C; Calvalido, A; Rougier, P

    2015-12-01

    The Romberg test, with the subject standing and with eyes closed, gives diagnostic arguments for a proprioceptive disorder. Closing the eyes is also used in balance rehabilitation as a main way to stimulate neural plasticity with proprioceptive, vestibular and even cerebellar disorders. Nevertheless, standing and walking with eyes closed or with eyes open in the dark are certainly 2 different tasks. We aimed to compare walking with eyes open, closed and wearing black or white goggles in healthy subjects. A total of 50 healthy participants were randomly divided into 2 protocols and asked to walk on a 5-m pressure-sensitive mat, under 3 conditions: (1) eyes open (EO), eyes closed (EC) and eyes open with black goggles (BG) and (2) EO, EO with BG and with white goggles (WG). Gait was described by velocity (m·s⁻¹), double support (% gait cycle), gait variability index (GVI/100) and exit from the mat (%). Analysis involved repeated-measures ANOVA, Holm-Sidak's multiple comparisons test for parametric parameters (GVI) and Dunn's multiple comparisons test for non-parametric parameters. As compared with walking with EC, walking with BG produced lower median velocity, by 6% (EO 1.26; BG 1.01 vs EC 1.07 m·s⁻¹, P=0.0328), and lower mean GVI, by 8% (EO 91.8; BG 66.8 vs EC 72.24, P=0.009). Parameters did not differ between walking under the BG and WG conditions. The goggle task increases the difficulty in walking with visual deprivation compared to the Romberg task, so the goggle task can be proposed to gradually increase the difficulty in walking with visual deprivation (from eyes closed to eyes open in black goggles). Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  20. Psychophysical testing of visual prosthetic devices: a call to establish a multi-national joint task force

    NASA Astrophysics Data System (ADS)

    Rizzo, Joseph F., III; Ayton, Lauren N.

    2014-04-01

    Recent advances in the field of visual prostheses, as showcased in this special feature of Journal of Neural Engineering , have led to promising results from clinical trials of a number of devices. However, as noted by these groups there are many challenges involved in assessing vision of people with profound vision loss. As such, it is important that there is consistency in the methodology and reporting standards for clinical trials of visual prostheses and, indeed, the broader vision restoration research field. Two visual prosthesis research groups, the Boston Retinal Implant Project (BRIP) and Bionic Vision Australia (BVA), have agreed to work cooperatively to establish a multi-national Joint Task Force. The aim of this Task Force will be to develop a consensus statement to guide the methods used to conduct and report psychophysical and clinical results of humans who receive visual prosthetic devices. The overarching goal is to ensure maximum benefit to the implant recipients, not only in the outcomes of the visual prosthesis itself, but also in enabling them to obtain accurate information about this research with ease. The aspiration to develop a Joint Task Force was first promulgated at the inaugural 'The Eye and the Chip' meeting in September 2000. This meeting was established to promote the development of the visual prosthetic field by applying the principles of inclusiveness, openness, and collegiality among the growing body of researchers in this field. These same principles underlie the intent of this Joint Task Force to enhance the quality of psychophysical research within our community. Despite prior efforts, a critical mass of interested parties could not congeal. Renewed interest for developing joint guidelines has developed recently because of a growing awareness of the challenges of obtaining reliable measurements of visual function in patients who are severely visually impaired (in whom testing is inherently noisy), and of the importance of

  1. Cortical networks involved in visual awareness independent of visual attention.

    PubMed

    Webb, Taylor W; Igelström, Kajsa M; Schurger, Aaron; Graziano, Michael S A

    2016-11-29

    It is now well established that visual attention, as measured with standard spatial attention tasks, and visual awareness, as measured by report, can be dissociated. It is possible to attend to a stimulus with no reported awareness of the stimulus. We used a behavioral paradigm in which people were aware of a stimulus in one condition and unaware of it in another condition, but the stimulus drew a similar amount of spatial attention in both conditions. The paradigm allowed us to test for brain regions active in association with awareness independent of level of attention. Participants performed the task in an MRI scanner. We looked for brain regions that were more active in the aware than the unaware trials. The largest cluster of activity was obtained in the temporoparietal junction (TPJ) bilaterally. Local independent component analysis (ICA) revealed that this activity contained three distinct, but overlapping, components: a bilateral, anterior component; a left dorsal component; and a right dorsal component. These components had brain-wide functional connectivity that partially overlapped the ventral attention network and the frontoparietal control network. In contrast, no significant activity in association with awareness was found in the banks of the intraparietal sulcus, a region connected to the dorsal attention network and traditionally associated with attention control. These results show the importance of separating awareness and attention when testing for cortical substrates. They are also consistent with a recent proposal that awareness is associated with ventral attention areas, especially in the TPJ.

  2. Functional Activation during the Rapid Visual Information Processing Task in a Middle Aged Cohort: An fMRI Study.

    PubMed

    Neale, Chris; Johnston, Patrick; Hughes, Matthew; Scholey, Andrew

    2015-01-01

    The Rapid Visual Information Processing (RVIP) task, a serial discrimination task in which task performance is believed to reflect sustained attention capabilities, is widely used in behavioural research and increasingly in neuroimaging studies. To date, functional neuroimaging research into the RVIP has been undertaken using block analyses, reflecting the sustained processing involved in the task, but not necessarily the transient processes associated with individual trial performance. Furthermore, this research has been limited to young cohorts. This study assessed the behavioural and functional magnetic resonance imaging (fMRI) outcomes of the RVIP task using both block and event-related analyses in a healthy middle aged cohort (mean age = 53.56 years, n = 16). The results show that the version of the RVIP used here is sensitive to changes in attentional demand processes with participants achieving a 43% accuracy hit rate in the experimental task compared with 96% accuracy in the control task. As shown by previous research, the block analysis revealed an increase in activation in a network of frontal, parietal, occipital and cerebellar regions. The event-related analysis showed a similar network of activation, seemingly omitting regions involved in the processing of the task (as shown in the block analysis), such as occipital areas and the thalamus, providing an indication of a network of regions involved in correct trial performance. Frontal (superior and inferior frontal gyri), parietal (precuneus, inferior parietal lobe) and cerebellar regions were shown to be active in both the block and event-related analyses, suggesting their importance in sustained attention/vigilance. These networks and the differences between them are discussed in detail, as well as implications for future research in middle aged cohorts.

  3. More insight into the interplay of response selection and visual attention in dual-tasks: masked visual search and response selection are performed in parallel.

    PubMed

    Reimer, Christina B; Schubert, Torsten

    2017-09-15

    Both response selection and visual attention are limited in capacity. According to the central bottleneck model, the response selection processes of two tasks in a dual-task situation are performed sequentially. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color and form), which results in a serial search process. Search time increases as items are added to the search display (i.e., set size effect). When the search display is masked, visual attention deployment is restricted to a brief period of time and target detection decreases as a function of set size. Here, we investigated whether response selection and visual attention (i.e., feature binding) rely on a common or on distinct capacity limitations. In four dual-task experiments, participants completed an auditory Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (Stimulus Onset Asynchrony, SOA). In Experiment 1, Task 1 was a two-choice discrimination task and the conjunction search display was not masked. In Experiment 2, the response selection difficulty in Task 1 was increased to a four-choice discrimination and the search task was the same as in Experiment 1. We applied the locus-of-slack method in both experiments to analyze conjunction search time, that is, we compared the set size effects across SOAs. Similar set size effects across SOAs (i.e., additive effects of SOA and set size) would indicate sequential processing of response selection and visual attention. However, a significantly smaller set size effect at short SOA compared to long SOA (i.e., underadditive interaction of SOA and set size) would indicate parallel processing of response selection and visual attention. In both experiments, we found underadditive interactions of SOA and set size. In Experiments 3 and 4, the conjunction search display in Task 2 was masked. Task 1 was the same as in Experiments 1 and 2
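
    The locus-of-slack logic referred to above can be illustrated with a toy calculation (hypothetical reaction times, not the study's data): compare the Task 2 set size effect at short versus long SOA; a smaller effect at short SOA (underadditive interaction) suggests the search overlapped with Task 1 response selection, while equal effects (additivity) would suggest sequential central processing.

```python
# Toy illustration of the locus-of-slack comparison (made-up mean RTs, in ms).
rt = {
    ("short", 4): 950, ("short", 8): 1010,   # 60 ms set size effect at short SOA
    ("long", 4): 620,  ("long", 8): 740,     # 120 ms set size effect at long SOA
}

effect_short = rt[("short", 8)] - rt[("short", 4)]
effect_long = rt[("long", 8)] - rt[("long", 4)]
print(f"set size effect: short SOA = {effect_short} ms, long SOA = {effect_long} ms")
if effect_short < effect_long:
    print("underadditive interaction -> search overlaps with response selection")
else:
    print("additive effects -> sequential central processing")
```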

  4. Normal Performance in Non-Visual Social Cognition Tasks in Women with Turner Syndrome.

    PubMed

    Anaki, David; Zadikov-Mor, Tal; Gepstein, Vardit; Hochberg, Ze'ev

    2018-01-01

    Turner syndrome (TS) is a chromosomal disorder in women resulting from a partial or complete absence of the X chromosome. In addition to physical and hormonal dysfunctions, along with a unique neurocognitive profile, women with TS are reported to suffer from social functioning difficulties. Yet, it is unclear whether these difficulties stem from impairments in social cognition per se or from other deficits that characterize TS but are not specific to social cognition. Previous research that has probed social functioning in TS is equivocal regarding the source of these psychosocial problems since they have mainly used tasks that were dependent on visual-spatial skills, which are known to be compromised in TS. In the present study, we tested 26 women with TS and 26 matched participants on three social cognition tasks that did not require any visual-spatial capacities but rather relied on auditory-verbal skills. The results revealed that in all three tasks the TS participants did not differ from their control counterparts. The same TS cohort was found, in an earlier study, to be impaired, relative to controls, in other social cognition tasks that were dependent on visual-spatial skills. Taken together these findings suggest that the social problems, documented in TS, may be related to non-specific spatial-visual factors that affect their social cognition skills.

  5. A cognitive task analysis of a visual analytic workflow: Exploring molecular interaction networks in systems biology.

    PubMed

    Mirel, Barbara; Eichinger, Felix; Keller, Benjamin J; Kretzler, Matthias

    2011-03-21

    Bioinformatics visualization tools are often not robust enough to support biomedical specialists’ complex exploratory analyses. Tools need to accommodate the workflows that scientists actually perform for specific translational research questions. To understand and model one of these workflows, we conducted a case-based, cognitive task analysis of a biomedical specialist’s exploratory workflow for the question: What functional interactions among gene products of high throughput expression data suggest previously unknown mechanisms of a disease? From our cognitive task analysis four complementary representations of the targeted workflow were developed. They include: usage scenarios, flow diagrams, a cognitive task taxonomy, and a mapping between cognitive tasks and user-centered visualization requirements. The representations capture the flows of cognitive tasks that led a biomedical specialist to inferences critical to hypothesizing. We created representations at levels of detail that could strategically guide visualization development, and we confirmed this by making a trial prototype based on user requirements for a small portion of the workflow. Our results imply that visualizations should make available to scientific users “bundles of features” consonant with the compositional cognitive tasks purposefully enacted at specific points in the workflow. We also highlight certain aspects of visualizations that: (a) need more built-in flexibility; (b) are critical for negotiating meaning; and (c) are necessary for essential metacognitive support.

  6. A Multi-Area Stochastic Model for a Covert Visual Search Task.

    PubMed

    Schwemmer, Michael A; Feng, Samuel F; Holmes, Philip J; Gottlieb, Jacqueline; Cohen, Jonathan D

    2015-01-01

    Decisions typically comprise several elements. For example, attention must be directed towards specific objects, their identities recognized, and a choice made among alternatives. Pairs of competing accumulators and drift-diffusion processes provide good models of evidence integration in two-alternative perceptual choices, but more complex tasks requiring the coordination of attention and decision making involve multistage processing and multiple brain areas. Here we consider a task in which a target is located among distractors and its identity reported by lever release. The data comprise reaction times, accuracies, and single unit recordings from two monkeys' lateral intraparietal area (LIP) neurons. LIP firing rates distinguish between targets and distractors, exhibit stimulus set size effects, and show response-hemifield congruence effects. These data motivate our model, which uses coupled sets of leaky competing accumulators to represent processes hypothesized to occur in feature-selective areas and limb motor and pre-motor areas, together with the visual selection process occurring in LIP. Model simulations capture the electrophysiological and behavioral data, and fitted parameters suggest that different connection weights between LIP and the other cortical areas may account for the observed behavioral differences between the animals.
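
    The following Python sketch simulates a single set of leaky competing accumulators of the general kind the abstract describes; it is a minimal, single-stage illustration with arbitrary parameter values, not the multi-area model fitted in the paper.

```python
# Minimal leaky competing accumulator (LCA) sketch; parameters are illustrative.
import numpy as np

def lca_trial(inputs, leak=0.2, inhibition=0.3, noise=0.1,
              dt=0.01, threshold=1.0, max_steps=2000, rng=None):
    """Race a set of leaky, mutually inhibiting accumulators to threshold."""
    rng = rng or np.random.default_rng()
    x = np.zeros(len(inputs))
    for step in range(max_steps):
        others = x.sum() - x                       # lateral inhibition from competitors
        dx = (inputs - leak * x - inhibition * others) * dt \
             + noise * np.sqrt(dt) * rng.standard_normal(len(x))
        x = np.maximum(x + dx, 0.0)                # activations stay non-negative
        if x.max() >= threshold:
            return int(x.argmax()), (step + 1) * dt   # choice, decision time (s)
    return None, max_steps * dt                       # no decision within the time limit

choice, rt = lca_trial(np.array([1.2, 1.0, 1.0]), rng=np.random.default_rng(2))
print(f"chosen accumulator: {choice}, decision time: {rt:.2f} s")
```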

  7. Attention effects on the processing of task-relevant and task-irrelevant speech sounds and letters

    PubMed Central

    Mittag, Maria; Inauri, Karina; Huovilainen, Tatu; Leminen, Miika; Salo, Emma; Rinne, Teemu; Kujala, Teija; Alho, Kimmo

    2013-01-01

    We used event-related brain potentials (ERPs) to study effects of selective attention on the processing of attended and unattended spoken syllables and letters. Participants were presented with syllables randomly occurring in the left or right ear and spoken by different voices and with a concurrent foveal stream of consonant letters written in darker or lighter fonts. During auditory phonological (AP) and non-phonological tasks, they responded to syllables in a designated ear starting with a vowel and spoken by female voices, respectively. These syllables occurred infrequently among standard syllables starting with a consonant and spoken by male voices. During visual phonological and non-phonological tasks, they responded to consonant letters with names starting with a vowel and to letters written in dark fonts, respectively. These letters occurred infrequently among standard letters with names starting with a consonant and written in light fonts. To examine genuine effects of attention and task on ERPs not overlapped by ERPs associated with target processing or deviance detection, these effects were studied only in ERPs to auditory and visual standards. During selective listening to syllables in a designated ear, ERPs to the attended syllables were negatively displaced during both phonological and non-phonological auditory tasks. Selective attention to letters elicited an early negative displacement and a subsequent positive displacement (Pd) of ERPs to attended letters being larger during the visual phonological than non-phonological task suggesting a higher demand for attention during the visual phonological task. Active suppression of unattended speech during the AP and non-phonological tasks and during the visual phonological tasks was suggested by a rejection positivity (RP) to unattended syllables. We also found evidence for suppression of the processing of task-irrelevant visual stimuli in visual ERPs during auditory tasks involving left-ear syllables

  8. Performance in a Visual Search Task Uniquely Predicts Reading Abilities in Third-Grade Hong Kong Chinese Children

    ERIC Educational Resources Information Center

    Liu, Duo; Chen, Xi; Chung, Kevin K. H.

    2015-01-01

    This study examined the relation between the performance in a visual search task and reading ability in 92 third-grade Hong Kong Chinese children. The visual search task, which is considered a measure of visual-spatial attention, accounted for unique variance in Chinese character reading after controlling for age, nonverbal intelligence,…

  9. Sonification and haptic feedback in addition to visual feedback enhances complex motor task learning.

    PubMed

    Sigrist, Roland; Rauter, Georg; Marchal-Crespo, Laura; Riener, Robert; Wolf, Peter

    2015-03-01

    Concurrent augmented feedback has been shown to be less effective for learning simple motor tasks than for complex tasks. However, as mostly artificial tasks have been investigated, transfer of results to tasks in sports and rehabilitation remains unknown. Therefore, in this study, the effect of different concurrent feedback was evaluated in trunk-arm rowing. It was then investigated whether multimodal audiovisual and visuohaptic feedback are more effective for learning than visual feedback only. Naïve subjects (N = 24) trained in three groups on a highly realistic virtual reality-based rowing simulator. In the visual feedback group, the subject's oar was superimposed to the target oar, which continuously became more transparent when the deviation between the oars decreased. Moreover, a trace of the subject's trajectory emerged if deviations exceeded a threshold. The audiovisual feedback group trained with oar movement sonification in addition to visual feedback to facilitate learning of the velocity profile. In the visuohaptic group, the oar movement was inhibited by path deviation-dependent braking forces to enhance learning of spatial aspects. All groups significantly decreased the spatial error (tendency in visual group) and velocity error from baseline to the retention tests. Audiovisual feedback fostered learning of the velocity profile significantly more than visuohaptic feedback. The study revealed that well-designed concurrent feedback fosters complex task learning, especially if the advantages of different modalities are exploited. Further studies should analyze the impact of within-feedback design parameters and the transferability of the results to other tasks in sports and rehabilitation.

  10. A dual-task investigation of automaticity in visual word processing

    NASA Technical Reports Server (NTRS)

    McCann, R. S.; Remington, R. W.; Van Selst, M.

    2000-01-01

    An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.

  11. Visual Tasks and Postural Sway in Children with and without Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Chang, Chih-Hui; Wade, Michael G.; Stoffregen, Thomas A.; Hsu, Chin-Yu; Pan, Chien-Yu

    2010-01-01

    We investigated the influences of two different suprapostural visual tasks, visual searching and visual inspection, on the postural sway of children with and without autism spectrum disorder (ASD). Sixteen ASD children (age = 8.75 ± 1.34 years; height = 130.34 ± 11.03 cm) were recruited from a local support group.…

  12. Visual pathways from the perspective of cost functions and multi-task deep neural networks.

    PubMed

    Scholte, H Steven; Losch, Max M; Ramakrishnan, Kandan; de Haan, Edward H F; Bohte, Sander M

    2018-01-01

    Vision research has been shaped by the seminal insight that we can understand the higher-tier visual cortex from the perspective of multiple functional pathways with different goals. In this paper, we try to give a computational account of the functional organization of this system by reasoning from the perspective of multi-task deep neural networks. Machine learning has shown that tasks become easier to solve when they are decomposed into subtasks with their own cost function. We hypothesize that the visual system optimizes multiple cost functions of unrelated tasks and this causes the emergence of a ventral pathway dedicated to vision for perception, and a dorsal pathway dedicated to vision for action. To evaluate the functional organization in multi-task deep neural networks, we propose a method that measures the contribution of a unit towards each task, applying it to two networks that have been trained on either two related or two unrelated tasks, using an identical stimulus set. Results show that the network trained on the unrelated tasks shows a decreasing degree of feature representation sharing towards higher-tier layers while the network trained on related tasks uniformly shows a high degree of sharing. We conjecture that the method we propose can be used to analyze the anatomical and functional organization of the visual system and beyond. We predict that the degree to which tasks are related is a good descriptor of the degree to which they share downstream cortical units. Copyright © 2017 Elsevier Ltd. All rights reserved.
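
    The paper's specific contribution measure is not reproduced here; as a hedged illustration of the general idea, the Python sketch below uses a simple ablation-based proxy: zero out one shared unit of a toy two-head network and record how much each task head's output changes. All weights, sizes, and names are made up.

```python
# Ablation-based proxy for per-task unit contribution in a toy multi-task network.
# This is NOT the measure proposed in the paper, just a simple way to probe sharing.
import numpy as np

rng = np.random.default_rng(3)
W_shared = rng.standard_normal((16, 32))     # shared layer: 16 inputs -> 32 units
W_taskA = rng.standard_normal((32, 5))       # task A head
W_taskB = rng.standard_normal((32, 5))       # task B head
X = rng.standard_normal((100, 16))           # a batch of stimuli

def heads(X, mask=None):
    h = np.tanh(X @ W_shared)
    if mask is not None:
        h = h * mask                         # ablate selected shared units
    return h @ W_taskA, h @ W_taskB

outA, outB = heads(X)
contrib = np.zeros((32, 2))
for unit in range(32):
    mask = np.ones(32)
    mask[unit] = 0.0                         # zero out one shared unit
    ablA, ablB = heads(X, mask)
    contrib[unit, 0] = np.abs(outA - ablA).mean()   # effect on task A output
    contrib[unit, 1] = np.abs(outB - ablB).mean()   # effect on task B output

# Units whose two contributions are similar are shared; lopsided units are task-specific.
sharing = contrib.min(axis=1) / (contrib.max(axis=1) + 1e-12)
print("mean sharing index across shared units:", sharing.mean().round(3))
```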

  13. Visual selective attention and reading efficiency are related in children.

    PubMed

    Casco, C; Tressoldi, P E; Dellantonio, A

    1998-09-01

    We investigated the relationship between visual selective attention and linguistic performance. Subjects were classified into four categories according to their accuracy in a letter cancellation task involving selective attention. The task consisted of searching for a target letter among a set of background letters, and accuracy was measured as a function of set size. We found that children with the lowest performance in the cancellation task present a significantly slower reading rate and a higher number of reading visual errors than children with the highest performance. Results also show that these groups of searchers present significant differences in a lexical search task, whereas their performance did not differ in lexical decision and syllable control tasks. The relationship between letter search and reading, as well as the finding that poor readers-searchers also perform poorly in lexical search tasks involving selective attention, suggests that the relationship between letter search and reading difficulty may reflect a deficit in a visual selective attention mechanism that is involved in all these tasks. A deficit in visual attention can be linked to the problems that disabled readers present in the function of the magnocellular stream, which culminates in the posterior parietal cortex, an area that plays an important role in guiding visual attention.

  14. Effects of Field of View and Visual Complexity on Virtual Reality Training Effectiveness for a Visual Scanning Task

    DOE PAGES

    Ragan, Eric D.; Bowman, Doug A.; Kopper, Regis; ...

    2015-02-13

    Virtual reality training systems are commonly used in a variety of domains, and it is important to understand how the realism of a training simulation influences training effectiveness. The paper presents a framework for evaluating the effects of virtual reality fidelity based on an analysis of a simulation’s display, interaction, and scenario components. Following this framework, we conducted a controlled experiment to test the effects of fidelity on training effectiveness for a visual scanning task. The experiment varied the levels of field of view and visual realism during a training phase and then evaluated scanning performance with the simulator’s highest level of fidelity. To assess scanning performance, we measured target detection and adherence to a prescribed strategy. The results show that both field of view and visual realism significantly affected target detection during training; higher field of view led to better performance and higher visual realism worsened performance. Additionally, the level of visual realism during training significantly affected learning of the prescribed visual scanning strategy, providing evidence that high visual realism was important for learning the technique. The results also demonstrate that task performance during training was not always a sufficient measure of mastery of an instructed technique. That is, if learning a prescribed strategy or skill is the goal of a training exercise, performance in a simulation may not be an appropriate indicator of effectiveness outside of training; evaluation in a more realistic setting may be necessary.

  15. Orienting attention to visual or verbal/auditory imagery differentially impairs the processing of visual stimuli.

    PubMed

    Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio

    2016-05-15

    When attention is oriented toward inner thoughts, as spontaneously occurs during mind wandering, the processing of external information is attenuated. However, the potential effects of thought's content regarding sensory attenuation are still unknown. The present study aims to assess if the representational format of thoughts, such as visual imagery or inner speech, might differentially affect the sensory processing of external stimuli. We recorded the brain activity of 20 participants (12 women) while they were exposed to a probe visual stimulus in three different conditions: executing a task on the visual probe (externally oriented attention), and two conditions involving inward-turned attention, i.e., generating inner speech and performing visual imagery. Event-related potential results showed that the P1 amplitude, related to the sensory response, was significantly attenuated during both tasks involving inward attention compared with the external task. When both representational formats were compared, the visual imagery condition showed stronger attenuation in sensory processing than the inner speech condition. Alpha power in visual areas was measured as an index of cortical inhibition. Larger alpha amplitude was found when participants engaged in an internal thought contrasted with the external task, with visual imagery showing even more alpha power than the inner speech condition. Our results show, for the first time to our knowledge, that visual attentional processing of external stimuli during self-generated thoughts is differentially affected by the representational format of the ongoing train of thoughts. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Nimodipine alters acquisition of a visual discrimination task in chicks.

    PubMed

    Deyo, R; Panksepp, J; Conner, R L

    1990-03-01

    Chicks 5 days old received intraperitoneal injections of nimodipine 30 min before training on either a visual discrimination task (0, 0.5, 1.0, or 5.0 mg/kg) or a test of separation-induced distress vocalizations (0, 0.5, or 2.5 mg/kg). Chicks receiving 1.0 mg/kg nimodipine made significantly fewer visual discrimination errors than vehicle controls by trials 41-60, but did not differ from controls 24 h later. Chicks in the 5 mg/kg group made significantly more errors when compared to controls both during acquisition of the task and during retention. Nimodipine did not alter separation-induced distress vocalizations at any of the doses tested, suggesting that nimodipine's effects on learning cannot be attributed to a reduction in separation distress. These data indicate that nimodipine's facilitation of learning in young subjects is dose dependent, but nimodipine failed to enhance retention.

  17. Coherent visualization of spatial data adapted to roles, tasks, and hardware

    NASA Astrophysics Data System (ADS)

    Wagner, Boris; Peinsipp-Byma, Elisabeth

    2012-06-01

    Modern crisis management requires users with different roles and computer environments to deal with a high volume of heterogeneous data from different sources. For this purpose, Fraunhofer IOSB has developed a geographic information system (GIS) which supports the user depending on the available data and the task they have to solve. The system provides merging and visualization of spatial data from various civilian and military sources. It supports the most common spatial data standards (OGC, STANAG) as well as some proprietary interfaces, regardless of whether these are file-based or database-based. To set the visualization rules, generic Styled Layer Descriptors (SLDs), an Open Geospatial Consortium (OGC) standard, are used. SLDs allow specifying which data are shown, when and how. The defined SLDs consider the users' roles and task requirements. In addition, it is possible to use different displays, and the visualization also adapts to the individual resolution of the display. Excessively high or low information density is avoided. Also, our system enables users with different roles to work together simultaneously using the same database. Every user is provided with the appropriate and coherent spatial data depending on their current task. These refined spatial data are served via the OGC Web Map Service (WMS; server-side rendered raster maps) or the Web Map Tile Service (WMTS; pre-rendered and cached raster maps).
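
    As a hedged illustration of how such styled map layers are typically requested from an OGC-compliant server (this is not Fraunhofer IOSB's system), the Python sketch below issues a standard WMS 1.1.1 GetMap request with the requests library; the endpoint URL, layer name, and style name are hypothetical, and a role-specific SLD could equally be supplied inline where the server implements the OGC SLD profile of WMS.

```python
# Hedged sketch: request a role-specific rendering of a layer from a WMS server.
# The endpoint, layer, and style names are hypothetical; real deployments differ.
import requests

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "ops:units",            # hypothetical layer name
    "STYLES": "commander_view",       # hypothetical named style (e.g. from a role SLD)
    "SRS": "EPSG:4326",
    "BBOX": "7.5,48.9,8.5,49.5",      # lon/lat bounding box of the area of interest
    "WIDTH": "1024",                  # match the client's display resolution
    "HEIGHT": "768",
    "FORMAT": "image/png",
}
# Servers implementing the OGC SLD profile of WMS also accept an inline style
# via an SLD or SLD_BODY parameter instead of a named style.
response = requests.get("https://example.org/geoserver/wms", params=params, timeout=30)
response.raise_for_status()
with open("map.png", "wb") as f:
    f.write(response.content)
```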

  18. The effects of task difficulty on visual search strategy in virtual 3D displays.

    PubMed

    Pomplun, Marc; Garaas, Tyler W; Carrasco, Marisa

    2013-08-28

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios.

  19. Perceptual learning in visual search: fast, enduring, but non-specific.

    PubMed

    Sireteanu, R; Rettenbach, R

    1995-07-01

    Visual search has been suggested as a tool for isolating visual primitives. Elementary "features" were proposed to involve parallel search, while serial search is necessary for items without a "feature" status, or, in some cases, for conjunctions of "features". In this study, we investigated the role of practice in visual search tasks. We found that, under some circumstances, initially serial tasks can become parallel after a few hundred trials. Learning in visual search is far less specific than learning of visual discriminations and hyperacuity, suggesting that it takes place at another level in the central visual pathway, involving different neural circuits.

  20. Choosing Your Poison: Optimizing Simulator Visual System Selection as a Function of Operational Tasks

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.; Kaiser, Mary K.

    2013-01-01

    Although current-technology simulator visual systems can achieve extremely realistic levels, they do not completely replicate the experience of a pilot sitting in the cockpit, looking at the outside world. Some differences in experience are due to visual artifacts, or perceptual features that would not be present in a naturally viewed scene. Others are due to features that are missing from the simulated scene. In this paper, these differences will be defined and discussed. The significance of these differences will be examined as a function of several particular operational tasks. A framework to facilitate the choice of visual system characteristics based on operational task requirements will be proposed.

  1. Brain activation in teenagers with isolated spelling disorder during tasks involving spelling assessment and comparison of pseudowords. fMRI study.

    PubMed

    Borkowska, Aneta Rita; Francuz, Piotr; Soluch, Paweł; Wolak, Tomasz

    2014-10-01

    The present study aimed at defining the specific traits of brain activation in teenagers with isolated spelling disorder in comparison with good spellers. An fMRI examination was performed in which the subject's task involved deciding (1) whether the visually presented words were spelled correctly or not (the orthographic decision task), and (2) whether the two presented letter strings (pseudowords) were identical or not (the visual decision task). Half of the displays showing meaningful words with an orthographic difficulty contained pairs with both words spelled correctly, and half of them contained one misspelled word. Half of the pseudowords were identical, half of them were not. The participants of the study included 15 individuals with isolated spelling disorder and 14 good spellers, aged 13-15. The results demonstrated that the essential differences in brain activation between teenagers with isolated spelling disorder and good spellers were found in the left inferior frontal gyrus, left medial frontal gyrus and right cerebellum posterior lobe, i.e. structures important for language processes, working memory and automaticity of behaviour. Spelling disorder is not only an effect of language dysfunction; it could be a symptom of difficulties in learning and automatizing the motor and visual shapes of written words, in rapid information processing, and in the automatic use of the orthographic lexicon. Copyright © 2013 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  2. Influence of age, speed and duration of monotonous driving task in traffic on the driver's useful visual field.

    PubMed

    Rogé, Joceline; Pébayle, Thierry; Lambilliotte, Elina; Spitzenstetter, Florence; Giselbrecht, Danièle; Muzet, Alain

    2004-10-01

    Recent research has shown that the useful visual field deteriorates in simulated car driving when the latter can induce a decrease in the level of activation. The first aim of this study was to verify if the same phenomenon occurs when driving is performed in a simulated road traffic situation. The second aim was to discover if this field also deteriorates as a function of the driver's age and of the vehicle's speed. Nine young drivers (from 22 to 34 years) and nine older drivers (from 46 to 59 years) followed a vehicle in road traffic during two two-hour sessions. The car-following task involved driving at 90 km·h⁻¹ (the speed limit on roads in France) in one session and at 130 km·h⁻¹ (the speed limit on motorways in France) in the other session. While following the vehicle, the driver had to detect the changes in colour of a luminous signal located in the central part of his/her visual field and a visual signal that appeared at different eccentricities on the rear lights of the vehicles in the traffic. The analysis of the data indicates that the useful visual field deteriorates with the prolongation of the monotonous simulated driving task, with the driver's age and with the vehicle's speed. The results are discussed in terms of general interference and tunnel vision.

  3. Hand Movement Deviations in a Visual Search Task with Cross Modal Cuing

    ERIC Educational Resources Information Center

    Aslan, Asli; Aslan, Hurol

    2007-01-01

    The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants'…

  4. Individual personality differences in goats predict their performance in visual learning and non-associative cognitive tasks.

    PubMed

    Nawroth, Christian; Prentice, Pamela M; McElligott, Alan G

    2017-01-01

    Variation in common personality traits, such as boldness or exploration, is often associated with risk-reward trade-offs and behavioural flexibility. To date, only a few studies have examined the effects of consistent behavioural traits on both learning and cognition. We investigated whether certain personality traits ('exploration' and 'sociability') of individuals were related to cognitive performance, learning flexibility and learning style in a social ungulate species, the goat (Capra hircus). We also investigated whether a preference for feature cues rather than impaired learning abilities can explain performance variation in a visual discrimination task. We found that personality scores were consistent across time and context. Less explorative goats performed better in a non-associative cognitive task, in which subjects had to follow the trajectory of a hidden object (i.e. testing their ability for object permanence). We also found that less sociable subjects performed better compared to more sociable goats in a visual discrimination task. Good visual learning performance was associated with a preference for feature cues, indicating personality-dependent learning strategies in goats. Our results suggest that personality traits predict the outcome in visual discrimination and non-associative cognitive tasks in goats and that impaired performance in a visual discrimination task does not necessarily imply impaired learning capacities, but rather can be explained by a varying preference for feature cues. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Do Visual Processing Deficits Cause Problem on Response Time Task for Dyslexics?

    ERIC Educational Resources Information Center

    Sigmundsson, H.

    2005-01-01

    This study set out to explore the prediction that dyslexics would be likely to have particular problems, compared to a control group, on a response time task when 'driving' a car simulator. The reason for doing so stems from the fact that there is a considerable body of research on visual processing difficulties manifested by dyslexics. The task was…

  6. The Emergence of Visual Awareness: Temporal Dynamics in Relation to Task and Mask Type

    PubMed Central

    Kiefer, Markus; Kammer, Thomas

    2017-01-01

    One aspect of consciousness phenomena, the temporal emergence of visual awareness, has been the subject of a controversial debate. How can visual awareness, that is, the experiential quality of visual stimuli, be characterized best? Is there a sharp discontinuous or dichotomous transition between unaware and fully aware states, or does awareness emerge gradually encompassing intermediate states? Previous studies yielded conflicting results and supported both dichotomous and gradual views. It is conceivable that these conflicting results are more than noise, but reflect the dynamic nature of the temporal emergence of visual awareness. Using a psychophysical approach, the present research tested whether the emergence of visual awareness is context-dependent with a temporal two-alternative forced choice task. During backward masking of word targets, it was assessed whether the relative temporal sequence of stimulus thresholds is modulated by the task (stimulus presence, letter case, lexical decision, and semantic category) and by mask type. Four masks with different similarity to the target features were created. Psychophysical functions were then fitted to the accuracy data in the different task conditions as a function of the stimulus-mask SOA in order to determine the inflection point (conscious threshold of each feature) and slope of the psychophysical function (transition from unaware to aware within each feature). Depending on feature-mask similarity, thresholds in the different tasks were either highly dispersed, suggesting a graded transition from unawareness to awareness, or less differentiated, indicating that clusters of features probed by the tasks contribute to the percept almost simultaneously. The latter observation, although not compatible with the notion of a sharp all-or-none transition between unaware and aware states, suggests a less gradual or more discontinuous emergence of awareness. Analyses of slopes of the fitted psychophysical functions
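
    As a minimal sketch of the curve-fitting step described above (hypothetical accuracy data, not the study's), the Python code below fits a logistic psychometric function to 2AFC accuracy as a function of stimulus-mask SOA and reads off the inflection point (threshold) and slope.

```python
# Fit a logistic psychometric function to fake 2AFC accuracy data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(soa, threshold, slope):
    """2AFC psychometric function rising from the 0.5 guess rate toward 1.0."""
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (soa - threshold)))

soa = np.array([10, 20, 30, 40, 60, 80, 120], dtype=float)          # ms
accuracy = np.array([0.52, 0.55, 0.63, 0.74, 0.88, 0.95, 0.98])     # made-up data

params, _ = curve_fit(logistic, soa, accuracy, p0=[40.0, 0.1])
threshold, slope = params
print(f"inflection point ≈ {threshold:.1f} ms, slope ≈ {slope:.3f} per ms")
```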

  7. Stimulus novelty, task relevance and the visual evoked potential in man

    NASA Technical Reports Server (NTRS)

    Courchesne, E.; Hillyard, S. A.; Galambos, R.

    1975-01-01

    The effect of task relevance on P3 (waveform of human evoked potential) waves and the methodologies used to deal with them are outlined. Visual evoked potentials (VEPs) were recorded from normal adult subjects performing in a visual discrimination task. Subjects counted the number of presentations of the numeral 4 which was interposed rarely and randomly within a sequence of tachistoscopically flashed background stimuli. Intrusive, task-irrelevant (not counted) stimuli were also interspersed rarely and randomly in the sequence of 2s; these stimuli were of two types: simples, which were easily recognizable, and novels, which were completely unrecognizable. It was found that the simples and the counted 4s evoked posteriorly distributed P3 waves while the irrelevant novels evoked large, frontally distributed P3 waves. These large, frontal P3 waves to novels were also found to be preceded by large N2 waves. These findings indicate that the P3 wave is not a unitary phenomenon but should be considered in terms of a family of waves, differing in their brain generators and in their psychological correlates.

  8. Biometric recognition via texture features of eye movement trajectories in a visual searching task.

    PubMed

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction and feature recognition methods are proposed to improve the performance of eye movement biometric system. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers' temporal and spatial resolution are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. In order to demonstrate the improvement of this visual searching task being used in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results as expected. In addition, the biometric performance of these four feature extraction methods was also compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.
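
    The two performance metrics named above can be computed as in the following hedged Python sketch, which uses made-up similarity scores rather than eye-movement data: the equal error rate (EER) is taken from the ROC where the false accept and false reject rates cross, and the Rank-1 identification rate is the fraction of probes whose highest-scoring gallery entry is the correct identity.

```python
# Illustrative EER and Rank-1 IR on fake similarity scores (not the paper's data).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(4)

# Verification: similarity scores for genuine and impostor comparisons.
genuine = rng.normal(0.7, 0.1, 500)
impostor = rng.normal(0.4, 0.1, 5000)
scores = np.concatenate([genuine, impostor])
labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])

fpr, tpr, _ = roc_curve(labels, scores)
fnr = 1.0 - tpr
eer = fpr[np.nanargmin(np.abs(fnr - fpr))]      # point where FAR ≈ FRR
print(f"EER ≈ {eer:.3f}")

# Identification: rows = probes, columns = gallery identities; the diagonal holds
# each probe's score against its true identity.
n = 50
sim = rng.normal(0.4, 0.1, (n, n))
sim[np.diag_indices(n)] = rng.normal(0.7, 0.1, n)
rank1_ir = (sim.argmax(axis=1) == np.arange(n)).mean()
print(f"Rank-1 IR ≈ {rank1_ir:.2f}")
```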

  9. Biometric recognition via texture features of eye movement trajectories in a visual searching task

    PubMed Central

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction methods and feature recognition methods have been proposed to improve the performance of eye-movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and of eye trackers' temporal and spatial resolution, are still the foremost considerations in eye movement biometrics. With a focus on these issues, we propose a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the improvement gained by using this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets; compared with their originally reported results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases. PMID:29617383

  10. The effects of task difficulty on visual search strategy in virtual 3D displays

    PubMed Central

    Pomplun, Marc; Garaas, Tyler W.; Carrasco, Marisa

    2013-01-01

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an “easy” conjunction search task and a “difficult” shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x−y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the “easy” task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the “difficult” task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend, in the easy task with the smallest displays, for initial saccades to be directed at the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539

  11. Problem Behavior and Developmental Tasks in Adolescents with Visual Impairment and Sighted Peers

    ERIC Educational Resources Information Center

    Pfeiffer, Jens P.; Pinquart, Martin

    2013-01-01

    This longitudinal study analyzed associations of problem behavior with the attainment of developmental tasks in 133 adolescents with visual impairment and 449 sighted peers. Higher levels of initial problem behavior predicted less progress in the attainment of developmental tasks at the one-year follow-up only in sighted adolescents. This…

  12. Analyzing Web pages visual scanpaths: between and within tasks variability.

    PubMed

    Drusch, Gautier; Bastien, J M Christian

    2012-01-01

    In this paper, we propose a new method for comparing scanpaths in a bottom-up approach, and a test of scanpath theory. To do so, we conducted a laboratory experiment in which 113 participants were invited to accomplish a set of tasks on two different websites. For each site, they had to perform two tasks, each of which had to be repeated once. The data were analyzed using a procedure similar to the one used by Duchowski et al. [8]. The first step was to automatically identify, then label, AOIs with the mean-shift clustering procedure [19]. Then, scanpaths were compared two by two with a modified version of the string-edit method, which takes into account the order in which AOIs are visited [2]. Our results show that scanpath variability between tasks but within participants seems to be lower than the variability within a task for a given participant. In other words, participants seem to be more consistent when they perform different tasks than when they repeat the same task. In addition, participants viewed more of the same AOIs when they performed a different task on the same Web page than when they repeated the same task. These results are quite different from what scanpath theory predicts.
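
    The comparison step above, once fixations have been assigned AOI labels, can be sketched with a plain string-edit (Levenshtein) distance. The paper uses a modified variant of the method, and the AOI labels below are hypothetical; this is only the standard version for illustration.

      # Minimal sketch: string-edit (Levenshtein) distance between two scanpaths
      # represented as sequences of AOI labels.
      def levenshtein(seq_a, seq_b):
          """Minimum number of insertions, deletions and substitutions."""
          prev = list(range(len(seq_b) + 1))
          for i, a in enumerate(seq_a, start=1):
              curr = [i]
              for j, b in enumerate(seq_b, start=1):
                  cost = 0 if a == b else 1
                  curr.append(min(prev[j] + 1,          # deletion
                                  curr[j - 1] + 1,      # insertion
                                  prev[j - 1] + cost))  # substitution
              prev = curr
          return prev[-1]

      # Hypothetical AOI-labelled scanpaths from two task repetitions.
      scanpath_1 = ["header", "menu", "content", "content", "footer"]
      scanpath_2 = ["header", "content", "menu", "footer"]
      print(levenshtein(scanpath_1, scanpath_2))  # -> 2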

  13. Visual tasks and postural sway in children with and without autism spectrum disorders.

    PubMed

    Chang, Chih-Hui; Wade, Michael G; Stoffregen, Thomas A; Hsu, Chin-Yu; Pan, Chien-Yu

    2010-01-01

    We investigated the influences of two different suprapostural visual tasks, visual searching and visual inspection, on the postural sway of children with and without autism spectrum disorder (ASD). Sixteen ASD children (age=8.75±1.34 years; height=130.34±11.03 cm) were recruited from a local support group. Individuals with an intellectual disability as a co-occurring condition and those with severe behavior problems that required formal intervention were excluded. Twenty-two sex- and age-matched typically developing (TD) children (age=8.93±1.39 years; height=133.47±8.21 cm) were recruited from a local public elementary school. Postural sway was recorded using a magnetic tracking system (Flock of Birds, Ascension Technologies, Inc., Burlington, VT). Results indicated that the ASD children exhibited greater sway than the TD children. Despite this difference, both TD and ASD children showed reduced sway during the search task, relative to sway during the inspection task. These findings replicate those of Stoffregen et al. (2000), Stoffregen, Giveans, et al. (2009), Stoffregen, Villard, et al. (2009) and Prado et al. (2007) and extend them to TD children as well as ASD children. Both TD and ASD children were able to functionally modulate postural sway to facilitate the performance of a task that required higher perceptual effort. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. The Modulation of Visual and Task Characteristics of a Writing System on Hemispheric Lateralization in Visual Word Recognition--A Computational Exploration

    ERIC Educational Resources Information Center

    Hsiao, Janet H.; Lam, Sze Man

    2013-01-01

    Through computational modeling, here we examine whether visual and task characteristics of writing systems alone can account for lateralization differences in visual word recognition between different languages without assuming influence from left hemisphere (LH) lateralized language processes. We apply a hemispheric processing model of face…

  15. A comparison of kinesthetic-tactual and visual displays via a critical tracking task. [for aircraft control

    NASA Technical Reports Server (NTRS)

    Jagacinski, R. J.; Miller, D. P.; Gilson, R. D.

    1979-01-01

    The feasibility of using the critical tracking task to evaluate kinesthetic-tactual displays was examined. The test subjects were asked to control a first-order unstable system with a continuously decreasing time constant by using either visual or tactual unidimensional displays. The results indicate that the critical tracking task is both a feasible and a reliable methodology for assessing tactual tracking. Further, the approximately equal effects of quickening for the tactual and visual displays demonstrate that the critical tracking methodology is as sensitive and valid a measure of tactual tracking as it is of visual tracking.
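
    The controlled element in a critical tracking task can be illustrated with a toy simulation: a first-order unstable plant whose instability grows (so its time constant shrinks) until control is lost. The gains, delay, noise level, and loss-of-control criterion below are illustrative assumptions, and the human operator is replaced by a simple delayed proportional controller; this is not the original apparatus.

      # Toy simulation of a critical tracking plant: dy/dt = lambda*y + u, with
      # lambda increasing over time and a delayed proportional "operator".
      import numpy as np

      rng = np.random.default_rng(0)
      dt = 0.01                      # simulation step (s)
      lam, lam_rate = 0.5, 0.05      # initial instability (1/s) and its growth rate
      gain, delay_steps = 2.0, 15    # operator model: proportional gain, ~150 ms lag
      history = [0.0] * delay_steps  # buffer of past displayed errors
      y = 0.05                       # displayed error

      for step in range(100_000):
          u = -gain * history[0]                              # delayed corrective input
          y += dt * (lam * y + u) + rng.normal(0.0, 1e-3)     # plant update plus remnant noise
          history = history[1:] + [y]
          lam += lam_rate * dt                                # the task gets steadily harder
          if abs(y) > 1.0:                                    # control lost: critical lambda reached
              print(f"critical lambda ~ {lam:.2f} 1/s after {step * dt:.1f} s")
              break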

  16. The effect of changing the secondary task in dual-task paradigms for measuring listening effort.

    PubMed

    Picou, Erin M; Ricketts, Todd A

    2014-01-01

    In Experiment 1 (listeners with normal hearing), analysis of median reaction times revealed a significant main effect of background noise on listening effort only with the paradigm that required deep processing. Visual cues did not change listening effort as measured with any of the three dual-task paradigms. In Experiment 2 (listeners with hearing loss), analysis of median reaction times revealed expected significant effects of background noise using all three paradigms, but no significant effects of visual cues. None of the dual-task paradigms were sensitive to the effects of visual cues. Furthermore, changing the complexity of the secondary task did not change dual-task paradigm sensitivity to the effects of background noise on listening effort for either group of listeners. However, the paradigm whose secondary task involved deeper processing was more sensitive to the effects of background noise for both groups of listeners. While this paradigm differed from the others in several respects, depth of processing may be partially responsible for the increased sensitivity. Therefore, this paradigm may be a valuable tool for evaluating other factors that affect listening effort.

  17. Reduced posterior parietal cortex activation after training on a visual search task.

    PubMed

    Bueichekú, Elisenda; Miró-Padilla, Anna; Palomar-García, María-Ángeles; Ventura-Campos, Noelia; Parcet, María-Antonia; Barrós-Loscertales, Alfonso; Ávila, César

    2016-07-15

    Gaining experience on a cognitive task improves behavioral performance and is thought to enhance brain efficiency. Despite the body of literature already published on the effects of training on brain activation, less research has been carried out on visual search attention processes under well controlled conditions. Thirty-six healthy adults divided into trained and control groups completed a pre-post letter-based visual search task fMRI study in one day. Twelve letters were used as targets and ten as distractors. The trained group completed a training session (840 trials) with half the targets between scans. The effects of training were studied at the behavioral and brain levels by controlling for repetition effects using both between-subjects (trained vs. control groups) and within-subject (trained vs. untrained targets) controls. The trained participants reduced their response times by 31% as a result of training, maintaining their accuracy scores, whereas the control group hardly changed. Neural results revealed that brain changes associated with visual search training were circumscribed to reduced activation in the posterior parietal cortex (PPC) when controlling for group, and they included inferior occipital areas when controlling for targets. The observed behavioral and brain changes are discussed in relation to automatic behavior development. The observed training-related decreases could be associated with increased neural efficiency in specific key regions for task performance. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Strength of figure-ground activity in monkey primary visual cortex predicts saccadic reaction time in a delayed detection task.

    PubMed

    Supèr, Hans; Lamme, Victor A F

    2007-06-01

    When and where are decisions made? In the visual system a saccade, which is a fast shift of gaze toward a target in the visual scene, is the behavioral outcome of a decision. Current neurophysiological data and reaction time models show that saccadic reaction times are determined by a build-up of activity in motor-related structures, such as the frontal eye fields. These structures depend on the sensory evidence of the stimulus. Here we use a delayed figure-ground detection task to show that late modulated activity in the visual cortex (V1) predicts saccadic reaction time. This predictive activity is part of the process of figure-ground segregation and is specific for the saccade target location. These observations indicate that sensory signals are directly involved in the decision of when and where to look.

  19. Does proactive interference play a significant role in visual working memory tasks?

    PubMed

    Makovski, Tal

    2016-10-01

    Visual working memory (VWM) is an online memory buffer that is typically assumed to be immune to source memory confusions. Accordingly, the few studies that have investigated the role of proactive interference (PI) in VWM tasks found only a modest PI effect at best. In contrast, a recent study has found a substantial PI effect in that performance in a VWM task was markedly improved when all memory items were unique compared to the more standard condition in which only a limited set of objects was used. The goal of the present study was to reconcile this discrepancy between the findings, and to scrutinize the extent to which PI is involved in VWM tasks. Experiments 1-2 showed that the robust advantage in using unique memory items can also be found in a within-subject design and is largely independent of set size, encoding duration, or intertrial interval. Importantly, however, PI was found mainly when all items were presented at the same location, and the effect was greatly diminished when the items were presented, either simultaneously (Experiment 3) or sequentially (Experiments 4-5), at distinct locations. These results indicate that PI is spatially specific and that without the assistance of spatial information VWM is not protected from PI. Thus, these findings imply that spatial information plays a key role in VWM, and underscore the notion that VWM is more vulnerable to interference than is typically assumed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. Eye vergence responses during a visual memory task.

    PubMed

    Solé Puig, Maria; Romeo, August; Cañete Crespillo, Jose; Supèr, Hans

    2017-02-08

    In a previous report it was shown that covertly attending to visual stimuli produces a small convergence of the eyes, and that visual stimuli can give rise to different modulations of the angle of eye vergence, depending on their power to capture attention. Working memory is highly dependent on attention. Therefore, in this study we assessed vergence responses in a memory task. Participants scanned a set of 8 or 12 images for 10 s, and thereafter were presented with a series of single images. One half were repeat images - that is, they belonged to the initial set - and the other half were novel images. Participants were asked to indicate whether or not the images were included in the initial image set. We observed that the eyes converge during scanning of the image set and during the presentation of the single images. The convergence was stronger for remembered images than for nonremembered images. Modulation in pupil size did not correspond to behavioural responses. The correspondence between vergence and the coding/retrieval processes of memory strengthens the idea of a role for vergence in attentional processing of visual information.

  1. Long-term Recurrent Convolutional Networks for Visual Recognition and Description

    DTIC Science & Technology

    2014-11-17

    Models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large... A limitation of simple RNN models which strictly integrate state information over time is known as the "vanishing gradient" effect: the ability to
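
    The architecture family named in this record, per-frame convolutional features fed into a recurrent network, can be sketched as follows. This is an illustrative approximation in PyTorch, not the report's implementation; the layer sizes, class count, and input shape are made up.

      # Sketch of a recurrent convolutional network: a small CNN encodes each frame,
      # an LSTM integrates the frame features over time, and a linear head classifies.
      import torch
      import torch.nn as nn

      class RecurrentConvNet(nn.Module):
          def __init__(self, num_classes=10, feat_dim=128, hidden_dim=256):
              super().__init__()
              self.cnn = nn.Sequential(                 # per-frame feature extractor
                  nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),
                  nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                  nn.Linear(64, feat_dim),
              )
              self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
              self.head = nn.Linear(hidden_dim, num_classes)

          def forward(self, frames):                    # frames: (batch, time, 3, H, W)
              b, t = frames.shape[:2]
              feats = self.cnn(frames.reshape(b * t, *frames.shape[2:])).reshape(b, t, -1)
              out, _ = self.rnn(feats)                  # integrate information over time
              return self.head(out[:, -1])              # logits from the last time step

      logits = RecurrentConvNet()(torch.randn(2, 8, 3, 64, 64))
      print(logits.shape)  # torch.Size([2, 10])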

  2. The course of visual searching to a target in a fixed location: electrophysiological evidence from an emotional flanker task.

    PubMed

    Dong, Guangheng; Yang, Lizhu; Shen, Yue

    2009-08-21

    The present study investigated the course of visual searching to a target in a fixed location, using an emotional flanker task. Event-related potentials (ERPs) were recorded while participants performed the task. Emotional facial expressions were used as emotion-eliciting triggers. The course of visual searching was analyzed through the emotional effects arising from these emotion-eliciting stimuli. The flanker stimuli showed effects at about 150-250 ms after stimulus onset, while the target stimuli showed effects at about 300-400 ms. The visual search sequence in an emotional flanker task moved from a whole overview to the specific target, even though the target always appeared at a known location. The processing sequence was "parallel" in this task. The results supported the feature integration theory of visual search.

  3. Evaluating the Performance of a Visually Guided Hearing Aid Using a Dynamic Auditory-Visual Word Congruence Task.

    PubMed

    Roverud, Elin; Best, Virginia; Mason, Christine R; Streeter, Timothy; Kidd, Gerald

    2017-12-15

    The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (-30, 0, and 30 azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. Although not apparent in the averaged data, some

  4. A visual processing advantage for young-adolescent deaf observers: Evidence from face and object matching tasks

    PubMed Central

    Megreya, Ahmed M.; Bindemann, Markus

    2017-01-01

    It is unresolved whether the permanent auditory deprivation that deaf people experience leads to the enhanced visual processing of faces. The current study explored this question with a matching task in which observers searched for a target face among a concurrent lineup of ten faces. This was compared with a control task in which the same stimuli were presented upside down, to disrupt typical face processing, and an object matching task. A sample of young-adolescent deaf observers performed with higher accuracy than hearing controls across all of these tasks. These results clarify previous findings and provide evidence for a general visual processing advantage in deaf observers rather than a face-specific effect. PMID:28117407

  5. The time-course of activation in the dorsal and ventral visual streams during landmark cueing and perceptual discrimination tasks.

    PubMed

    Lambert, Anthony J; Wootton, Adrienne

    2017-08-01

    Different patterns of high density EEG activity were elicited by the same peripheral stimuli, in the context of Landmark Cueing and Perceptual Discrimination tasks. The C1 component of the visual event-related potential (ERP) at parietal - occipital electrode sites was larger in the Landmark Cueing task, and source localisation suggested greater activation in the superior parietal lobule (SPL) in this task, compared to the Perceptual Discrimination task, indicating stronger early recruitment of the dorsal visual stream. In the Perceptual Discrimination task, source localisation suggested widespread activation of the inferior temporal gyrus (ITG) and fusiform gyrus (FFG), structures associated with the ventral visual stream, during the early phase of the P1 ERP component. Moreover, during a later epoch (171-270ms after stimulus onset) increased temporal-occipital negativity, and stronger recruitment of ITG and FFG were observed in the Perceptual Discrimination task. These findings illuminate the contrasting functions of the dorsal and ventral visual streams, to support rapid shifts of attention in response to contextual landmarks, and conscious discrimination, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Selective visual attention and motivation: the consequences of value learning in an attentional blink task.

    PubMed

    Raymond, Jane E; O'Brien, Jennifer L

    2009-08-01

    Learning to associate the probability and value of behavioral outcomes with specific stimuli (value learning) is essential for rational decision making. However, in demanding cognitive conditions, access to learned values might be constrained by limited attentional capacity. We measured recognition of briefly presented faces seen previously in a value-learning task involving monetary wins and losses; the recognition task was performed both with and without constraints on available attention. Regardless of available attention, recognition was substantially enhanced for motivationally salient stimuli (i.e., stimuli highly predictive of outcomes), compared with equally familiar stimuli that had weak or no motivational salience, and this effect was found regardless of valence (win or loss). However, when attention was constrained (because stimuli were presented during an attentional blink, AB), valence determined recognition; win-associated faces showed no AB, but all other faces showed large ABs. Motivational salience acts independently of attention to modulate simple perceptual decisions, but when attention is limited, visual processing is biased in favor of reward-associated stimuli.

  7. Illusory conjunctions and perceptual grouping in a visual search task in schizophrenia.

    PubMed

    Carr, V J; Dewis, S A; Lewin, T J

    1998-07-27

    This report describes part of a series of experiments, conducted within the framework of feature integration theory, to determine whether patients with schizophrenia show deficits in preattentive processing. Thirty subjects with a DSM-III-R diagnosis of schizophrenia and 30 age-, gender-, and education-matched normal control subjects completed two computerized experimental tasks, a visual search task assessing the frequency of illusory conjunctions (i.e. false perceptions) under conditions of divided attention (Experiment 3) and a task which examined the effects of perceptual grouping on illusory conjunctions (Experiment 4). We also assessed current symptomatology and its relationship to task performance. Contrary to our hypotheses, schizophrenia subjects did not show higher rates of illusory conjunctions, and the influence of perceptual grouping on the frequency of illusory conjunctions was similar for schizophrenia and control subjects. Nonetheless, specific predictions from feature integration theory about the impact of different target types (Experiment 3) and perceptual groups (Experiment 4) on the likelihood of forming an illusory conjunction were strongly supported, thereby confirming the integrity of the experimental procedures. Overall, these studies revealed no firm evidence that schizophrenia is associated with a preattentive abnormality in visual search using stimuli that differ on the basis of physical characteristics.

  8. Emotional metacontrol of attention: Top-down modulation of sensorimotor processes in a robotic visual search task.

    PubMed

    Belkaid, Marwen; Cuperlier, Nicolas; Gaussier, Philippe

    2017-01-01

    Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call 'Emotional Metacontrol'. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot performance and fosters the exploratory behavior to avoid deadlocks.

  9. Measuring the effects of a visual or auditory Stroop task on dual-task costs during obstacle crossing.

    PubMed

    Worden, Timothy A; Mendes, Matthew; Singh, Pratham; Vallis, Lori Ann

    2016-10-01

    Successful planning and execution of motor strategies while concurrently performing a cognitive task has been previously examined, but unfortunately the varied and numerous cognitive tasks studied have limited our fundamental understanding of how the central nervous system successfully integrates and executes these tasks simultaneously. To gain a better understanding of these mechanisms we used a set of cognitive tasks requiring similar central executive function processes and response outputs but requiring different perceptual mechanisms to perform the motor task. Thirteen healthy young adults (20.6±1.6 years old) were instrumented with kinematic markers (60 Hz) and completed 5 practice trials, 10 single-task obstacle walking trials and two 40-trial experimental blocks. Each block contained 20 seated (single-task) trials followed by 20 combined cognitive and obstacle (30% of lower leg length) crossing trials (dual-task). Blocks were randomly presented and included either an auditory Stroop task (AST; central interference only) or a visual Stroop task (VST; combined central and structural interference). Higher accuracy rates and shorter response times were observed for the VST versus AST single-task trials (p<0.05). Conversely, for obstacle stepping performance, larger dual-task costs were observed for the VST as compared to the AST for clearance measures (the VST induced larger clearance values for both the leading and trailing feet), indicating that VST tasks caused greater interference for obstacle crossing (p<0.05). These results supported the hypothesis that structural interference has a larger effect on motor performance in a dual-task situation compared to cognitive tasks that pose interference at only the central processing stage. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Visual Processing on Graphics Task: The Case of a Street Map

    ERIC Educational Resources Information Center

    Logan, Tracy; Lowrie, Tom

    2013-01-01

    Tracy Logan and Tom Lowrie argue that while little attention is given to visual imagery and spatial reasoning within the Australian Curriculum, a significant proportion of National Assessment Program--Literacy and Numeracy (NAPLAN) tasks require high levels of visuospatial reasoning. This article includes teaching ideas to promote visuospatial…

  11. Dynamics of cortico-subcortical cross-modal operations involved in audio-visual object detection in humans.

    PubMed

    Fort, Alexandra; Delpuech, Claude; Pernier, Jacques; Giard, Marie-Hélène

    2002-10-01

    Very recently, a number of neuroimaging studies in humans have begun to investigate the question of how the brain integrates information from different sensory modalities to form unified percepts. Already, intermodal neural processing appears to depend on the modalities of the inputs or the nature (speech/non-speech) of the information to be combined. Yet, the variety of paradigms, stimuli and techniques used makes it difficult to understand the relationships between the factors operating at the perceptual level and the underlying physiological processes. In a previous experiment, we used event-related potentials to describe the spatio-temporal organization of audio-visual interactions during a bimodal object recognition task. Here we examined the network of cross-modal interactions involved in simple detection of the same objects. The objects were defined either by unimodal auditory or visual features alone, or by the combination of the two features. As expected, subjects detected bimodal stimuli more rapidly than either unimodal stimulus. Combined analysis of potentials, scalp current densities and dipole modeling revealed several interaction patterns within the first 200 ms post-stimulus: in occipito-parietal visual areas (45-85 ms), in deep brain structures, possibly the superior colliculus (105-140 ms), and in right temporo-frontal regions (170-185 ms). These interactions differed from those found during object identification in sensory-specific areas and possibly in the superior colliculus, indicating that the neural operations governing multisensory integration depend crucially on the nature of the perceptual processes involved.

  12. Task relevance of emotional information affects anxiety-linked attention bias in visual search.

    PubMed

    Dodd, Helen F; Vogt, Julia; Turkileri, Nilgun; Notebaert, Lies

    2017-01-01

    Task relevance affects emotional attention in healthy individuals. Here, we investigate whether the association between anxiety and attention bias is affected by the task relevance of emotion during an attention task. Participants completed two visual search tasks. In the emotion-irrelevant task, participants were asked to indicate whether a discrepant face in a crowd of neutral, middle-aged faces was old or young. Irrelevant to the task, target faces displayed angry, happy, or neutral expressions. In the emotion-relevant task, participants were asked to indicate whether a discrepant face in a crowd of middle-aged neutral faces was happy or angry (target faces also varied in age). Trait anxiety was not associated with attention in the emotion-relevant task. However, in the emotion-irrelevant task, trait anxiety was associated with a bias for angry over happy faces. These findings demonstrate that the task relevance of emotional information affects conclusions about the presence of an anxiety-linked attention bias. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Eye movements and postural control in dyslexic children performing different visual tasks.

    PubMed

    Razuk, Milena; Barela, José Angelo; Peyre, Hugo; Gerard, Christophe Loic; Bucci, Maria Pia

    2018-01-01

    The aim of this study was to examine eye movements and postural control performance among dyslexic children while reading a text and performing the Landolt reading task. Fifteen dyslexic and 15 non-dyslexic children were asked to stand upright while performing two experimental visual tasks: text reading and Landolt reading. In the text reading task, children were asked to silently read a text displayed on a monitor, while in the Landolt reading task, the letters in the text were replaced by closed circles and Landolt rings, and children were asked to scan each circle/ring in a reading-like fashion, from left to right, and to count the number of Landolt rings. Eye movements (Mobile T2®, SuriCog) and center of pressure excursions (Framiral®, Grasse, France) were recorded. Visual performance variables were total reading time, mean duration of fixation, number of pro- and retro-saccades, and amplitude of pro-saccades. Postural performance variable was the center of pressure area. The results showed that dyslexic children spent more time reading the text and had a longer duration of fixation than non-dyslexic children. However, no difference was observed between dyslexic and non-dyslexic children in the Landolt reading task. Dyslexic children performed a higher number of pro- and retro-saccades than non-dyslexic children in both text reading and Landolt reading tasks. Dyslexic children had smaller pro-saccade amplitude than non-dyslexic children in the text reading task. Finally, postural performance was poorer in dyslexic children than in non-dyslexic children. Reading difficulties in dyslexic children are related to eye movement strategies required to scan and obtain lexical and semantic meaning. However, postural control performance, which was poor in dyslexic children, is not related to lexical and semantic reading requirements and might not also be related to different eye movement behavior.

  14. Transfer of an induced preferred retinal locus of fixation to everyday life visual tasks.

    PubMed

    Barraza-Bernal, Maria J; Rifai, Katharina; Wahl, Siegfried

    2017-12-01

    Subjects develop a preferred retinal locus of fixation (PRL) under simulation of a central scotoma. If systematic relocations are applied to the stimulus position, PRLs manifest at a location in favor of the stimulus relocation. The present study investigates whether the induced PRL transfers to important visual tasks in daily life, namely pursuit eye movements, signage reading, and text reading. Fifteen subjects with normal sight participated in the study. To develop a PRL, all subjects underwent a scotoma simulation in a prior study, where five subjects were trained to develop the PRL in the left hemifield, five different subjects in the right hemifield, and the remaining five subjects could naturally choose the PRL location. The position of this PRL was used as baseline. Under central scotoma simulation, subjects performed a pursuit task, a signage reading task, and a text reading task. In addition, retention of the behavior was also studied. Results showed that the PRL position was transferred to the pursuit task and that the vertical location of the PRL was maintained in the text reading task. However, when reading signage, a function-driven change in PRL location was observed. In addition, retention of the PRL position was observed over weeks and months. These results indicate that PRL positions can be induced and may further be transferred to everyday visual tasks, without hindering function-driven changes in PRL position.

  15. Reduced dual-task gait speed is associated with visual Go/No-Go brain network activation in children and adolescents with concussion.

    PubMed

    Howell, David R; Meehan, William P; Barber Foss, Kim D; Reches, Amit; Weiss, Michal; Myer, Gregory D

    2018-05-31

    To investigate the association between dual-task gait performance and brain network activation (BNA) using an electroencephalography (EEG)-based Go/No-Go paradigm among children and adolescents with concussion. Participants with a concussion completed a visual Go/No-Go task with collection of electroencephalogram brain activity. Data were treated with BNA analysis, which involves an algorithmic approach to EEG-ERP activation quantification. Participants also completed a dual-task gait assessment. The relationship between dual-task gait speed and BNA was assessed using multiple linear regression models. Participants (n = 20, 13.9 ± 2.3 years of age, 50% female) were tested at a mean of 7.0 ± 2.5 days post-concussion and were symptomatic at the time of testing (post-concussion symptom scale = 40.4 ± 21.9). Slower dual-task average gait speed (mean = 82.2 ± 21.0 cm/s) was significantly associated with lower relative time BNA scores (mean = 39.6 ± 25.8) during the No-Go task (β = 0.599, 95% CI = 0.214–0.985, p = 0.005, R² = 0.405), while controlling for the effect of age and gender. Among children and adolescents with a concussion, slower dual-task gait speed was independently associated with lower BNA relative time scores during a visual Go/No-Go task. The relationship between abnormal gait behaviour and brain activation deficits may be reflective of disruption to multiple functional abilities after concussion.
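
    The kind of multiple linear regression described, relating gait speed to the BNA score with age and sex as covariates, can be sketched as below. The variable names and simulated numbers are hypothetical and only mimic the reported scales; this is not the study's data or analysis script.

      # Sketch of a multiple linear regression with covariates (simulated data).
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 20
      df = pd.DataFrame({
          "bna_relative_time": rng.normal(40, 26, n),   # hypothetical BNA scores
          "age": rng.uniform(10, 18, n),
          "sex": rng.integers(0, 2, n),
      })
      # Simulated outcome loosely on the reported gait-speed scale (cm/s).
      df["gait_speed"] = 60 + 0.6 * df["bna_relative_time"] + rng.normal(0, 10, n)

      fit = smf.ols("gait_speed ~ bna_relative_time + age + sex", data=df).fit()
      print(fit.params["bna_relative_time"], fit.rsquared)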

  16. Neck/shoulder discomfort due to visually demanding experimental near work is influenced by previous neck pain, task duration, astigmatism, internal eye discomfort and accommodation

    PubMed Central

    Forsman, Mikael; Richter, Hans O.

    2017-01-01

    Visually demanding near work can cause eye discomfort, and eye and neck/shoulder discomfort during, e.g., computer work are associated. To investigate direct effects of experimental near work on eye and neck/shoulder discomfort, 33 individuals with chronic neck pain and 33 healthy control subjects performed a visual task four times using four different trial lenses (referred to as four different viewing conditions), and they rated eye and neck/shoulder discomfort at baseline and after each task. Since symptoms of eye discomfort may differ depending on the underlying cause, two categories were used; internal eye discomfort, such as ache and strain, that may be caused by accommodative or vergence stress; and external eye discomfort, such as burning and smarting, that may be caused by dry-eye disorders. The cumulative performance time (reflected in the temporal order of the tasks), astigmatism, accommodation response and concurrent symptoms of internal eye discomfort all aggravated neck/shoulder discomfort, but there was no significant effect of external eye discomfort. There was also an interaction effect between the temporal order and internal eye discomfort: participants with a greater mean increase in internal eye discomfort also developed more neck/shoulder discomfort with time. Since moderate musculoskeletal symptoms are a risk factor for more severe symptoms, it is important to ensure a good visual environment in occupations involving visually demanding near work. PMID:28832612

  17. Neck/shoulder discomfort due to visually demanding experimental near work is influenced by previous neck pain, task duration, astigmatism, internal eye discomfort and accommodation.

    PubMed

    Zetterberg, Camilla; Forsman, Mikael; Richter, Hans O

    2017-01-01

    Visually demanding near work can cause eye discomfort, and eye and neck/shoulder discomfort during, e.g., computer work are associated. To investigate direct effects of experimental near work on eye and neck/shoulder discomfort, 33 individuals with chronic neck pain and 33 healthy control subjects performed a visual task four times using four different trial lenses (referred to as four different viewing conditions), and they rated eye and neck/shoulder discomfort at baseline and after each task. Since symptoms of eye discomfort may differ depending on the underlying cause, two categories were used; internal eye discomfort, such as ache and strain, that may be caused by accommodative or vergence stress; and external eye discomfort, such as burning and smarting, that may be caused by dry-eye disorders. The cumulative performance time (reflected in the temporal order of the tasks), astigmatism, accommodation response and concurrent symptoms of internal eye discomfort all aggravated neck/shoulder discomfort, but there was no significant effect of external eye discomfort. There was also an interaction effect between the temporal order and internal eye discomfort: participants with a greater mean increase in internal eye discomfort also developed more neck/shoulder discomfort with time. Since moderate musculoskeletal symptoms are a risk factor for more severe symptoms, it is important to ensure a good visual environment in occupations involving visually demanding near work.

  18. Visual Puzzles, Figure Weights, and Cancellation: Some Preliminary Hypotheses on the Functional and Neural Substrates of These Three New WAIS-IV Subtests

    PubMed Central

    McCrea, Simon M.; Robinson, Thomas P.

    2011-01-01

    In this study, five consecutive patients with focal strokes and/or cortical excisions were examined with the Wechsler Adult Intelligence Scale and Wechsler Memory Scale—Fourth Editions along with a comprehensive battery of other neuropsychological tasks. All five of the lesions were large and typically involved frontal, temporal, and/or parietal lobes and were lateralized to one hemisphere. The clinical case method was used to determine the cognitive neuropsychological correlates of mental rotation (Visual Puzzles), Piagetian balance beam (Figure Weights), and visual search (Cancellation) tasks. The pattern of results on Visual Puzzles and Figure Weights suggested that both subtests involve predominately right frontoparietal networks involved in visual working memory. It appeared that Visual Puzzles could also critically rely on the integrity of the left temporoparietal junction. The left temporoparietal junction could be involved in temporal ordering and integration of local elements into a nonverbal gestalt. In contrast, the Figure Weights task appears to critically involve the right temporoparietal junction involved in numerical magnitude estimation. Cancellation was sensitive to left frontotemporal lesions and not right posterior parietal lesions typical of other visual search tasks. In addition, the Cancellation subtest was sensitive to verbal search strategies and perhaps object-based attention demands, thereby constituting a unique task in comparison with previous visual search tasks. PMID:22389807

  19. A Comparison of the Visual Attention Patterns of People With Aphasia and Adults Without Neurological Conditions for Camera-Engaged and Task-Engaged Visual Scenes.

    PubMed

    Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria

    2016-04-01

    The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological conditions. Participants viewed camera-engaged (i.e., human figure facing camera) and task-engaged (i.e., human figure looking at and touching an object) visual scenes. Participants with aphasia responded to engagement cues by focusing on objects of interest more for task-engaged scenes than camera-engaged scenes; however, the difference in their responses to these scenes was not as pronounced as that observed in adults without neurological conditions. In addition, people with aphasia spent more time looking at background areas of interest and less time looking at person areas of interest for camera-engaged scenes than did control participants. Results indicate people with aphasia visually attend to scenes differently than adults without neurological conditions. As a consequence, augmentative and alternative communication (AAC) facilitators may have different visual attention behaviors than the people with aphasia for whom they are constructing or selecting visual scenes. Further examination of the visual attention of people with aphasia may help optimize visual scene selection.

  20. Correlation between observation task performance and visual acuity, contrast sensitivity and environmental light in a simulated maritime study.

    PubMed

    Koefoed, Vilhelm F; Assmuss, Jörg; Høvding, Gunnar

    2018-03-25

    To examine the relevance of visual acuity (VA) and the index of contrast sensitivity (ICS) as predictors of visual observation task performance in a maritime environment. Sixty naval cadets were recruited to a study on observation tasks in a simulated maritime environment under three different light settings. Their ICS values were computed from contrast sensitivity (CS) data recorded with the Optec 6500 and CSV-1000E CS tests. The correlation between object identification distance and VA/ICS was examined by stepwise linear regression. The object detection distance was significantly correlated with the level of environmental light (p < 0.001), but not with the VA or ICS recorded in the test subjects. Female cadets had a significantly shorter target identification range than the male cadets. Neither CS nor VA was found to be significantly correlated with observation task performance. This apparent absence of proven predictive value of visual parameters for observation tasks in a maritime environment may presumably be ascribed to the normal and uniform visual capacity of all our study subjects. © 2018 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  1. Affective ERP Processing in a Visual Oddball Task: Arousal, Valence, and Gender

    PubMed Central

    Rozenkrants, Bella; Polich, John

    2008-01-01

    Objective: To assess affective event-related brain potentials (ERPs) using visual pictures that were highly distinct on arousal level/valence category ratings and a response task. Methods: Images from the International Affective Pictures System (IAPS) were selected to obtain distinct affective arousal (low, high) and valence (negative, positive) rating levels. The pictures were used as target stimuli in an oddball paradigm, with a visual pattern as the standard stimulus. Participants were instructed to press a button whenever a picture occurred and to ignore the standard. Task performance and response time did not differ across conditions. Results: High-arousal compared to low-arousal stimuli produced larger amplitudes for the N2, P3, early slow wave, and late slow wave components. Valence amplitude effects were weak overall and originated primarily from the later waveform components and interactions with electrode position. Gender differences were negligible. Conclusion: The findings suggest that arousal level is the primary determinant of affective oddball processing, and valence minimally influences ERP amplitude. Significance: Affective processing engages selective attentional mechanisms that are primarily sensitive to the arousal properties of emotional stimuli. The application and nature of task demands are important considerations for interpreting these effects. PMID:18783987

  2. Understanding Language, Hearing Status, and Visual-Spatial Skills

    PubMed Central

    Marschark, Marc; Spencer, Linda J.; Durkin, Andreana; Borgna, Georgianna; Convertino, Carol; Machmer, Elizabeth; Kronenberger, William G.; Trani, Alexandra

    2015-01-01

    It is frequently assumed that deaf individuals have superior visual-spatial abilities relative to hearing peers and thus, in educational settings, they are often considered visual learners. There is some empirical evidence to support the former assumption, although it is inconsistent, and apparently none to support the latter. Three experiments examined visual-spatial and related cognitive abilities among deaf individuals who varied in their preferred language modality and use of cochlear implants (CIs) and hearing individuals who varied in their sign language skills. Sign language and spoken language assessments accompanied tasks involving visual-spatial processing, working memory, nonverbal logical reasoning, and executive function. Results were consistent with other recent studies indicating no generalized visual-spatial advantage for deaf individuals and suggested that their performance in that domain may be linked to the strength of their preferred language skills regardless of modality. Hearing individuals performed more strongly than deaf individuals on several visual-spatial and self-reported executive functioning measures, regardless of sign language skills or use of CIs. Findings are inconsistent with assumptions that deaf individuals are visual learners or are superior to hearing individuals across a broad range of visual-spatial tasks. Further, performance of deaf and hearing individuals on the same visual-spatial tasks was associated with differing cognitive abilities, suggesting that different cognitive processes may be involved in visual-spatial processing in these groups. PMID:26141071

  3. Spatiotemporal oscillatory dynamics of visual selective attention during a flanker task.

    PubMed

    McDermott, Timothy J; Wiesman, Alex I; Proskovec, Amy L; Heinrichs-Graham, Elizabeth; Wilson, Tony W

    2017-08-01

    The flanker task is a test of visual selective attention that has been widely used to probe error monitoring, response conflict, and related constructs. However, to date, few studies have focused on the selective attention component of this task and imaged the underlying oscillatory dynamics serving task performance. In this study, 21 healthy adults successfully completed an arrow-based version of the Eriksen flanker task during magnetoencephalography (MEG). All MEG data were pre-processed and transformed into the time-frequency domain. Significant oscillatory brain responses were imaged using a beamforming approach, and voxel time series were extracted from the peak responses to identify the temporal dynamics. Across both congruent and incongruent flanker conditions, our results indicated robust decreases in alpha (9-12Hz) activity in medial and lateral occipital regions, bilateral parietal cortices, and cerebellar areas during task performance. In parallel, increases in theta (3-7Hz) oscillatory activity were detected in dorsal and ventral frontal regions, and the anterior cingulate. As per conditional effects, stronger alpha responses (i.e., greater desynchronization) were observed in parietal, occipital, and cerebellar cortices during incongruent relative to congruent trials, whereas the opposite pattern emerged for theta responses (i.e., synchronization) in the anterior cingulate, left dorsolateral prefrontal, and ventral prefrontal cortices. Interestingly, the peak latency of theta responses in these latter brain regions was significantly correlated with reaction time, and may partially explain the amplitude difference observed between congruent and incongruent trials. Lastly, whole-brain exploratory analyses implicated the frontal eye fields, right temporoparietal junction, and premotor cortices. These findings suggest that regions of both the dorsal and ventral attention networks contribute to visual selective attention processes during incongruent trials

  4. Stimulus similarity determines the prevalence of behavioral laterality in a visual discrimination task for mice

    PubMed Central

    Treviño, Mario

    2014-01-01

    Animal choices depend on direct sensory information, but also on the dynamic changes in the magnitude of reward. In visual discrimination tasks, the emergence of lateral biases in the choice record from animals is often described as a behavioral artifact, because these are highly correlated with error rates affecting psychophysical measurements. Here, we hypothesized that biased choices could constitute a robust behavioral strategy to solve discrimination tasks of graded difficulty. We trained mice to swim in a two-alternative visual discrimination task with escape from water as the reward. Their prevalence of making lateral choices increased with stimulus similarity and was present in conditions of high discriminability. While lateralization occurred at the individual level, it was absent, on average, at the population level. Biased choice sequences obeyed the generalized matching law and increased task efficiency when stimulus similarity was high. A mathematical analysis revealed that strongly-biased mice used information from past rewards but not past choices to make their current choices. We also found that the amount of lateralized choices made during the first day of training predicted individual differences in the average learning behavior. This framework provides useful analysis tools to study individualized visual-learning trajectories in mice. PMID:25524257
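
    The generalized matching law mentioned above relates choice ratios to reward ratios, log(B_L/B_R) = s*log(R_L/R_R) + log(b), with sensitivity s and bias b. The sketch below fits s and b with a least-squares line in log-log space using made-up session counts, purely for illustration.

      # Fit the generalized matching law to hypothetical choice/reward counts.
      import numpy as np

      left_choices  = np.array([62, 48, 35, 22])   # choices per session (hypothetical)
      right_choices = np.array([38, 52, 65, 78])
      left_rewards  = np.array([40, 30, 20, 10])   # rewards earned per session
      right_rewards = np.array([20, 30, 40, 50])

      x = np.log(left_rewards / right_rewards)     # log reward ratio
      y = np.log(left_choices / right_choices)     # log choice ratio
      sensitivity, log_bias = np.polyfit(x, y, 1)
      print(f"s = {sensitivity:.2f}, bias = {np.exp(log_bias):.2f}")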

  5. Emotional metacontrol of attention: Top-down modulation of sensorimotor processes in a robotic visual search task

    PubMed Central

    Cuperlier, Nicolas; Gaussier, Philippe

    2017-01-01

    Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call ‘Emotional Metacontrol’. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot performance and fosters the exploratory behavior to avoid deadlocks. PMID:28934291

  6. The right look for the job: decoding cognitive processes involved in the task from spatial eye-movement patterns.

    PubMed

    Król, Magdalena Ewa; Król, Michał

    2018-02-20

    The aim of the study was not only to demonstrate whether eye-movement-based task decoding was possible but also to investigate whether eye-movement patterns can be used to identify cognitive processes behind the tasks. We compared eye-movement patterns elicited under different task conditions, with tasks differing systematically with regard to the types of cognitive processes involved in solving them. We used four tasks, differing along two dimensions: spatial (global vs. local) processing (Navon, Cognit Psychol, 9(3):353-383 1977) and semantic (deep vs. shallow) processing (Craik and Lockhart, J Verbal Learn Verbal Behav, 11(6):671-684 1972). We used eye-movement patterns obtained from two time periods: fixation cross preceding the target stimulus and the target stimulus. We found significant effects of both spatial and semantic processing, but in case of the latter, the effect might be an artefact of insufficient task control. We found above chance task classification accuracy for both time periods: 51.4% for the period of stimulus presentation and 34.8% for the period of fixation cross presentation. Therefore, we show that task can be to some extent decoded from the preparatory eye-movements before the stimulus is displayed. This suggests that anticipatory eye-movements reflect the visual scanning strategy employed for the task at hand. Finally, this study also demonstrates that decoding is possible even from very scant eye-movement data similar to Coco and Keller, J Vis 14(3):11-11 (2014). This means that task decoding is not limited to tasks that naturally take longer to perform and yield multi-second eye-movement recordings.
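
    Task decoding of this kind amounts to a supervised classification problem over eye-movement summary features. The sketch below uses random placeholder features and scikit-learn cross-validation; the feature set and classifier are assumptions for illustration, not the authors' pipeline.

      # Sketch of eye-movement-based task decoding with cross-validated accuracy.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(2)
      n_trials, n_tasks = 200, 4
      # Placeholder feature columns: mean fixation duration, fixation count,
      # mean saccade amplitude, gaze dispersion (all simulated).
      X = rng.normal(size=(n_trials, 4))
      y = rng.integers(0, n_tasks, n_trials)           # task label per trial

      accuracy = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
      print(f"decoding accuracy ~ {accuracy:.2f} (chance = {1 / n_tasks:.2f})")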

  7. Advanced Visualization and Interactive Display Rapid Innovation and Discovery Evaluation Research (VISRIDER) Program Task 6: Point Cloud Visualization Techniques for Desktop and Web Platforms

    DTIC Science & Technology

    2017-04-01

    Report covering OCT 2013 – SEP 2014. The report surveys various point cloud visualization techniques for viewing large scale LiDAR datasets and evaluates their potential use for thick client desktop platforms

  8. Altered visual strategies and attention are related to increased force fluctuations during a pinch grip task in older adults.

    PubMed

    Keenan, Kevin G; Huddleston, Wendy E; Ernest, Bradley E

    2017-11-01

    The purpose of the study was to determine the visual strategies used by older adults during a pinch grip task and to assess the relations between visual strategy, deficits in attention, and increased force fluctuations in older adults. Eye movements of 23 older adults (>65 yr) were monitored during a low-force pinch grip task while subjects viewed three common visual feedback displays. Performance on the Grooved Pegboard test and an attention task (which required no concurrent hand movements) was also measured. Visual strategies varied across subjects and depended on the type of visual feedback provided to the subjects. First, while viewing a high-gain compensatory feedback display (horizontal bar moving up and down with force), 9 of 23 older subjects adopted a strategy of performing saccades during the task, which resulted in 2.5 times greater force fluctuations in those who exhibited saccades compared with those who maintained fixation near the target line. Second, during pursuit feedback displays (force trace moving left to right across the screen and up and down with force), all subjects exhibited multiple saccades, and increased force fluctuations were associated (r_s = 0.6; P = 0.002) with fewer saccades during the pursuit task. Also, decreased low-frequency (<4 Hz) force fluctuations and Grooved Pegboard times were significantly related (P = 0.033 and P = 0.005, respectively) to higher (i.e., better) attention z scores. Comparison of these results with our previously published results in young subjects indicates that saccadic eye movements and attention are related to force control in older adults. NEW & NOTEWORTHY The significant contributions of the study are the addition of eye movement data and an attention task to explain differences in hand motor control across different visual displays in older adults. Older participants used different visual strategies across varying feedback displays, and saccadic eye movements were related to motor performance
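
    The kind of relation reported above, a Spearman correlation between force fluctuations and saccade behavior, can be computed as in the sketch below. The data are random placeholders and the summary measure (coefficient of variation of force) is an assumption made for illustration, not the study's exact analysis.

    ```python
    import numpy as np
    from scipy import stats

    # Placeholder data: per-subject force records and saccade counts during the task.
    rng = np.random.default_rng(1)
    force_traces = [rng.normal(4.0, 0.1, 2000) for _ in range(23)]  # 23 hypothetical subjects
    saccade_counts = rng.integers(0, 30, size=23)

    # Force fluctuations are commonly summarized as the coefficient of variation.
    cv = np.array([trace.std() / trace.mean() for trace in force_traces])

    rho, p = stats.spearmanr(cv, saccade_counts)
    print(f"Spearman r_s = {rho:.2f}, P = {p:.3f}")
    ```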

  9. Chess players' eye movements reveal rapid recognition of complex visual patterns: Evidence from a chess-related visual search task.

    PubMed

    Sheridan, Heather; Reingold, Eyal M

    2017-03-01

    To explore the perceptual component of chess expertise, we monitored the eye movements of expert and novice chess players during a chess-related visual search task that tested anecdotal reports that a key differentiator of chess skill is the ability to visualize the complex moves of the knight piece. Specifically, chess players viewed an array of four minimized chessboards, and they rapidly searched for the target board that allowed a knight piece to reach a target square in three moves. On each trial, there was only one target board (i.e., the "Yes" board), and for the remaining "lure" boards, the knight's path was blocked on either the first move (the "Easy No" board) or the second move (i.e., the "Difficult No" board). As evidence that chess experts can rapidly differentiate complex chess-related visual patterns, the experts (but not the novices) showed longer first-fixation durations on the "Yes" board relative to the "Difficult No" board. Moreover, as hypothesized, the task strongly differentiated chess skill: Reaction times were more than four times faster for the experts relative to the novices, and reaction times were correlated with within-group measures of expertise (i.e., official chess ratings, number of hours of practice). These results indicate that a key component of chess expertise is the ability to rapidly recognize complex visual patterns.

  10. Exploring Metacognitive Visual Literacy Tasks for Teaching Astronomy

    NASA Astrophysics Data System (ADS)

    Slater, Timothy F.; Slater, S.; Dwyer, W.

    2010-01-01

    Undoubtedly, astronomy is a scientific enterprise which often results in colorful and inspirational images of the cosmos that naturally capture our attention. Students encountering astronomy in the college classroom are often bombarded with images, movies, simulations, conceptual cartoons, graphs, and charts intended to convey the substance and technological advancement inherent in astronomy. For students who self-identify as visual learners, this aspect can make the science of astronomy come alive. For students who naturally attend to visual aesthetics, this aspect can make astronomy seem relevant. In other words, the visual nature that accompanies much of the scientific realm of astronomy has the ability to connect a wide range of students to science, not just the few with strong abilities and inclinations toward mathematical analysis. Indeed, this is fortunate for teachers of astronomy, who actively try to find ways to connect and build astronomical understanding with a broad range of student interests, motivations, and abilities. In the context of learning science, metacognition describes students’ self-monitoring, -regulation, and -awareness when thinking about learning. As such, metacognition is one of the foundational pillars supporting what we know about how people learn. Yet, the astronomy teaching and learning community knows very little about how to operationalize and support students’ metacognition in the classroom. In response, the Conceptual Astronomy, Physics and Earth sciences Research (CAPER) Team is developing and pilot-testing metacognitive tasks in the context of astronomy that focus on visual literacy of astronomical phenomena. In the initial versions, students are presented with a scientifically inaccurate narrative that purports to describe visual information, including images and graphical information, and are asked to assess and correct the narrative in the form of a peer evaluation. To guide student thinking, students

  11. Rightward biases in free-viewing visual bisection tasks: implications for leftward response biases on similar tasks.

    PubMed

    Elias, Lorin J; Robinson, Brent; Saucier, Deborah M

    2005-12-01

    Neurologically normal individuals exhibit strong leftward response biases during free-viewing perceptual judgments of brightness, quantity, and size. When participants view two mirror-reversed objects and they are forced to choose which object appears darker, more numerous, or larger, the stimulus with the relevant feature on the left side is chosen 60-75% of the time. This effect could be influenced by inaccurate judgments of the true centre-point of the objects being compared. In order to test this possibility, 10 participants completed three visual bisection tasks on stimuli known to elicit strong leftward response biases. Participants were monitored using a remote eye-tracking device and instructed to stare at the subjective midpoint of objects presented on a computer screen. Although it was predicted that bisection errors would deviate to the left of centre (as is the case in the line bisection literature), the opposite effect was found. Significant rightward bisection errors were evident on two of the three tasks, and the leftward biases seen during forced-choice tasks could be the result of misjudgments to the right of centre on these same tasks.

  12. Divided visual attention: A comparison of patients with multiple sclerosis and controls, assessed with an optokinetic nystagmus suppression task.

    PubMed

    Williams, Isla M; Schofield, Peter; Khade, Neha; Abel, Larry A

    2016-12-01

    Multiple sclerosis (MS) frequently causes impairment of cognitive function. We compared patients with MS with controls on divided visual attention tasks. The MS patients' and controls' stare optokinetic nystagmus (OKN) was recorded in response to a 24°/s full-field stimulus. Suppression of the OKN response, judged by the gain, was measured during tasks dividing visual attention between the fixation target and a second stimulus, central or peripheral, static or dynamic. All participants completed the Audio Recorded Cognitive Screen. MS patients had lower gain on the baseline stare OKN. OKN suppression in divided attention tasks was the same in MS patients as in controls but in both groups was better maintained in static than in dynamic tasks. Only in the dynamic tasks was older age associated with less effective OKN suppression. MS patients had lower scores on a timed attention task and on memory. There was no significant correlation between attention or memory and eye movement parameters. Attention, a complex multifaceted construct, has different neural combinations for each task. Despite impairments on some measures of attention, MS patients completed the divided visual attention tasks normally. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Selective visual processing across competition episodes: a theory of task-driven visual attention and working memory

    PubMed Central

    Schneider, Werner X.

    2013-01-01

    The goal of this review is to introduce a theory of task-driven visual attention and working memory (TRAM). Based on a specific biased competition model, the ‘theory of visual attention’ (TVA) and its neural interpretation (NTVA), TRAM introduces the following assumptions. First, selective visual processing over time is structured in competition episodes. Within an episode, that is, during its first two phases, a limited number of proto-objects are competitively encoded—modulated by the current task—in activation-based visual working memory (VWM). In processing phase 3, relevant VWM objects are transferred via short-term consolidation into passive VWM. Second, each time attentional priorities change (e.g. after an eye movement), a new competition episode is initiated. Third, if a phase 3 VWM process (e.g. short-term consolidation) is not yet finished when a new episode is initiated, a protective maintenance process allows its completion. After a VWM object change, its protective maintenance process is followed by an encapsulation of the VWM object, causing attentional resource costs in trailing competition episodes. Viewed from this perspective, a new explanation of key findings of the attentional blink will be offered. Finally, a new suggestion will be made as to how VWM items might interact with visual search processes. PMID:24018722

  14. How low can you go? Changing the resolution of novel complex objects in visual working memory according to task demands

    PubMed Central

    Allon, Ayala S.; Balaban, Halely; Luria, Roy

    2014-01-01

    In three experiments we manipulated the resolution of novel complex objects in visual working memory (WM) by changing task demands. Previous studies that investigated the trade-off between quantity and resolution in visual WM yielded mixed results for simple familiar stimuli. We used the contralateral delay activity as an electrophysiological marker to directly track the deployment of visual WM resources while participants performed a change-detection task. Across three experiments we presented the same novel complex items but changed the task demands. In Experiment 1 we induced a medium resolution task by using change trials in which a random polygon changed to a different type of polygon and replicated previous findings showing that novel complex objects are represented with higher resolution relative to simple familiar objects. In Experiment 2 we induced a low resolution task that required distinguishing between polygons and other types of stimulus categories, but we failed to find a corresponding decrease in the resolution of the represented item. Finally, in Experiment 3 we induced a high resolution task that required discriminating between highly similar polygons with somewhat different contours. This time, we observed an increase in the item’s resolution. Our findings indicate that the resolution for novel complex objects can be increased but not decreased according to task demands, suggesting that minimal resolution is required in order to maintain these items in visual WM. These findings support studies claiming that capacity and resolution in visual WM reflect different mechanisms. PMID:24734026

  15. Cognitive load effects on early visual perceptual processing.

    PubMed

    Liu, Ping; Forte, Jason; Sewell, David; Carter, Olivia

    2018-05-01

    Contrast-based early visual processing has largely been considered to involve autonomous processes that do not need the support of cognitive resources. However, as spatial attention is known to modulate early visual perceptual processing, we explored whether cognitive load could similarly impact contrast-based perception. We used a dual-task paradigm to assess the impact of a concurrent working memory task on the performance of three different early visual tasks. The results from Experiment 1 suggest that cognitive load can modulate early visual processing. No effects of cognitive load were seen in Experiments 2 or 3. Together, the findings provide evidence that under some circumstances cognitive load effects can penetrate the early stages of visual processing and that higher cognitive function and early perceptual processing may not be as independent as was once thought.

  16. Detecting distortion: bridging visual and quantitative reasoning on similarity tasks

    NASA Astrophysics Data System (ADS)

    Cox, Dana C.; Lo, Jane-Jane

    2014-03-01

    This study is focused on identifying and describing the reasoning patterns of middle grade students when examining potentially similar figures. Described here is a framework that includes 11 strategies that students used during clinical interviews to differentiate similar and non-similar figures. Two factors were found to influence the strategies students selected: the complexity of the figures being compared and the type of distortion present in non-similar pairings. Data from this study support the theory that distortions are identified as a dominant property of figures and that students use the presence and absence of distortion to visually decide if two figures are similar. Furthermore, this study shows that visual reasoning is not as primitive or nonconstructive as represented in earlier literature and supports students who are developing numeric reasoning strategies. This illuminates possible pathways students may take when advancing from using visual and additive reasoning strategies to using multiplicative proportional reasoning on similarity tasks. In particular, distortion detection is a visual activity that enables students to reflect upon and evaluate the validity and accuracy of differentiation and quantify perceived relationships leading to ratio. This study has implications for curriculum developers as well as future research.

  17. Parallel perceptual enhancement and hierarchic relevance evaluation in an audio-visual conjunction task.

    PubMed

    Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E

    2008-10-21

    Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. In contrast

  18. Do dyslexic individuals present a reduced visual attention span? Evidence from visual recognition tasks of non-verbal multi-character arrays.

    PubMed

    Yeari, Menahem; Isser, Michal; Schiff, Rachel

    2017-07-01

    A controversy has recently developed regarding the hypothesis that developmental dyslexia may be caused, in some cases, by a reduced visual attention span (VAS). To examine this hypothesis, independent of phonological abilities, researchers tested the ability of dyslexic participants to recognize arrays of unfamiliar visual characters. Employing this test, findings were rather equivocal: dyslexic participants exhibited poor performance in some studies but normal performance in others. The present study explored four methodological differences revealed between the two sets of studies that might underlie their conflicting results. Specifically, in two experiments we examined whether a VAS deficit is (a) specific to recognition of multi-character arrays as wholes rather than of individual characters within arrays, (b) specific to characters' position within arrays rather than to characters' identity, or revealed only under a higher attention load due to (c) low-discriminable characters, and/or (d) characters' short exposure. Furthermore, in this study we examined whether pure dyslexic participants who do not have attention disorder exhibit a reduced VAS. Although comorbidity of dyslexia and attention disorder is common and the ability to sustain attention for a long time plays a major role in the visual recognition task, the presence of attention disorder was neither evaluated nor ruled out in previous studies. Findings did not reveal any differences between the performance of dyslexic and control participants on eight versions of the visual recognition task. These findings suggest that pure dyslexic individuals do not present a reduced visual attention span.

  19. The effect of visual taskload on critical flicker frequency (CFF) change during performance of a complex monitoring task.

    DOT National Transportation Integrated Search

    1985-10-01

    The present study examined the effect of differing levels of visual taskload on critical flicker frequency (CFF) change during performance of a complex monitoring task. The task employed was designed to functionally simulate the general task characte...

  20. Similarity relations in visual search predict rapid visual categorization

    PubMed Central

    Mohan, Krithika; Arun, S. P.

    2012-01-01

    How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947
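
    The central claim above, that categorization time can be predicted from an object's similarity to members within and outside its category, can be illustrated with a simple regression. The sketch below uses random placeholder similarities and a plain least-squares fit; it illustrates the general idea only and is not the authors' model.

    ```python
    import numpy as np

    # Placeholder data: for each object, mean perceived similarity to members of its
    # own category (within) and to other categories (between), plus the observed
    # categorization reaction time.
    rng = np.random.default_rng(2)
    within = rng.uniform(0.4, 0.9, size=60)
    between = rng.uniform(0.1, 0.6, size=60)
    rt = 0.8 - 0.3 * within + 0.5 * between + rng.normal(0, 0.05, size=60)

    # Ordinary least squares: RT ~ intercept + within-similarity + between-similarity.
    X = np.column_stack([np.ones_like(within), within, between])
    coef, *_ = np.linalg.lstsq(X, rt, rcond=None)
    predicted = X @ coef
    r = np.corrcoef(predicted, rt)[0, 1]
    print(f"coefficients (intercept, within, between): {coef.round(3)}, r = {r:.2f}")
    ```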

  1. Nintendo Wii Balance Board is sensitive to effects of visual tasks on standing sway in healthy elderly adults.

    PubMed

    Koslucher, Frank; Wade, Michael G; Nelson, Brent; Lim, Kelvin; Chen, Fu-Chen; Stoffregen, Thomas A

    2012-07-01

    Research has shown that the Nintendo Wii Balance Board (WBB) can reliably detect the quantitative kinematics of the center of pressure (COP) in stance. Previous studies used relatively coarse manipulations (1- vs. 2-leg stance, and eyes open vs. closed). We sought to determine whether the WBB could reliably detect postural changes associated with subtle variations in visual tasks. Healthy elderly adults stood on a WBB while performing one of two visual tasks. In the Inspection task, they maintained their gaze within the boundaries of a featureless target. In the Search task, they counted the occurrence of designated target letters within a block of text. Consistent with previous studies using traditional force plates, the positional variability of the COP was reduced during performance of the Search task, relative to movement during performance of the Inspection task. Using detrended fluctuation analysis, a measure of movement dynamics, we found that COP trajectories were more predictable during performance of the Search task than during performance of the Inspection task. The results indicate that the WBB is sensitive to subtle variations in both the magnitude and dynamics of body sway that are related to variations in visual tasks engaged in during stance. The WBB is an inexpensive, reliable technology that can be used to evaluate subtle characteristics of body sway in large or widely dispersed samples. Copyright © 2012 Elsevier B.V. All rights reserved.
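
    Detrended fluctuation analysis, the measure of movement dynamics mentioned above, can be sketched as follows. This is a generic DFA implementation applied to a synthetic signal standing in for a COP displacement record; the window sizes and example data are assumptions, not the authors' processing pipeline.

    ```python
    import numpy as np

    def dfa_alpha(signal, scales=(16, 32, 64, 128, 256)):
        """Generic detrended fluctuation analysis returning the scaling exponent
        alpha; values near 0.5 indicate uncorrelated noise, larger values indicate
        more persistent (more predictable) fluctuations."""
        x = np.asarray(signal, dtype=float)
        profile = np.cumsum(x - x.mean())        # integrated, mean-centered series
        fluctuations = []
        for n in scales:
            n_windows = len(profile) // n
            rms = []
            for w in range(n_windows):
                segment = profile[w * n:(w + 1) * n]
                t = np.arange(n)
                trend = np.polyval(np.polyfit(t, segment, 1), t)  # local linear detrend
                rms.append(np.sqrt(np.mean((segment - trend) ** 2)))
            fluctuations.append(np.mean(rms))
        alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
        return alpha

    # Synthetic random-walk-like signal standing in for a COP displacement record.
    rng = np.random.default_rng(3)
    cop = np.cumsum(rng.normal(size=4096))
    print(f"DFA alpha = {dfa_alpha(cop):.2f}")
    ```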

  2. Imitation and matching of meaningless gestures: distinct involvement from motor and visual imagery.

    PubMed

    Lesourd, Mathieu; Navarro, Jordan; Baumard, Josselin; Jarry, Christophe; Le Gall, Didier; Osiurak, François

    2017-05-01

    The aim of the present study was to understand the cognitive processes underlying imitation and matching of meaningless gestures. Neuropsychological evidence obtained in brain-damaged patients has shown that distinct cognitive processes support imitation and matching of meaningless gestures. Left-brain damaged (LBD) patients failed to imitate, while right-brain damaged (RBD) patients failed to match meaningless gestures. Moreover, other studies with brain-damaged patients showed that LBD patients were impaired in motor imagery while RBD patients were impaired in visual imagery. Thus, we hypothesized that imitation of meaningless gestures might rely on motor imagery, whereas matching of meaningless gestures might be based on visual imagery. In a first experiment, using a correlational design, we demonstrated that posture imitation relies on motor imagery but not on visual imagery (Experiment 1a) and that posture matching relies on visual imagery but not on motor imagery (Experiment 1b). In a second experiment, by directly manipulating the body posture of the participants, we demonstrated that this manipulation produces a difference only in the imitation task, not in the matching task. In conclusion, the present study provides direct evidence that imitating postures and comparing postures depend on motor imagery and visual imagery, respectively. Our results are discussed in the light of recent findings about the underlying mechanisms of meaningful and meaningless gestures.

  3. Involving the public through participatory visual research methods

    PubMed Central

    Lorenz, Laura S.; Kolb, Bettina

    2009-01-01

    Objectives: To show how providing cameras to patients and community residents can be effective at involving the public in generating understanding of consumer, community, and health system problems and strengths. Background: Health‐care institutions and systems may seek to include consumer perspectives on health and health care yet be challenged to involve the most vulnerable sectors, be they persons with disabilities or persons with low socio‐economic status living in societies where a top‐down approach to policy is the norm. Methods: Drawing on study examples using photo‐elicitation and photovoice in Morocco and the United States, the authors explore issues of planning, data analysis, ethical concerns and action related to using participatory visual methods in different cultural and political contexts. Results: Visual data generated by consumers can be surprising and can identify health system problems and strengths omitted from data gathered using other means. Statistical data may convince policy makers of the need to address a problem. Participant visual data may in turn encourage policy maker attention and action. Conclusion: Health system decision making may be improved by having a broader range of data available. Participant‐generated visual data may support data gathered using traditional methods, or provide a reality check when compared with data generated by organizations, researchers and policy makers. The two study examples model innovative ways to surface health and health‐care issues as they relate to consumers’ real lives and engage vulnerable groups in systems change, even in contexts where expressing opinions might be seen as a risky thing to do. PMID:19754690

  4. Understanding Language, Hearing Status, and Visual-Spatial Skills.

    PubMed

    Marschark, Marc; Spencer, Linda J; Durkin, Andreana; Borgna, Georgianna; Convertino, Carol; Machmer, Elizabeth; Kronenberger, William G; Trani, Alexandra

    2015-10-01

    It is frequently assumed that deaf individuals have superior visual-spatial abilities relative to hearing peers and thus, in educational settings, they are often considered visual learners. There is some empirical evidence to support the former assumption, although it is inconsistent, and apparently none to support the latter. Three experiments examined visual-spatial and related cognitive abilities among deaf individuals who varied in their preferred language modality and use of cochlear implants (CIs) and hearing individuals who varied in their sign language skills. Sign language and spoken language assessments accompanied tasks involving visual-spatial processing, working memory, nonverbal logical reasoning, and executive function. Results were consistent with other recent studies indicating no generalized visual-spatial advantage for deaf individuals and suggested that their performance in that domain may be linked to the strength of their preferred language skills regardless of modality. Hearing individuals performed more strongly than deaf individuals on several visual-spatial and self-reported executive functioning measures, regardless of sign language skills or use of CIs. Findings are inconsistent with assumptions that deaf individuals are visual learners or are superior to hearing individuals across a broad range of visual-spatial tasks. Further, performance of deaf and hearing individuals on the same visual-spatial tasks was associated with differing cognitive abilities, suggesting that different cognitive processes may be involved in visual-spatial processing in these groups. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. Disturbance of visual search by stimulating to posterior parietal cortex in the brain using transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

    In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspect of the functional processing of visual attention. Although the right posterior parietal cortex (PPC) is known to play a role in certain visual search tasks, little is known about the temporal aspect of this area's involvement. Three visual search tasks of differing difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal aspect of the PPC's involvement in visual search, we applied various stimulus onset asynchronies (SOAs) and measured visual search reaction times. The magnetic stimulation was applied over the right or left PPC with a figure-eight coil. The results show that reaction times in the hard feature task are longer than those in the easy feature task. At SOA = 150 ms, target-present reaction times were significantly longer when TMS pulses were applied than in the no-TMS condition. We conclude that the right PPC is involved in visual search at about 150 ms after visual stimulus presentation, and that magnetic stimulation to the right PPC disturbed visual search processing. In contrast, magnetic stimulation to the left PPC had no effect on visual search processing.

  6. How Chinese Semantics Capability Improves Interpretation in Visual Communication

    ERIC Educational Resources Information Center

    Cheng, Chu-Yu; Ou, Yang-Kun; Kin, Ching-Lung

    2017-01-01

    A visual representation involves delivering messages through visually communicated images. The study assumed that semantic recognition can affect visual interpretation ability, and the results showed that students graduating from a general high school achieved better results in semantic recognition and image interpretation tasks than students…

  7. Testing the accuracy of timing reports in visual timing tasks with a consumer-grade digital camera.

    PubMed

    Smyth, Rachael E; Oram Cardy, Janis; Purcell, David

    2017-06-01

    This study tested the accuracy of a visual timing task using a readily available and relatively inexpensive consumer-grade digital camera. A visual inspection time task was recorded using short high-speed video clips, and the timing reported by the task's program was compared to the timing recorded in the video clips. Discrepancies between the two timing reports were investigated further and, based on the display refresh rate, a decision was made as to whether the discrepancy was large enough to affect the results reported by the task. In this particular study, the errors in timing were not large enough to impact the results of the study. The procedure presented in this article offers an alternative method for performing a timing test, which uses readily available hardware and can be used to test the timing in any software program on any operating system and display.
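
    The core of such a check is comparing the program-reported duration with the duration implied by the high-speed video frames, using a tolerance tied to the display refresh interval. The sketch below illustrates this with placeholder frame rates and frame indices; the numbers and function names are assumptions, not the article's procedure.

    ```python
    # Hypothetical check: compare the duration a task program reports with the
    # duration recovered from a high-speed video of the display.
    CAMERA_FPS = 240.0          # placeholder high-speed capture rate
    REFRESH_HZ = 60.0           # placeholder monitor refresh rate

    def video_duration_ms(onset_frame, offset_frame, fps=CAMERA_FPS):
        """Duration implied by the first and last video frames showing the stimulus."""
        return (offset_frame - onset_frame) / fps * 1000.0

    def timing_ok(reported_ms, onset_frame, offset_frame):
        """Flag a discrepancy only if it exceeds one display refresh interval."""
        measured_ms = video_duration_ms(onset_frame, offset_frame)
        tolerance_ms = 1000.0 / REFRESH_HZ
        return abs(reported_ms - measured_ms) <= tolerance_ms, measured_ms

    ok, measured = timing_ok(reported_ms=50.0, onset_frame=120, offset_frame=132)
    print(f"measured {measured:.1f} ms, within one refresh interval: {ok}")
    ```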

  8. Effects of task-irrelevant grouping on visual selection in partial report.

    PubMed

    Lunau, Rasmus; Habekost, Thomas

    2017-07-01

    Perceptual grouping modulates performance in attention tasks such as partial report and change detection. Specifically, grouping of search items according to a task-relevant feature improves the efficiency of visual selection. However, the role of task-irrelevant feature grouping is not clearly understood. In the present study, we investigated whether grouping of targets by a task-irrelevant feature influences performance in a partial-report task. In this task, participants must report as many target letters as possible from a briefly presented circular display. The crucial manipulation concerned the color of the elements in these trials. In the sorted-color condition, the color of the display elements was arranged according to the selection criterion, and in the unsorted-color condition, colors were randomly assigned. The distractor cost was inferred by subtracting performance in partial-report trials from performance in a control condition that had no distractors in the display. Across five experiments, we manipulated trial order, selection criterion, and exposure duration, and found that attentional selectivity was improved in sorted-color trials when the exposure duration was 200 ms and the selection criterion was luminance. This effect was accompanied by impaired selectivity in unsorted-color trials. Overall, the results suggest that the benefit of task-irrelevant color grouping of targets is contingent on the processing locus of the selection criterion.

  9. Task-set inertia and memory-consolidation bottleneck in dual tasks.

    PubMed

    Koch, Iring; Rumiati, Raffaella I

    2006-11-01

    Three dual-task experiments examined the influence of processing a briefly presented visual object for deferred verbal report on performance in an unrelated auditory-manual reaction time (RT) task. RT was increased at short stimulus-onset asynchronies (SOAs) relative to long SOAs, showing that memory consolidation processes can produce a functional processing bottleneck in dual-task performance. In addition, the experiments manipulated the spatial compatibility of the orientation of the visual object and the side of the speeded manual response. This cross-task compatibility produced relative RT benefits only when the instruction for the visual task emphasized overlap at the level of response codes across the task sets (Experiment 1). However, once the effective task set was in place, it continued to produce cross-task compatibility effects even in single-task situations ("ignore" trials in Experiment 2) and when instructions for the visual task did not explicitly require spatial coding of object orientation (Experiment 3). Taken together, the data suggest a considerable degree of task-set inertia in dual-task performance, which is also reinforced by finding costs of switching task sequences (e.g., AC --> BC vs. BC --> BC) in Experiment 3.

  10. Multiple asynchronous stimulus- and task-dependent hierarchies (STDH) within the visual brain's parallel processing systems.

    PubMed

    Zeki, Semir

    2016-10-01

    Results from a variety of sources, some many years old, lead ineluctably to a re-appraisal of the twin strategies of hierarchical and parallel processing used by the brain to construct an image of the visual world. Contrary to common supposition, there are at least three 'feed-forward' anatomical hierarchies that reach the primary visual cortex (V1) and the specialized visual areas outside it, in parallel. These anatomical hierarchies do not conform to the temporal order with which visual signals reach the specialized visual areas through V1. Furthermore, neither the anatomical hierarchies nor the temporal order of activation through V1 predict the perceptual hierarchies. The latter shows that we see (and become aware of) different visual attributes at different times, with colour leading form (orientation) and directional visual motion, even though signals from fast-moving, high-contrast stimuli are among the earliest to reach the visual cortex (of area V5). Parallel processing, on the other hand, is much more ubiquitous than commonly supposed but is subject to a barely noticed but fundamental aspect of brain operations, namely that different parallel systems operate asynchronously with respect to each other and reach perceptual endpoints at different times. This re-assessment leads to the conclusion that the visual brain is constituted of multiple, parallel and asynchronously operating task- and stimulus-dependent hierarchies (STDH); which of these parallel anatomical hierarchies have temporal and perceptual precedence at any given moment is stimulus and task related, and dependent on the visual brain's ability to undertake multiple operations asynchronously. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  11. Real-Time Strategy Video Game Experience and Visual Perceptual Learning.

    PubMed

    Kim, Yong-Hwan; Kang, Dong-Wha; Kim, Dongho; Kim, Hye-Jin; Sasaki, Yuka; Watanabe, Takeo

    2015-07-22

    Visual perceptual learning (VPL) is defined as long-term improvement in performance on a visual-perception task after visual experiences or training. Early studies have found that VPL is highly specific for the trained feature and location, suggesting that VPL is associated with changes in the early visual cortex. However, the generality of visual skills enhancement attributable to action video-game experience suggests that VPL can result from improvement in higher cognitive skills. If so, experience in real-time strategy (RTS) video-game play, which may heavily involve cognitive skills, may also facilitate VPL. To test this hypothesis, we compared VPL between RTS video-game players (VGPs) and non-VGPs (NVGPs) and elucidated underlying structural and functional neural mechanisms. Healthy young human subjects underwent six training sessions on a texture discrimination task. Diffusion-tensor and functional magnetic resonance imaging were performed before and after training. VGPs performed better than NVGPs in the early phase of training. White-matter connectivity between the right external capsule and visual cortex and neuronal activity in the right inferior frontal gyrus (IFG) and anterior cingulate cortex (ACC) were greater in VGPs than NVGPs and were significantly correlated with RTS video-game experience. In both VGPs and NVGPs, there was task-related neuronal activity in the right IFG, ACC, and striatum, which was strengthened after training. These results indicate that RTS video-game experience, associated with changes in higher-order cognitive functions and connectivity between visual and cognitive areas, facilitates VPL in early phases of training. The results support the hypothesis that VPL does not rely on the involvement of visual areas alone. Significance statement: Although early studies found that visual perceptual learning (VPL) is associated with involvement of the visual cortex, generality of visual skills enhancement by action video-game experience

  12. Imagery in the Congenitally Blind: How Visual Are Visual Images?

    ERIC Educational Resources Information Center

    Zimler, Jerome; Keenan, Janice M.

    1983-01-01

    Three experiments compared congenitally blind and sighted adults and children on paired-associate, free-recall, and imaging tasks presumed to involve visual imagery in memory. In all three, blind subjects' performances were remarkably similar to the sighted. Results challenge previous explanations of performance such as Paivio's (1971). (Author/RD)

  13. Adolescents' Visual Preference for Color over Form.

    ERIC Educational Resources Information Center

    Uba, Anselm

    1985-01-01

    Examines whether Nigerian adolescent girls are more likely to demonstrate superior performance in a task involving cultural differences in visual selective attention preference for color over form than boys are. Students (N=100) completed the Visual Selective Attention Color-Form Matching Experiment. Results confirmed the hypothesis. (BH)

  14. Measuring Search Efficiency in Complex Visual Search Tasks: Global and Local Clutter

    ERIC Educational Resources Information Center

    Beck, Melissa R.; Lohrenz, Maura C.; Trafton, J. Gregory

    2010-01-01

    Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e.,…

  15. What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?

    ERIC Educational Resources Information Center

    Brooks, Brian E.; Cooper, Eric E.

    2006-01-01

    Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…

  16. The Influence of Task Involvement on the Use of Learning Strategies.

    ERIC Educational Resources Information Center

    Nolen, Susan Bobbitt

    The relationship between goal orientation and the use of learning strategies and their effects on learning outcomes were investigated. The three goal orientations considered were: (1) task orientation, which involves learning for its own sake; (2) ego orientation, which involves a desire to perform better than others; and (3) work avoidance, which…

  17. Visual search in scenes involves selective and non-selective pathways

    PubMed Central

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734

  18. Using task effort and pupil size to track covert shifts of visual attention independently of a pupillary light reflex.

    PubMed

    Brocher, Andreas; Harbecke, Raphael; Graf, Tim; Memmert, Daniel; Hüttermann, Stefanie

    2018-03-07

    We tested the link between pupil size and the task effort involved in covert shifts of visual attention. The goal of this study was to establish pupil size as a marker of attentional shifting in the absence of luminance manipulations. In three experiments, participants evaluated two stimuli that were presented peripherally, appearing equidistant from and on opposite sides of eye fixation. The angle between eye fixation and the peripherally presented target stimuli varied from 12.5° to 42.5°. The evaluation of more distant stimuli led to poorer performance than did the evaluation of more proximal stimuli throughout our study, confirming that the former required more effort than the latter. In addition, in Experiment 1 we found that pupil size increased with increasing angle and that this effect could not be reduced to the operation of low-level visual processes in the task. In Experiment 2 the pupil dilated more strongly overall when participants evaluated the target stimuli, which required shifts of attention, than when they merely reported on the target's presence versus absence. Both conditions yielded larger pupils for more distant than for more proximal stimuli, however. In Experiment 3, we manipulated task difficulty more directly, by changing the contrast at which the target stimuli were presented. We replicated the results from Experiment 1 only with the high-contrast stimuli. With stimuli of low contrast, ceiling effects in pupil size were observed. Our data show that the link between task effort and pupil size can be used to track the degree to which an observer covertly shifts attention to or detects stimuli in peripheral vision.

  19. Underestimating numerosity of items in visual search tasks.

    PubMed

    Cassenti, Daniel N; Kelley, Troy D; Ghirardelli, Thomas G

    2010-10-01

    Previous research on numerosity judgments addressed attended items, while the present research addresses underestimation for unattended items in visual search tasks. One potential cause of underestimation for unattended items is that estimates of quantity may depend on viewing a large portion of the display within foveal vision. Another theory follows from the occupancy model: estimating quantity of items in greater proximity to one another increases the likelihood of an underestimation error. Three experimental manipulations addressed aspects of underestimation for unattended items: the size of the distracters, the distance of the target from fixation, and whether items were clustered together. Results suggested that the underestimation effect for unattended items was best explained within a Gestalt grouping framework.

  20. Reverse alignment "mirror image" visualization as a laparoscopic training tool improves task performance.

    PubMed

    Dunnican, Ward J; Singh, T Paul; Ata, Ashar; Bendana, Emma E; Conlee, Thomas D; Dolce, Charles J; Ramakrishnan, Rakesh

    2010-06-01

    Reverse alignment (mirror image) visualization is a disconcerting situation occasionally faced during laparoscopic operations. This occurs when the camera faces back at the surgeon in the opposite direction from which the surgeon's body and instruments are facing. Most surgeons will attempt to optimize trocar and camera placement to avoid this situation. The authors' objective was to determine whether the intentional use of reverse alignment visualization during laparoscopic training would improve performance. A standard box trainer was configured for reverse alignment, and 34 medical students and junior surgical residents were randomized to train with either forward alignment (DIRECT) or reverse alignment (MIRROR) visualization. Enrollees were tested on both modalities before and after a 4-week structured training program specific to their modality. Student's t test was used to determine differences in task performance between the 2 groups. Twenty-one participants completed the study (10 DIRECT, 11 MIRROR). There were no significant differences in performance time between DIRECT or MIRROR participants during forward or reverse alignment initial testing. At final testing, DIRECT participants had improved times only in forward alignment performance; they demonstrated no significant improvement in reverse alignment performance. MIRROR participants had significant time improvement in both forward and reverse alignment performance at final testing. Reverse alignment imaging for laparoscopic training improves task performance for both reverse alignment and forward alignment tasks. This may be translated into improved performance in the operating room when faced with reverse alignment situations. Minimal lab training can account for drastic adaptation to this environment.

  1. Task Listing for Piano Technology for the Visually Impaired. Competency-Based Education.

    ERIC Educational Resources Information Center

    Virginia State Dept. of Education, Richmond. Div. of Vocational and Adult Education.

    This task listing was developed for use in Piano Technology, a course offered to visually impaired students at the Virginia School for the Deaf and Blind. The listing is intended to be used with the "Trade and Industrial Education Service Area Resource Guide" in the implementation of competency-based education for this population. The…

  2. Visualizing stressful aspects of repetitive motion tasks and opportunities for ergonomic improvements using computer vision.

    PubMed

    Greene, Runyu L; Azari, David P; Hu, Yu Hen; Radwin, Robert G

    2017-11-01

    Patterns of physical stress exposure are often difficult to measure, and the metrics of variation and the techniques for identifying them are underdeveloped in the practice of occupational ergonomics. Computer vision has previously been used for evaluating repetitive motion tasks for hand activity level (HAL) utilizing conventional 2D videos. The approach was made practical by relaxing the need for high precision, and by adopting a semi-automatic approach for measuring spatiotemporal characteristics of the repetitive task. In this paper, a new method for visualizing task factors, using this computer vision approach, is demonstrated. After videos are made, the analyst selects a region of interest on the hand to track, and the hand location and its associated kinematics are measured for every frame. The visualization method spatially deconstructs and displays the frequency, speed, and duty cycle components of tasks that are part of the threshold limit value for hand activity, in order to identify patterns of exposure associated with specific job factors and to suggest task improvements. The localized variables are plotted as a heat map superimposed over the video and displayed in the context of the task being performed. Based on the intensity of the specific variables used to calculate HAL, we can determine which task factors contribute most to HAL and readily identify the work elements in the task that contribute most to increased injury risk. Work simulations and actual industrial examples are described. This method should help practitioners more readily measure and interpret temporal exposure patterns and identify potential task improvements. Copyright © 2017. Published by Elsevier Ltd.
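
    The heat-map idea described above, coloring the tracked hand location by a kinematic variable and blending it over the video, can be sketched with OpenCV as below. The tracking itself is assumed to happen upstream; the speed scaling, marker size, and function names are assumptions for illustration, not the authors' implementation.

    ```python
    import cv2
    import numpy as np

    def overlay_speed_markers(frames, positions, fps, max_speed_px=200.0):
        """Draw a color-coded marker at each tracked hand position.

        frames: list of BGR images; positions: list of (x, y) pixel coordinates,
        one per frame, assumed to come from an upstream tracker."""
        out = []
        for i, (frame, (x, y)) in enumerate(zip(frames, positions)):
            if i == 0:
                speed = 0.0
            else:
                px, py = positions[i - 1]
                speed = np.hypot(x - px, y - py) * fps            # pixels per second
            # Map speed to a 0-255 level and look up a JET colormap color.
            level = np.uint8([[min(255, int(255 * speed / max_speed_px))]])
            color = cv2.applyColorMap(level, cv2.COLORMAP_JET)[0, 0].tolist()
            annotated = frame.copy()
            cv2.circle(annotated, (int(x), int(y)), 12, color, thickness=-1)
            out.append(cv2.addWeighted(frame, 0.6, annotated, 0.4, 0))
        return out
    ```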

  3. Proactive Interference Does Not Meaningfully Distort Visual Working Memory Capacity Estimates in the Canonical Change Detection Task

    PubMed Central

    Lin, Po-Han; Luck, Steven J.

    2012-01-01

    The change detection task has become a standard method for estimating the storage capacity of visual working memory. Most researchers assume that this task isolates the properties of an active short-term storage system that can be dissociated from long-term memory systems. However, long-term memory storage may influence performance on this task. In particular, memory traces from previous trials may create proactive interference that sometimes leads to errors, thereby reducing estimated capacity. Consequently, the capacity of visual working memory may be higher than is usually thought, and correlations between capacity and other measures of cognition may reflect individual differences in proactive interference rather than individual differences in the capacity of the short-term storage system. Indeed, previous research has shown that change detection performance can be influenced by proactive interference under some conditions. The purpose of the present study was to determine whether the canonical version of the change detection task – in which the to-be-remembered information consists of simple, briefly presented features – is influenced by proactive interference. Two experiments were conducted using methods that ordinarily produce substantial evidence of proactive interference, but no proactive interference was observed. Thus, the canonical version of the change detection task can be used to assess visual working memory capacity with no meaningful influence of proactive interference. PMID:22403556
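
    For reference, storage capacity in this kind of change detection task is commonly estimated with Cowan's K, K = set size × (hit rate − false alarm rate). The sketch below illustrates the computation with made-up trial counts; it is a generic illustration of the measure, not the authors' analysis code.

    ```python
    def cowan_k(set_size, hits, misses, false_alarms, correct_rejections):
        """Cowan's K capacity estimate for a single-probe change detection task:
        K = set size * (hit rate - false alarm rate)."""
        hit_rate = hits / (hits + misses)
        fa_rate = false_alarms / (false_alarms + correct_rejections)
        return set_size * (hit_rate - fa_rate)

    # Example: set size 6 with 80% hits and 15% false alarms gives K of about 3.9.
    print(cowan_k(set_size=6, hits=80, misses=20, false_alarms=15, correct_rejections=85))
    ```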

  4. Proactive interference does not meaningfully distort visual working memory capacity estimates in the canonical change detection task.

    PubMed

    Lin, Po-Han; Luck, Steven J

    2012-01-01

    The change detection task has become a standard method for estimating the storage capacity of visual working memory. Most researchers assume that this task isolates the properties of an active short-term storage system that can be dissociated from long-term memory systems. However, long-term memory storage may influence performance on this task. In particular, memory traces from previous trials may create proactive interference that sometimes leads to errors, thereby reducing estimated capacity. Consequently, the capacity of visual working memory may be higher than is usually thought, and correlations between capacity and other measures of cognition may reflect individual differences in proactive interference rather than individual differences in the capacity of the short-term storage system. Indeed, previous research has shown that change detection performance can be influenced by proactive interference under some conditions. The purpose of the present study was to determine whether the canonical version of the change detection task - in which the to-be-remembered information consists of simple, briefly presented features - is influenced by proactive interference. Two experiments were conducted using methods that ordinarily produce substantial evidence of proactive interference, but no proactive interference was observed. Thus, the canonical version of the change detection task can be used to assess visual working memory capacity with no meaningful influence of proactive interference.

  5. Force-stabilizing synergies in motor tasks involving two actors

    PubMed Central

    Solnik, Stanislaw; Reschechtko, Sasha; Wu, Yen-Hsun; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2015-01-01

    We investigated the ability of two persons to produce force-stabilizing synergies in accurate multi-finger force production tasks under visual feedback on the total force only. The subjects produced a time profile of total force (the sum of two hand forces in one-person tasks and the sum of two subject forces in two-person tasks) consisting of a ramp-up, steady-state, and ramp-down segments; the steady-state segment was interrupted in the middle by a quick force pulse. Analyses of the structure of inter-trial finger force variance, motor equivalence, anticipatory synergy adjustments (ASAs), and the unintentional drift of the sharing pattern were performed. The two-person performance was characterized by a dramatically higher amount of inter-trial variance that did not affect total force, higher finger force deviations that did not affect total force (motor equivalent deviations), shorter ASAs, and larger drift of the sharing pattern. The rate of sharing pattern drift correlated with the initial disparity between the forces produced by the two persons (or two hands). The drift accelerated following the quick force pulse. Our observations show that sensory information on the task-specific performance variable is sufficient for the organization of performance-stabilizing synergies. They suggest, however, that two actors are less likely to follow a single optimization criterion as compared to a single performer. The presence of ASAs in the two-person condition might reflect fidgeting by one or both of the subjects. We discuss the characteristics of the drift in the sharing pattern as reflections of different characteristic times of motion within the sub-spaces that affect and do not affect salient performance variables. PMID:26105756
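
    The "structure of inter-trial finger force variance" analyzed above is typically quantified by decomposing trial-to-trial deviations into a direction that leaves total force unchanged and a direction that changes it (an uncontrolled-manifold style analysis). The sketch below does this for a two-effector total-force task with synthetic data; the normalization and variable names are assumptions for illustration, not the authors' code.

    ```python
    import numpy as np

    def synergy_index(f1, f2):
        """Split inter-trial variance of two effector forces into a component that
        does not change total force (v_ucm) and one that does (v_ort), and return
        a normalized index (v_ucm - v_ort) / (v_ucm + v_ort)."""
        forces = np.column_stack([f1, f2])
        dev = forces - forces.mean(axis=0)              # inter-trial deviations
        ucm_dir = np.array([1.0, -1.0]) / np.sqrt(2.0)  # total force unchanged
        ort_dir = np.array([1.0, 1.0]) / np.sqrt(2.0)   # total force changes
        v_ucm = np.var(dev @ ucm_dir)
        v_ort = np.var(dev @ ort_dir)
        return (v_ucm - v_ort) / (v_ucm + v_ort)

    # Forces that co-vary negatively across trials form a force-stabilizing synergy.
    rng = np.random.default_rng(4)
    common = rng.normal(0, 1.0, 200)
    f1 = 10 + common + rng.normal(0, 0.2, 200)
    f2 = 10 - common + rng.normal(0, 0.2, 200)
    print(f"synergy index = {synergy_index(f1, f2):.2f}")  # close to +1
    ```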

  6. Force-stabilizing synergies in motor tasks involving two actors.

    PubMed

    Solnik, Stanislaw; Reschechtko, Sasha; Wu, Yen-Hsun; Zatsiorsky, Vladimir M; Latash, Mark L

    2015-10-01

    We investigated the ability of two persons to produce force-stabilizing synergies in accurate multi-finger force production tasks under visual feedback on the total force only. The subjects produced a time profile of total force (the sum of two hand forces in one-person tasks and the sum of two subject forces in two-person tasks) consisting of a ramp-up, steady-state, and ramp-down segments; the steady-state segment was interrupted in the middle by a quick force pulse. Analyses of the structure of inter-trial finger force variance, motor equivalence, anticipatory synergy adjustments (ASAs), and the unintentional drift of the sharing pattern were performed. The two-person performance was characterized by a dramatically higher amount of inter-trial variance that did not affect total force, higher finger force deviations that did not affect total force (motor equivalent deviations), shorter ASAs, and larger drift of the sharing pattern. The rate of sharing pattern drift correlated with the initial disparity between the forces produced by the two persons (or two hands). The drift accelerated following the quick force pulse. Our observations show that sensory information on the task-specific performance variable is sufficient for the organization of performance-stabilizing synergies. They suggest, however, that two actors are less likely to follow a single optimization criterion as compared to a single performer. The presence of ASAs in the two-person condition might reflect fidgeting by one or both of the subjects. We discuss the characteristics of the drift in the sharing pattern as reflections of different characteristic times of motion within the subspaces that affect and do not affect salient performance variables.

  7. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    PubMed

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. Visual working memory allows for maintaining such visual information in the mind

  8. Acquisition of a visual discrimination and reversal learning task by Labrador retrievers.

    PubMed

    Lazarowski, Lucia; Foster, Melanie L; Gruen, Margaret E; Sherman, Barbara L; Case, Beth C; Fish, Richard E; Milgram, Norton W; Dorman, David C

    2014-05-01

    Optimal cognitive ability is likely important for military working dogs (MWD) trained to detect explosives. An assessment of a dog's ability to rapidly learn discriminations might be useful in the MWD selection process. In this study, visual discrimination and reversal tasks were used to assess cognitive performance in Labrador retrievers selected for an explosives detection program using a modified version of the Toronto General Testing Apparatus (TGTA), a system developed for assessing performance in a battery of neuropsychological tests in canines. The results of the current study revealed that, as previously found with beagles tested using the TGTA, Labrador retrievers (N = 16) readily acquired both tasks and learned the discrimination task significantly faster than the reversal task. The present study confirmed that the modified TGTA system is suitable for cognitive evaluations in Labrador retriever MWDs and can be used to further explore effects of sex, phenotype, age, and other factors in relation to canine cognition and learning, and may provide an additional screening tool for MWD selection.

  9. Brain circuitries involved in emotional interference task in major depression disorder.

    PubMed

    Chechko, Natalia; Augustin, Marc; Zvyagintsev, Michael; Schneider, Frank; Habel, Ute; Kellermann, Thilo

    2013-07-01

    Emotional and non-emotional Stroop are frequently applied to study major depressive disorder (MDD). The versions of emotional Stroop used in previous studies, unlike the ones employed in the present study, were not based on semantic incongruence, making it difficult to compare the tasks. We used functional magnetic resonance imaging (fMRI) to study the neural and behavioral responses of 18 healthy subjects and 18 subjects with MDD to emotional and non-emotional word-face Stroop tasks based on semantic incompatibility between targets and distractors. In both groups, the distractors triggered significant amounts of interference conflict. A between-groups comparison revealed hypoactivation in MDD during the emotional task in areas supporting conflict resolution (lateral prefrontal cortex, parietal and extrastriate cortices), paralleled by increased response in the right amygdala. Response in the amygdala, however, did not vary between conflicting and non-conflicting trials. While in the emotional (compared to non-emotional) task healthy controls showed considerably stronger involvement of networks related to conflict resolution, in patients the processing differences between the two conflict types were negligible. The patient group was heterogeneous in terms of medication and clinical characteristics. The number of female participants was higher, so gender effects could not be studied or excluded. Whilst healthy controls seemed able to adjust the involvement of the network supporting conflict resolution based on conflict demand, patients appeared to lack this capability. The reduced cortical involvement coupled with increased response of limbic structures might underlie the maladjustment vis-à-vis new demands in depressed mood. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Brief Report: Eye Movements during Visual Search Tasks Indicate Enhanced Stimulus Discriminability in Subjects with PDD

    ERIC Educational Resources Information Center

    Kemner, Chantal; van Ewijk, Lizet; van Engeland, Herman; Hooge, Ignace

    2008-01-01

    Subjects with PDD excel on certain visuo-spatial tasks, amongst which visual search tasks, and this has been attributed to enhanced perceptual discrimination. However, an alternative explanation is that subjects with PDD show a different, more effective search strategy. The present study aimed to test both hypotheses, by measuring eye movements…

  11. The Attentional Boost Effect: Transient increases in attention to one task enhance performance in a second task.

    PubMed

    Swallow, Khena M; Jiang, Yuhong V

    2010-04-01

    Recent work on event perception suggests that perceptual processing increases when events change. An important question is how such changes influence the way other information is processed, particularly during dual-task performance. In this study, participants monitored a long series of distractor items for an occasional target as they simultaneously encoded unrelated background scenes. The appearance of an occasional target could have two opposite effects on the secondary task: It could draw attention away from the second task, or, as a change in the ongoing event, it could improve secondary task performance. Results were consistent with the second possibility. Memory for scenes presented simultaneously with the targets was better than memory for scenes that preceded or followed the targets. This effect was observed when the primary detection task involved visual feature oddball detection, auditory oddball detection, and visual color-shape conjunction detection. It was eliminated when the detection task was omitted, and when it required an arbitrary response mapping. The appearance of occasional, task-relevant events appears to trigger a temporal orienting response that facilitates processing of concurrently attended information (Attentional Boost Effect). Copyright 2009 Elsevier B.V. All rights reserved.

  12. The Attentional Boost Effect: Transient Increases in Attention to One Task Enhance Performance in a Second Task

    PubMed Central

    Swallow, Khena M.; Jiang, Yuhong V.

    2009-01-01

    Recent work on event perception suggests that perceptual processing increases when events change. An important question is how such changes influence the way other information is processed, particularly during dual-task performance. In this study, participants monitored a long series of distractor items for an occasional target as they simultaneously encoded unrelated background scenes. The appearance of an occasional target could have two opposite effects on the secondary task: It could draw attention away from the second task, or, as a change in the ongoing event, it could improve secondary task performance. Results were consistent with the second possibility. Memory for scenes presented simultaneously with the targets was better than memory for scenes that preceded or followed the targets. This effect was observed when the primary detection task involved visual feature oddball detection, auditory oddball detection, and visual color-shape conjunction detection. It was eliminated when the detection task was omitted, and when it required an arbitrary response mapping. The appearance of occasional, task-relevant events appears to trigger a temporal orienting response that facilitates processing of concurrently attended information (Attentional Boost Effect). PMID:20080232

  13. The relation of object naming and other visual speech production tasks: a large scale voxel-based morphometric study.

    PubMed

    Lau, Johnny King L; Humphreys, Glyn W; Douis, Hassan; Balani, Alex; Bickerton, Wai-Ling; Rotshtein, Pia

    2015-01-01

    We report a lesion-symptom mapping analysis of visual speech production deficits in a large group (280) of stroke patients at the sub-acute stage (<120 days post-stroke). Performance on object naming was evaluated alongside three other tests of visual speech production, namely sentence production to a picture, sentence reading and nonword reading. A principal component analysis was performed on all these tests' scores and revealed a 'shared' component that loaded across all the visual speech production tasks and a 'unique' component that isolated object naming from the other three tasks. Regions for the shared component were observed in the left fronto-temporal cortices, fusiform gyrus and bilateral visual cortices. Lesions in these regions linked to both poor object naming and impairment in general visual-speech production. On the other hand, the unique naming component was potentially associated with the bilateral anterior temporal poles, hippocampus and cerebellar areas. This is in line with the models proposing that object naming relies on a left-lateralised language dominant system that interacts with a bilateral anterior temporal network. Neuropsychological deficits in object naming can reflect both the increased demands specific to the task and the more general difficulties in language processing.
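
    As a rough illustration of the principal component analysis step described in this record, the sketch below runs a PCA over a patients-by-tests score matrix and inspects the loadings to see whether a component is shared across all four visual speech production tests or isolates one of them. The score matrix is synthetic and the column order is hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical scores for 280 patients on four tests, in the (assumed) order:
# object naming, sentence production, sentence reading, nonword reading.
rng = np.random.default_rng(1)
scores = rng.normal(loc=50.0, scale=10.0, size=(280, 4))

# Standardize each test, then extract the first two components.
z = StandardScaler().fit_transform(scores)
pca = PCA(n_components=2).fit(z)

print("loadings (components x tests):\n", pca.components_)
print("explained variance ratio:", pca.explained_variance_ratio_)

# Per-patient component scores; in the study these would feed a subsequent
# lesion-symptom mapping analysis (not shown here).
component_scores = pca.transform(z)
```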

  14. Does Proactive Interference Play a Significant Role in Visual Working Memory Tasks?

    ERIC Educational Resources Information Center

    Makovski, Tal

    2016-01-01

    Visual working memory (VWM) is an online memory buffer that is typically assumed to be immune to source memory confusions. Accordingly, the few studies that have investigated the role of proactive interference (PI) in VWM tasks found only a modest PI effect at best. In contrast, a recent study has found a substantial PI effect in that performance…

  15. Attention during active visual tasks: counting, pointing, or simply looking

    PubMed Central

    Wilder, John D.; Schnitzer, Brian S.; Gersch, Timothy M.; Dosher, Barbara A.

    2009-01-01

    Visual attention and saccades are typically studied in artificial situations, with stimuli presented to the steadily fixating eye, or saccades made along specified paths. By contrast, in the real world saccadic patterns are constrained only by the demands of the motivating task. We studied attention during pauses between saccades made to perform 3 free-viewing tasks: counting dots, pointing to the same dots with a visible cursor, or simply looking at the dots using a freely-chosen path. Attention was assessed by the ability to identify the orientation of a briefly-presented Gabor probe. All primary tasks produced losses in identification performance, with counting producing the largest losses, followed by pointing and then looking-only. Looking-only resulted in a 37% increase in contrast thresholds in the orientation task. Counting produced more severe losses that were not overcome by increasing Gabor contrast. Detection or localization of the Gabor, unlike identification, were largely unaffected by any of the primary tasks. Taken together, these results show that attention is required to control saccades, even with freely-chosen paths, but the attentional demands of saccades are less than those attached to tasks such as counting, which have a significant cognitive load. Counting proved to be a highly demanding task that either exhausted momentary processing capacity (e.g., working memory or executive functions), or, alternatively, encouraged a strategy of filtering out all signals irrelevant to counting itself. The fact that the attentional demands of saccades (as well as those of detection/localization) are relatively modest makes it possible to continually adjust both the spatial and temporal pattern of saccades so as to re-allocate attentional resources as needed to handle the complex and multifaceted demands of real-world environments. PMID:18649913

  16. Retinotopic patterns of background connectivity between V1 and fronto-parietal cortex are modulated by task demands

    PubMed Central

    Griffis, Joseph C.; Elkhetali, Abdurahman S.; Burge, Wesley K.; Chen, Richard H.; Visscher, Kristina M.

    2015-01-01

    Attention facilitates the processing of task-relevant visual information and suppresses interference from task-irrelevant information. Modulations of neural activity in visual cortex depend on attention, and likely result from signals originating in fronto-parietal and cingulo-opercular regions of cortex. Here, we tested the hypothesis that attentional facilitation of visual processing is accomplished in part by changes in how brain networks involved in attentional control interact with sectors of V1 that represent different retinal eccentricities. We measured the strength of background connectivity between fronto-parietal and cingulo-opercular regions and different eccentricity sectors in V1 using functional MRI data that were collected while participants performed tasks involving attention to either a centrally presented visual stimulus or a simultaneously presented auditory stimulus. We found that when the visual stimulus was attended, background connectivity between V1 and the left frontal eye fields (FEF), left intraparietal sulcus (IPS), and right IPS varied strongly across different eccentricity sectors in V1 so that foveal sectors were more strongly connected than peripheral sectors. This retinotopic gradient was weaker when the visual stimulus was ignored, indicating that it was driven by attentional effects. Greater task-driven differences between foveal and peripheral sectors in background connectivity to these regions were associated with better performance on the visual task and faster response times on correct trials. These findings are consistent with the notion that attention drives the configuration of task-specific functional pathways that enable the prioritized processing of task-relevant visual information, and show that the prioritization of visual information by attentional processes may be encoded in the retinotopic gradient of connectivity between V1 and fronto-parietal regions. PMID:26106320
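
    A minimal sketch of the kind of eccentricity gradient described here, assuming residual ("background") time series have already been extracted for each V1 eccentricity sector and for a fronto-parietal seed such as FEF; the time series, sector count, and slope summary are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vols, n_sectors = 300, 6

# Synthetic residual time series: index 0 = most foveal sector, 5 = most peripheral.
v1_sectors = rng.standard_normal((n_sectors, n_vols))
fef_seed = rng.standard_normal(n_vols)

# Background connectivity: correlation of each sector with the seed.
conn = np.array([np.corrcoef(sector, fef_seed)[0, 1] for sector in v1_sectors])

# Retinotopic gradient: slope of connectivity over eccentricity. A negative slope
# means foveal sectors are more strongly coupled to the seed than peripheral ones.
eccentricity = np.arange(n_sectors)
slope = np.polyfit(eccentricity, conn, 1)[0]
print(conn.round(3), slope)
```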

  17. Medial Prefrontal Cortex Is Selectively Involved in Response Selection Using Visual Context in the Background

    ERIC Educational Resources Information Center

    Lee, Inah; Shin, Ji Yun

    2012-01-01

    The exact roles of the medial prefrontal cortex (mPFC) in conditional choice behavior are unknown and a visual contextual response selection task was used for examining the issue. Inactivation of the mPFC severely disrupted performance in the task. mPFC inactivations, however, did not disrupt the capability of perceptual discrimination for visual…

  18. Cultural differences in attention: Eye movement evidence from a comparative visual search task.

    PubMed

    Alotaibi, Albandri; Underwood, Geoffrey; Smith, Alastair D

    2017-10-01

    Individual differences in visual attention have been linked to thinking style: analytic thinking (common in individualistic cultures) is thought to promote attention to detail and focus on the most important part of a scene, whereas holistic thinking (common in collectivist cultures) promotes attention to the global structure of a scene and the relationship between its parts. However, this theory is primarily based on relatively simple judgement tasks. We compared groups from Great Britain (an individualist culture) and Saudi Arabia (a collectivist culture) on a more complex comparative visual search task, using simple natural scenes. A higher overall number of fixations for Saudi participants, along with longer search times, indicated less efficient search behaviour than British participants. Furthermore, intra-group comparisons of scan-path for Saudi participants revealed less similarity than within the British group. Together, these findings suggest that there is a positive relationship between an analytic cognitive style and controlled attention. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task

    PubMed Central

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.

    2016-01-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829

  20. Adaptation to recent conflict in the classical color-word Stroop-task mainly involves facilitation of processing of task-relevant information.

    PubMed

    Purmann, Sascha; Pollmann, Stefan

    2015-01-01

    To process information selectively and to continuously fine-tune selectivity of information processing are important abilities for successful goal-directed behavior. One phenomenon thought to represent this fine-tuning is the conflict adaptation effect in interference tasks, i.e., the reduction of interference after an incompatible trial and when incompatible trials are frequent. The neurocognitive mechanisms of these effects are currently only partly understood and results from brain imaging studies so far are mixed. In our study we validate and extend recent findings by examining adaptation to recent conflict in the classical Stroop task using functional magnetic resonance imaging. Consistent with previous research we found increased activity in a fronto-parietal network comprising the medial prefrontal cortex, ventro-lateral prefrontal cortex, and posterior parietal cortex when contrasting incompatible with compatible trials. These areas have been associated with attentional processes and might reflect increased cognitive conflict and resolution thereof during incompatible trials. While carefully controlling for non-attentional sequential effects we found smaller Stroop interference after an incompatible trial (conflict adaptation effect). These behavioral conflict adaptation effects were accompanied by changes in activity in visual color-selective areas (V4, V4α), while there was no modulation by previous trial compatibility in a visual word-selective area (VWFA). Our results provide further evidence for the notion that adaptation to recent conflict seems to be based mainly on enhanced processing of the task-relevant information.

  1. Redefining the L2 Listening Construct within an Integrated Writing Task: Considering the Impacts of Visual-Cue Interpretation and Note-Taking

    ERIC Educational Resources Information Center

    Cubilo, Justin; Winke, Paula

    2013-01-01

    Researchers debate whether listening tasks should be supported by visuals. Most empirical research in this area has been conducted on the effects of visual support on listening comprehension tasks employing multiple-choice questions. The present study seeks to expand this research by investigating the effects of video listening passages (vs.…

  2. The fate of task-irrelevant visual motion: perceptual load versus feature-based attention.

    PubMed

    Taya, Shuichiro; Adams, Wendy J; Graf, Erich W; Lavie, Nilli

    2009-11-18

    We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.

  3. Visual Perception and Reading: New Clues to Patterns of Dysfunction Across Multiple Visual Channels in Developmental Dyslexia.

    PubMed

    Pina Rodrigues, Ana; Rebola, José; Jorge, Helena; Ribeiro, Maria José; Pereira, Marcelino; van Asselen, Marieke; Castelo-Branco, Miguel

    2017-01-01

    The specificity of visual channel impairment in dyslexia has been the subject of much controversy. The purpose of this study was to determine if a differential pattern of impairment can be verified between visual channels in children with developmental dyslexia, and in particular, if the pattern of deficits is more conspicuous in tasks where the magnocellular-dorsal system recruitment prevails. Additionally, we also aimed at investigating the association between visual perception thresholds and reading. In the present case-control study, we compared perception thresholds of 33 children diagnosed with developmental dyslexia and 34 controls in a speed discrimination task, an achromatic contrast sensitivity task, and a chromatic contrast sensitivity task. Moreover, we addressed the correlation between the different perception thresholds and reading performance, as assessed by means of a standardized reading test (accuracy and fluency). Group comparisons were performed by the Mann-Whitney U test, and Spearman's rho was used as a measure of correlation. Results showed that, when compared to controls, children with dyslexia were more impaired in the speed discrimination task, followed by the achromatic contrast sensitivity task, with no impairment in the chromatic contrast sensitivity task. These results are also consistent with the magnocellular theory since the impairment profile of children with dyslexia in the visual threshold tasks reflected the amount of magnocellular-dorsal stream involvement. Moreover, both speed and achromatic thresholds were significantly correlated with reading performance, in terms of accuracy and fluency. Notably, chromatic contrast sensitivity thresholds did not correlate with any of the reading measures. Our evidence stands in favor of a differential visual channel deficit in children with developmental dyslexia and contributes to the debate on the pathophysiology of reading impairments.
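
    The group comparisons and correlations named in this abstract map onto standard nonparametric tests; a short sketch with made-up thresholds and reading scores follows (group sizes match the abstract, all values are synthetic).

```python
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(3)

# Hypothetical speed-discrimination thresholds (higher = worse performance).
dyslexia_thr = rng.normal(2.0, 0.5, size=33)
control_thr = rng.normal(1.5, 0.5, size=34)

# Group comparison with a two-sided Mann-Whitney U test.
u_stat, p_group = mannwhitneyu(dyslexia_thr, control_thr, alternative="two-sided")

# Spearman correlation between thresholds and reading fluency across all children.
thresholds = np.concatenate([dyslexia_thr, control_thr])
reading_fluency = rng.normal(100.0, 15.0, size=thresholds.size)  # placeholder scores
rho, p_rho = spearmanr(thresholds, reading_fluency)

print(u_stat, p_group, rho, p_rho)
```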

  4. The effect of visual-motion time delays on pilot performance in a pursuit tracking task

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Riley, D. R.

    1976-01-01

    A study has been made to determine the effect of visual-motion time delays on pilot performance of a simulated pursuit tracking task. Three interrelated major effects have been identified: task difficulty, motion cues, and time delays. As task difficulty, as determined by airplane handling qualities or target frequency, increases, the amount of acceptable time delay decreases. However, when relatively complete motion cues are included in the simulation, the pilot can maintain his performance for considerably longer time delays. In addition, the number of degrees of freedom of motion employed is a significant factor.

  5. Dual-task interference in visual working memory: A limitation in storage capacity but not in encoding or retrieval

    PubMed Central

    Fougnie, Daryl; Marois, René

    2009-01-01

    The concurrent maintenance of two visual working memory (VWM) arrays can lead to profound interference. It is unclear, however, whether these costs arise from limitations in VWM storage capacity (Fougnie & Marois, 2006), or from interference between the storage of one visual array and encoding or retrieval of another visual array (Cowan & Morey, 2007). Here, we show that encoding a VWM array does not interfere with maintenance of another VWM array unless the two displays exceed maintenance capacity (Experiments 1 and 2). Moreover, manipulating the extent to which encoding and maintenance can interfere with one another had no discernable effect on dual-task performance (Experiment 2). Finally, maintenance of a VWM array was not affected by retrieval of information from another VWM array (Experiment 3). Taken together, these findings demonstrate that dual-task interference between two concurrent VWM tasks is due to a capacity-limited store that is independent from encoding and retrieval processes. PMID:19933566

  6. Shifts in Gamma Phase–Amplitude Coupling Frequency from Theta to Alpha Over Posterior Cortex During Visual Tasks

    PubMed Central

    Voytek, Bradley; Canolty, Ryan T.; Shestyuk, Avgusta; Crone, Nathan E.; Parvizi, Josef; Knight, Robert T.

    2010-01-01

    The phase of ongoing theta (4–8 Hz) and alpha (8–12 Hz) electrophysiological oscillations is coupled to high gamma (80–150 Hz) amplitude, which suggests that low-frequency oscillations modulate local cortical activity. While this phase–amplitude coupling (PAC) has been demonstrated in a variety of tasks and cortical regions, it has not been shown whether task demands differentially affect the regional distribution of the preferred low-frequency coupling to high gamma. To address this issue we investigated multiple-rhythm theta/alpha to high gamma PAC in two subjects with implanted subdural electrocorticographic grids. We show that high gamma amplitude couples to the theta and alpha troughs and demonstrate that, during visual tasks, alpha/high gamma coupling preferentially increases in visual cortical regions. These results suggest that low-frequency phase to high-frequency amplitude coupling is modulated by behavioral task and may reflect a mechanism for selection between communicating neuronal networks. PMID:21060716
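
    One common way to quantify the theta/alpha-to-high-gamma coupling described here is a mean-vector-length modulation index computed from the Hilbert phase of the low-frequency band and the Hilbert amplitude of the high-gamma band. The sketch below shows that computation on a synthetic trace; the sampling rate, filter bands, and signal are illustrative and are not the recordings analysed in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def pac_mvl(signal, fs, phase_band=(8.0, 12.0), amp_band=(80.0, 150.0)):
    """Mean-vector-length modulation index between low-frequency phase and
    high-gamma amplitude."""
    phase = np.angle(hilbert(bandpass(signal, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(signal, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Synthetic 10 s trace sampled at 1 kHz in which high-gamma amplitude waxes and
# wanes with the alpha cycle.
fs = 1000
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)
alpha = np.sin(2 * np.pi * 10 * t)
gamma = (1 + alpha) * np.sin(2 * np.pi * 100 * t)
trace = alpha + 0.5 * gamma + 0.1 * rng.standard_normal(t.size)

# Coupling should be stronger for the alpha band than for the theta band here.
print(pac_mvl(trace, fs, phase_band=(8, 12)), pac_mvl(trace, fs, phase_band=(4, 8)))
```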

  7. Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2012-01-01

    Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…

  8. Medications influencing central cholinergic pathways affect fixation stability, saccadic response time and associated eye movement dynamics during a temporally-cued visual reaction time task.

    PubMed

    Naicker, Preshanta; Anoopkumar-Dukie, Shailendra; Grant, Gary D; Modenese, Luca; Kavanagh, Justin J

    2017-02-01

    Anticholinergic medications largely exert their effects due to actions on the muscarinic receptor, which mediates the functions of acetylcholine in the peripheral and central nervous systems. In the central nervous system, acetylcholine plays an important role in the modulation of movement. This study investigated the effects of over-the-counter medications with varying degrees of central anticholinergic properties on fixation stability, saccadic response time and the dynamics associated with this eye movement during a temporally-cued visual reaction time task, in order to establish the significance of central cholinergic pathways in influencing eye movements during reaction time tasks. Twenty-two participants were recruited into the placebo-controlled, human double-blind, four-way crossover investigation. Eye tracking technology recorded eye movements while participants reacted to visual stimuli following temporally informative and uninformative cues. The task was performed pre-ingestion as well as 0.5 and 2 h post-ingestion of promethazine hydrochloride (strong centrally acting anticholinergic), hyoscine hydrobromide (moderate centrally acting anticholinergic), hyoscine butylbromide (anticholinergic devoid of central properties) and a placebo. Promethazine decreased fixation stability during the reaction time task. In addition, promethazine was the only drug to increase saccadic response time during temporally informative and uninformative cued trials, whereby effects on response time were more pronounced following temporally informative cues. Promethazine also decreased saccadic amplitude and increased saccadic duration during the temporally-cued reaction time task. Collectively, the results of the study highlight the significant role that central cholinergic pathways play in the control of eye movements during tasks that involve stimulus identification and motor responses following temporal cues.

  9. Visual Sensory and Visual-Cognitive Function and Rate of Crash and Near-Crash Involvement Among Older Drivers Using Naturalistic Driving Data

    PubMed Central

    Huisingh, Carrie; Levitan, Emily B.; Irvin, Marguerite R.; MacLennan, Paul; Wadley, Virginia; Owsley, Cynthia

    2017-01-01

    Purpose An innovative methodology using naturalistic driving data was used to examine the association between visual sensory and visual-cognitive function and rates of future crash or near-crash involvement among older drivers. Methods The Strategic Highway Research Program (SHRP2) Naturalistic Driving Study was used for this prospective analysis. The sample consisted of N = 659 drivers aged ≥70 years and study participation lasted 1 or 2 years for most participants. Distance and near visual acuity, contrast sensitivity, peripheral vision, visual processing speed, and visuospatial skills were assessed at baseline. Crash and near-crash involvement were based on video recordings and vehicle sensors. Poisson regression models were used to generate crude and adjusted rate ratios (RRs) and 95% confidence intervals, while accounting for person-miles of travel. Results After adjustment, severe impairment of the useful field of view (RR = 1.33) was associated with an increased rate of near-crash involvement. Crash, severe crash, and at-fault crash involvement were associated with impaired contrast sensitivity in the worse eye (RRs = 1.38, 1.54, and 1.44, respectively) and far peripheral field loss in both eyes (RRs = 1.74, 2.32, and 1.73, respectively). Conclusions Naturalistic driving data suggest that contrast sensitivity in the worse eye and far peripheral field loss in both eyes elevate the rates of crash involvement, and impaired visual processing speed elevates rates of near-crash involvement among older drivers. Naturalistic driving data may ultimately be critical for understanding the relationship between vision and driving safety. PMID:28605807
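
    The rate ratios reported above come from Poisson models of event counts with travel exposure as an offset. A minimal sketch of that model form using statsmodels is given below; the data frame, column names, and covariates are hypothetical and are not the SHRP2 data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical per-driver data: crash counts, an impaired contrast sensitivity
# indicator, age, and person-miles of travel accumulated during follow-up.
rng = np.random.default_rng(5)
n = 659
df = pd.DataFrame({
    "crashes": rng.poisson(1.0, size=n),
    "impaired_cs": rng.integers(0, 2, size=n),
    "age": rng.integers(70, 90, size=n),
    "miles": rng.uniform(1000.0, 20000.0, size=n),
})

# Poisson regression with log(miles) as an offset, so exponentiated coefficients
# are rate ratios per unit of travel exposure.
fit = smf.glm("crashes ~ impaired_cs + age", data=df,
              family=sm.families.Poisson(),
              offset=np.log(df["miles"])).fit()

print(np.exp(fit.params))       # rate ratios
print(np.exp(fit.conf_int()))   # 95% confidence intervals for the rate ratios
```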

  10. Visual performance on detection tasks with double-targets of the same and different difficulty.

    PubMed

    Chan, Alan H S; Courtney, Alan J; Ma, C W

    2002-10-20

    This paper reports a study measuring horizontal visual sensitivity limits for 16 subjects in single-target and double-targets detection tasks. Two phases of tests were conducted in the double-targets task; targets of the same difficulty were tested in phase one while targets of different difficulty were tested in phase two. The range of sensitivity for the double-targets test was found to be smaller than that for the single-target test in both the same and different target difficulty cases. The presence of another target was found to affect performance to a marked degree. The interference effect of the difficult target on detection of the easy one was greater than that of the easy one on detection of the difficult one. A performance decrement was noted when percentage of correct detections was plotted against target eccentricity in both the single-target and double-targets tests. Nevertheless, the non-significant correlation between performance on the two tasks demonstrated that the ability to detect double targets could not be predicted quantitatively from single-target data. This indicates probable problems in generalizing single-target visual lobe data to multiple-target situations. Also, lobe area values obtained from single-target measurements cannot be applied in a mathematical model of situations with multiple occurrences of targets.

  11. Autistic fluid intelligence: Increased reliance on visual functional connectivity with diminished modulation of coupling by task difficulty.

    PubMed

    Simard, Isabelle; Luck, David; Mottron, Laurent; Zeffiro, Thomas A; Soulières, Isabelle

    2015-01-01

    Different test types lead to different intelligence estimates in autism, as illustrated by the fact that autistic individuals obtain higher scores on the Raven's Progressive Matrices (RSPM) test than they do on the Wechsler IQ, in contrast to relatively similar performance on both tests in non-autistic individuals. However, the cerebral processes underlying these differences are not well understood. This study investigated whether activity in the fluid "reasoning" network, which includes frontal, parietal, temporal and occipital regions, is differently modulated by task complexity in autistic and non-autistic individuals during the RSPM. For this purpose, we used fMRI to study autistic and non-autistic participants solving the 60 RSPM problems, focusing on regions and networks involved in reasoning complexity. As complexity increased, activity in the left superior occipital gyrus and the left middle occipital gyrus increased for autistic participants, whereas non-autistic participants showed increased activity in the left middle frontal gyrus and bilateral precuneus. Using psychophysiological interaction analyses (PPI), we then examined in which regions functional connectivity increased as a function of reasoning complexity. PPI analyses revealed greater connectivity in autistic, compared to non-autistic participants, between the left inferior occipital gyrus and areas in the left superior frontal gyrus, right superior parietal lobe, right middle occipital gyrus and right inferior temporal gyrus. We also observed generally less modulation of the reasoning network as complexity increased in autistic participants. These results suggest that autistic individuals, when confronted with increasing task complexity, rely mainly on visuospatial processes when solving more complex matrices. In addition to the now well-established enhanced activity observed in visual areas in a range of tasks, these results suggest that the enhanced reliance on visual perception has a…

  12. Autistic fluid intelligence: Increased reliance on visual functional connectivity with diminished modulation of coupling by task difficulty

    PubMed Central

    Simard, Isabelle; Luck, David; Mottron, Laurent; Zeffiro, Thomas A.; Soulières, Isabelle

    2015-01-01

    Different test types lead to different intelligence estimates in autism, as illustrated by the fact that autistic individuals obtain higher scores on the Raven's Progressive Matrices (RSPM) test than they do on the Wechsler IQ, in contrast to relatively similar performance on both tests in non-autistic individuals. However, the cerebral processes underlying these differences are not well understood. This study investigated whether activity in the fluid “reasoning” network, which includes frontal, parietal, temporal and occipital regions, is differently modulated by task complexity in autistic and non-autistic individuals during the RSPM. For this purpose, we used fMRI to study autistic and non-autistic participants solving the 60 RSPM problems, focusing on regions and networks involved in reasoning complexity. As complexity increased, activity in the left superior occipital gyrus and the left middle occipital gyrus increased for autistic participants, whereas non-autistic participants showed increased activity in the left middle frontal gyrus and bilateral precuneus. Using psychophysiological interaction analyses (PPI), we then examined in which regions functional connectivity increased as a function of reasoning complexity. PPI analyses revealed greater connectivity in autistic, compared to non-autistic participants, between the left inferior occipital gyrus and areas in the left superior frontal gyrus, right superior parietal lobe, right middle occipital gyrus and right inferior temporal gyrus. We also observed generally less modulation of the reasoning network as complexity increased in autistic participants. These results suggest that autistic individuals, when confronted with increasing task complexity, rely mainly on visuospatial processes when solving more complex matrices. In addition to the now well-established enhanced activity observed in visual areas in a range of tasks, these results suggest that the enhanced reliance on visual perception has a…
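
    As a rough illustration of the psychophysiological interaction (PPI) analyses mentioned in the two records above, the sketch below builds a PPI regressor as the product of a seed time course and a task regressor and fits it alongside the main effects. Everything is synthetic, and the haemodynamic deconvolution used in standard fMRI PPI pipelines is deliberately omitted.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_vols = 200

# Physiological regressor: seed time course (e.g., left inferior occipital gyrus).
seed = rng.standard_normal(n_vols)
# Psychological regressor: reasoning complexity, here a simple high/low contrast.
task = np.repeat([1.0, -1.0], n_vols // 2)
# The PPI regressor proper: the seed-by-task interaction.
ppi = seed * task

# Target-region time course with a built-in interaction effect, for illustration.
target = 0.5 * seed + 0.3 * task + 0.8 * ppi + rng.standard_normal(n_vols)

X = sm.add_constant(np.column_stack([seed, task, ppi]))
fit = sm.OLS(target, X).fit()
# Parameter order: intercept, seed, task, ppi; the ppi weight indexes
# complexity-dependent coupling between the seed and target regions.
print(fit.params)
```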

  13. Do Multielement Visual Tracking and Visual Search Draw Continuously on the Same Visual Attention Resources?

    ERIC Educational Resources Information Center

    Alvarez, George A.; Horowitz, Todd S.; Arsenio, Helga C.; DiMase, Jennifer S.; Wolfe, Jeremy M.

    2005-01-01

    Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers…

  14. Visual cortex activation in late-onset, Braille naive blind individuals: an fMRI study during semantic and phonological tasks with heard words.

    PubMed

    Burton, Harold; McLaren, Donald G

    2006-01-09

    Visual cortex activity in the blind has been shown in Braille-literate people, which raises the question of whether Braille literacy influences cross-modal reorganization. We used fMRI to examine visual cortex activation during semantic and phonological tasks with auditory presentation of words in two late-onset blind individuals who lacked Braille literacy. Multiple visual cortical regions were activated in the Braille naive individuals. Positive BOLD responses were noted in lower tier visuotopic areas (e.g., V1, V2, VP, and V3) and several higher tier visual areas (e.g., V4v, V8, and BA 37). Activity was more extensive and cross-correlation magnitudes were greater during the semantic compared to the phonological task. These results with Braille naive individuals plausibly suggest that visual deprivation alone induces visual cortex reorganization. Cross-modal reorganization of lower tier visual areas may be recruited by developing skills in attending to selected non-visual inputs (e.g., Braille literacy, enhanced auditory skills). Such learning might strengthen remote connections with multisensory cortical areas. Of necessity, the Braille naive participants must attend to auditory stimulation for language. We hypothesize that learning to attend to non-visual inputs probably strengthens the remaining active synapses following visual deprivation, and thereby increases cross-modal activation of lower tier visual areas when performing highly demanding non-visual tasks, of which reading Braille is just one example.

  15. Complex Visual Adaptations in Squid for Specific Tasks in Different Environments

    PubMed Central

    Chung, Wen-Sung; Marshall, N. Justin

    2017-01-01

    In common with their major competitors, the fish, squid are fast-moving visual predators that live over a great range of depths in the ocean. Both squid and fish show a variety of adaptations with respect to optical properties, receptors and their underlying neural circuits, and these adaptations are often linked to the light conditions of their specific niche. In contrast to the extensive investigations of adaptive strategies in fish vision in response to the varying quantity and quality of available light, our knowledge of visual adaptations in squid remains sparse. This study therefore undertook a comparative study of visual adaptations and capabilities in a number of squid species collected between 0 and 1,200 m. Histology, magnetic resonance imagery (MRI), and depth distributions were used to compare brains, eyes, and visual capabilities, revealing that squid eye designs reflect lifestyle and the versatility of the neural architecture in their visual systems. Tubular eyes and two types of regional retinal deformation were identified and these eye modifications are strongly associated with specific directional visual tasks. In addition, a combination of conventional and immuno-histology demonstrated a new form of a complex retina possessing two inner segment layers in two mid-water squid species that rhythmically move across a broad range of depths (50–1,000 m). In contrast to their relatives with the regular single-layered inner segment retina, which live in the upper mesopelagic layer (50–400 m), the new form of retinal interneuronal layering suggests that the visual sensitivity of these two long-distance vertical migrants may increase in response to dimmer environments. PMID:28286484

  16. Visuospatial anatomy comprehension: the role of spatial visualization ability and problem-solving strategies.

    PubMed

    Nguyen, Ngan; Mulla, Ali; Nelson, Andrew J; Wilson, Timothy D

    2014-01-01

    The present study explored the problem-solving strategies of high- and low-spatial visualization ability learners on a novel spatial anatomy task to determine whether differences in strategies contribute to differences in task performance. The results of this study provide further insights into the processing commonalities and differences among learners beyond the classification of spatial visualization ability alone, and help elucidate what, if anything, high- and low-spatial visualization ability learners do differently while solving spatial anatomy task problems. Forty-two students completed a standardized measure of spatial visualization ability, a novel spatial anatomy task, and a questionnaire involving personal self-analysis of the processes and strategies used while performing the spatial anatomy task. Strategy reports revealed that there were different ways students approached answering the spatial anatomy task problems. However, chi-square test analyses established that differences in problem-solving strategies did not contribute to differences in task performance. Therefore, underlying spatial visualization ability is the main source of variation in spatial anatomy task performance, irrespective of strategy. In addition to scoring higher and spending less time on the anatomy task, participants with high spatial visualization ability were also more accurate when solving the task problems. © 2013 American Association of Anatomists.
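
    The chi-square test analyses mentioned above amount to testing a strategy-by-performance contingency table for independence; a short sketch with invented counts follows.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = self-reported strategy,
# columns = task performance split at the median (low, high).
table = np.array([
    [7, 6],   # e.g., mental-rotation strategy
    [8, 7],   # e.g., landmark/feature strategy
    [6, 8],   # e.g., mixed strategy
])

chi2, p, dof, expected = chi2_contingency(table)
# A non-significant p would indicate that strategy choice does not account for
# differences in task performance.
print(chi2, p, dof)
```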

  17. Alpha-Band Rhythms in Visual Task Performance: Phase-Locking by Rhythmic Sensory Stimulation

    PubMed Central

    de Graaf, Tom A.; Gross, Joachim; Paterson, Gavin; Rusch, Tessa; Sack, Alexander T.; Thut, Gregor

    2013-01-01

    Oscillations are an important aspect of neuronal activity. Interestingly, oscillatory patterns are also observed in behaviour, such as in visual performance measures after the presentation of a brief sensory event in the visual or another modality. These oscillations in visual performance cycle at the typical frequencies of brain rhythms, suggesting that perception may be closely linked to brain oscillations. We here investigated this link for a prominent rhythm of the visual system (the alpha-rhythm, 8–12 Hz) by applying rhythmic visual stimulation at alpha-frequency (10.6 Hz), known to lead to a resonance response in visual areas, and testing its effects on subsequent visual target discrimination. Our data show that rhythmic visual stimulation at 10.6 Hz: 1) has specific behavioral consequences, relative to stimulation at control frequencies (3.9 Hz, 7.1 Hz, 14.2 Hz), and 2) leads to alpha-band oscillations in visual performance measures, that 3) correlate in precise frequency across individuals with resting alpha-rhythms recorded over parieto-occipital areas. The most parsimonious explanation for these three findings is entrainment (phase-locking) of ongoing perceptually relevant alpha-band brain oscillations by rhythmic sensory events. These findings are in line with occipital alpha-oscillations underlying periodicity in visual performance, and suggest that rhythmic stimulation at frequencies of intrinsic brain-rhythms can be used to reveal influences of these rhythms on task performance to study their functional roles. PMID:23555873

  18. Low Target Prevalence Is a Stubborn Source of Errors in Visual Search Tasks

    ERIC Educational Resources Information Center

    Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

    2007-01-01

    In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1%-2%) than at high prevalence (50%). Unfortunately, low prevalence is…

  19. The neural substrates of deliberative decision making: contrasting effects of hippocampus lesions on performance and vicarious trial-and-error behavior in a spatial memory task and a visual discrimination task

    PubMed Central

    Bett, David; Allison, Elizabeth; Murdoch, Lauren H.; Kaefer, Karola; Wood, Emma R.; Dudchenko, Paul A.

    2012-01-01

    Vicarious trial-and-errors (VTEs) are back-and-forth movements of the head exhibited by rodents and other animals when faced with a decision. These behaviors have recently been associated with prospective sweeps of hippocampal place cell firing, and thus may reflect a rodent model of deliberative decision-making. The aim of the current study was to test whether the hippocampus is essential for VTEs in a spatial memory task and in a simple visual discrimination (VD) task. We found that lesions of the hippocampus with ibotenic acid produced a significant impairment in the accuracy of choices in a serial spatial reversal (SR) task. In terms of VTEs, whereas sham-lesioned animals engaged in more VTE behavior prior to identifying the location of the reward as opposed to repeated trials after it had been located, the lesioned animals failed to show this difference. In contrast, damage to the hippocampus had no effect on acquisition of a VD or on the VTEs seen in this task. For both lesion and sham-lesion animals, adding an additional choice to the VD increased the number of VTEs and decreased the accuracy of choices. Together, these results suggest that the hippocampus may be specifically involved in VTE behavior during spatial decision making. PMID:23115549

  20. Coding the presence of visual objects in a recurrent neural network of visual cortex.

    PubMed

    Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard

    2007-01-01

    Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.

  1. Task relevance modulates the cortical representation of feature conjunctions in the target template.

    PubMed

    Reeder, Reshanne R; Hanke, Michael; Pollmann, Stefan

    2017-07-03

    Little is known about the cortical regions involved in representing task-related content in preparation for visual task performance. Here we used representational similarity analysis (RSA) to investigate the BOLD response pattern similarity between task relevant and task irrelevant feature dimensions during conjunction viewing and target template maintenance prior to visual search. Subjects were cued to search for a spatial frequency (SF) or orientation of a Gabor grating and we measured BOLD signal during cue and delay periods before the onset of a search display. RSA of delay period activity revealed that widespread regions in frontal, posterior parietal, and occipitotemporal cortices showed general representational differences between task relevant and task irrelevant dimensions (e.g., orientation vs. SF). In contrast, RSA of cue period activity revealed sensory-related representational differences between cue images (regardless of task) at the occipital pole and additionally in the frontal pole. Our data show that task and sensory information are represented differently during viewing and during target template maintenance, and that task relevance modulates the representation of visual information across the cortex.
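
    A bare-bones version of the representational similarity logic described here is to build a correlation-distance representational dissimilarity matrix (RDM) from condition-by-voxel response patterns and compare it with a model RDM that codes the task-relevant distinction. The sketch below does this on synthetic patterns; the condition structure and region are assumptions, not the study's actual design.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(7)

# Hypothetical delay-period patterns for one ROI: 8 conditions x 120 voxels,
# e.g., cued dimension (orientation vs. SF) crossed with four exemplars.
patterns = rng.standard_normal((8, 120))

# Neural RDM: 1 - Pearson correlation between condition patterns.
neural_rdm = squareform(pdist(patterns, metric="correlation"))

# Model RDM coding "same task-relevant dimension" (0) vs. "different" (1).
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
model_rdm = (labels[:, None] != labels[None, :]).astype(float)

# Compare neural and model RDMs over the upper-triangular entries.
iu = np.triu_indices_from(neural_rdm, k=1)
rho, p = spearmanr(neural_rdm[iu], model_rdm[iu])
print(rho, p)
```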

  2. Working memory load and distraction: dissociable effects of visual maintenance and cognitive control.

    PubMed

    Konstantinou, Nikos; Beal, Eleanor; King, Jean-Remi; Lavie, Nilli

    2014-10-01

    We establish a new dissociation between the roles of working memory (WM) cognitive control and visual maintenance in selective attention as measured by the efficiency of distractor rejection. The extent to which focused selective attention can prevent distraction has been shown to critically depend on the level and type of load involved in the task. High perceptual load that consumes perceptual capacity leads to reduced distractor processing, whereas high WM load that reduces WM ability to exert priority-based executive cognitive control over the task results in increased distractor processing (e.g., Lavie, Trends in Cognitive Sciences, 9(2), 75-82, 2005). WM also serves to maintain task-relevant visual representations, and such visual maintenance is known to recruit the same sensory cortices as those involved in perception (e.g., Pasternak & Greenlee, Nature Reviews Neuroscience, 6(2), 97-107, 2005). These findings led us to hypothesize that loading WM with visual maintenance would reduce visual capacity involved in perception, thus resulting in reduced distractor processing, similar to perceptual load and opposite to WM cognitive control load. Distractor processing was assessed in a response competition task, presented during the memory interval (or during encoding; Experiment 1a) of a WM task. Loading visual maintenance or encoding by increased set size for a memory sample of shapes, colors, and locations led to reduced distractor response competition effects. In contrast, loading WM cognitive control with verbal rehearsal of a random letter set led to increased distractor effects. These findings confirm load theory predictions and provide a novel functional distinction between the roles of WM maintenance and cognitive control in selective attention.

  3. Visual Search Performance in the Autism Spectrum II: The Radial Frequency Search Task with Additional Segmentation Cues

    ERIC Educational Resources Information Center

    Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.

    2010-01-01

    The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial…

  4. From Foreground to Background: How Task-Neutral Context Influences Contextual Cueing of Visual Search.

    PubMed

    Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J; Shi, Zhuanghua

    2016-01-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor 'L's and a target 'T', was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search.

  5. Increased Complexities in Visual Search Behavior in Skilled Players for a Self-Paced Aiming Task

    PubMed Central

    Chia, Jingyi S.; Burns, Stephen F.; Barrett, Laura A.; Chow, Jia Y.

    2017-01-01

    The badminton serve is an important shot for winning a rally in a match. It combines good technique with the ability to accurately integrate visual information from the shuttle, racket, opponent, and intended landing point. Despite its importance and repercussive nature, to date no study has looked at the visual search behaviors during badminton service in the singles discipline. Unlike anticipatory tasks (e.g., shot returns), the serve presents an opportunity to explore the role of visual search behaviors in movement control for self-paced tasks. Accordingly, this study examined skill-related differences in visual behavior during the badminton singles serve. Skilled (n = 12) and less skilled (n = 12) participants performed 30 serves to a live opponent, while real-time eye movements were captured using a mobile gaze registration system. Frame-by-frame analyses of 662 serves were made and the skilled players took a longer preparatory time before serving. Visual behavior of the skilled players was characterized by a significantly greater number of fixations on more areas of interest per trial than the less skilled. In addition, the skilled players spent a significantly longer time fixating on the court and net, whereas the less skilled players found the shuttle to be more informative. Quiet eye (QE) duration (indicative of superior sports performance), however, did not differ significantly between groups, which has implications for the perceived importance of QE in the badminton serve. Moreover, while visual behavior differed by skill level, considerable individual differences were also observed, especially within the skilled players. This highlights the need for not just group-level analyses but also individualized analysis for a more accurate representation of visual behavior. Findings from this study thus provide insight into the possible visual search strategies as players serve in net-barrier games. Moreover, this study highlighted an important aspect of badminton relating

  6. Increased Complexities in Visual Search Behavior in Skilled Players for a Self-Paced Aiming Task.

    PubMed

    Chia, Jingyi S; Burns, Stephen F; Barrett, Laura A; Chow, Jia Y

    2017-01-01

    The badminton serve is an important shot for winning a rally in a match. It combines good technique with the ability to accurately integrate visual information from the shuttle, racket, opponent, and intended landing point. Despite its importance and repercussive nature, to date no study has looked at the visual search behaviors during badminton service in the singles discipline. Unlike anticipatory tasks (e.g., shot returns), the serve presents an opportunity to explore the role of visual search behaviors in movement control for self-paced tasks. Accordingly, this study examined skill-related differences in visual behavior during the badminton singles serve. Skilled (n = 12) and less skilled (n = 12) participants performed 30 serves to a live opponent, while real-time eye movements were captured using a mobile gaze registration system. Frame-by-frame analyses of 662 serves were made and the skilled players took a longer preparatory time before serving. Visual behavior of the skilled players was characterized by a significantly greater number of fixations on more areas of interest per trial than the less skilled. In addition, the skilled players spent a significantly longer time fixating on the court and net, whereas the less skilled players found the shuttle to be more informative. Quiet eye (QE) duration (indicative of superior sports performance), however, did not differ significantly between groups, which has implications for the perceived importance of QE in the badminton serve. Moreover, while visual behavior differed by skill level, considerable individual differences were also observed, especially within the skilled players. This highlights the need for not just group-level analyses but also individualized analysis for a more accurate representation of visual behavior. Findings from this study thus provide insight into the possible visual search strategies as players serve in net-barrier games. Moreover, this study highlighted an important aspect of badminton

  7. Visual cortex activation in late-onset, Braille naive blind individuals: An fMRI study during semantic and phonological tasks with heard words

    PubMed Central

    Burton, Harold; McLaren, Donald G.

    2013-01-01

    Visual cortex activity in the blind has been shown in Braille-literate people, which raises the question of whether Braille literacy influences cross-modal reorganization. We used fMRI to examine visual cortex activation during semantic and phonological tasks with auditory presentation of words in two late-onset blind individuals who lacked Braille literacy. Multiple visual cortical regions were activated in the Braille-naive individuals. Positive BOLD responses were noted in lower-tier visuotopic (e.g., V1, V2, VP, and V3) and several higher-tier visual areas (e.g., V4v, V8, and BA 37). Activity was more extensive and cross-correlation magnitudes were greater during the semantic compared to the phonological task. These results with Braille-naive individuals plausibly suggest that visual deprivation alone induces visual cortex reorganization. Cross-modal reorganization of lower-tier visual areas may be recruited by developing skills in attending to selected non-visual inputs (e.g., Braille literacy, enhanced auditory skills). Such learning might strengthen remote connections with multisensory cortical areas. Of necessity, the Braille-naive participants must attend to auditory stimulation for language. We hypothesize that learning to attend to non-visual inputs probably strengthens the remaining active synapses following visual deprivation, and thereby increases cross-modal activation of lower-tier visual areas when performing highly demanding non-visual tasks, of which reading Braille is just one example. PMID:16198053

  8. Sex differences in retention after a visual or a spatial discrimination learning task in brood parasitic shiny cowbirds.

    PubMed

    Astié, Andrea A; Scardamaglia, Romina C; Muzio, Rubén N; Reboreda, Juan C

    2015-10-01

    Females of avian brood parasites, like the shiny cowbird (Molothrus bonariensis), locate host nests and on subsequent days return to parasitize them. This ecological pressure for remembering the precise location of multiple host nests may have selected for superior spatial memory abilities. We tested the hypothesis that shiny cowbirds show sex differences in spatial memory abilities associated with sex differences in host nest searching behavior and relative hippocampus volume. We evaluated sex differences during acquisition, reversal, and retention after extinction in a visual and a spatial discrimination learning task. Contrary to our prediction, females did not outperform males in the spatial task in either the acquisition or the reversal phase. Similarly, there were no sex differences in either phase in the visual task. During extinction, in both tasks the retention of females was significantly higher than expected by chance up to 50 days after the last rewarded session (∼85-90% of the trials with correct responses), but the performance of males at that time did not differ from that expected by chance. This last result shows a long-term memory capacity of female shiny cowbirds, which were able to remember information learned using either spatial or visual cues after a long retention interval. Copyright © 2015 Elsevier B.V. All rights reserved.
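
    The above-chance retention comparison reported here is the kind of contrast a simple binomial test against chance captures for a two-alternative discrimination. The sketch below is a generic illustration with made-up numbers (roughly the ~85-90% correct figure quoted), not the authors' statistics.

      # Minimal sketch: is retention performance above chance (p = 0.5) in a
      # two-alternative discrimination session? Hypothetical numbers only.
      from scipy.stats import binomtest

      n_trials = 40
      n_correct = 35                    # ~87% correct at the 50-day retention test
      result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
      print(f"{n_correct}/{n_trials} correct, one-sided p = {result.pvalue:.2g}")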

  9. An fMRI investigation into the effect of preceding stimuli during visual oddball tasks.

    PubMed

    Fajkus, Jiří; Mikl, Michal; Shaw, Daniel Joel; Brázdil, Milan

    2015-08-15

    This study investigates the modulatory effect of stimulus sequence on neural responses to novel stimuli. A group of 34 healthy volunteers underwent event-related functional magnetic resonance imaging while performing a three-stimulus visual oddball task, involving randomly presented frequent stimuli and two types of infrequent stimuli - targets and distractors. We developed a modified categorization of rare stimuli that incorporated the type of preceding rare stimulus, and analyzed the event-related functional data according to this sequence categorization; specifically, we explored hemodynamic response modulation associated with increasing rare-to-rare stimulus interval. For two consecutive targets, a modulation of brain function was evident throughout posterior midline and lateral temporal cortex, while responses to targets preceded by distractors were modulated in a widely distributed fronto-parietal system. As for distractors that follow targets, brain function was modulated throughout a set of posterior brain structures. For two successive distractors, however, no significant modulation was observed, which is consistent with previous studies and our primary hypothesis. The addition of the aforementioned technique extends the possibilities of conventional oddball task analysis, enabling researchers to explore the effects of the whole range of rare stimuli intervals. This methodology can be applied to study a wide range of associated cognitive mechanisms, such as decision making, expectancy and attention. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Visual search deficits in amblyopia.

    PubMed

    Tsirlin, Inna; Colpa, Linda; Goltz, Herbert C; Wong, Agnes M F

    2018-04-01

    Amblyopia is a neurodevelopmental disorder defined as a reduction in visual acuity that cannot be corrected by optical means. It has been associated with low-level deficits. However, research has demonstrated a link between amblyopia and visual attention deficits in counting, tracking, and identifying objects. Visual search is a useful tool for assessing visual attention but has not been well studied in amblyopia. Here, we assessed the extent of visual search deficits in amblyopia using feature and conjunction search tasks. We compared the performance of participants with amblyopia (n = 10) to that of controls (n = 12) on both feature and conjunction search tasks using Gabor patch stimuli, varying spatial bandwidth and orientation. To account for the low-level deficits inherent in amblyopia, we measured individual contrast and crowding thresholds and monitored eye movements. The display elements were then presented at suprathreshold levels to ensure that visibility was equalized across groups. There was no performance difference between groups on feature search, indicating that our experimental design controlled successfully for low-level amblyopic deficits. In contrast, during conjunction search, median reaction times and reaction time slopes were significantly larger in participants with amblyopia compared with controls. Amblyopia differentially affects performance on conjunction visual search, a more difficult task that requires feature binding and possibly the involvement of higher-level attention processes. Deficits in visual search may affect day-to-day functioning in people with amblyopia.
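
    The reaction-time slope contrasted here is the standard search-efficiency measure: the slope of RT regressed on display set size, near zero for efficient feature search and substantially positive for conjunction search. A minimal sketch of that computation, with illustrative numbers rather than the study's data, follows.

      # Minimal sketch: visual-search RT x set-size slope (ms per item).
      import numpy as np

      def search_slope(set_sizes, median_rts_ms):
          """Least-squares slope and intercept of RT against set size."""
          slope, intercept = np.polyfit(set_sizes, median_rts_ms, deg=1)
          return slope, intercept

      # Illustrative numbers only (not the study's data):
      feature     = search_slope([4, 8, 16], [480, 484, 491])   # near-flat: efficient search
      conjunction = search_slope([4, 8, 16], [520, 610, 790])   # steep: attention-demanding search
      print(f"feature: {feature[0]:.1f} ms/item, conjunction: {conjunction[0]:.1f} ms/item")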

  11. Context matters: the structure of task goals affects accuracy in multiple-target visual search.

    PubMed

    Clark, Kait; Cain, Matthew S; Adcock, R Alison; Mitroff, Stephen R

    2014-05-01

    Career visual searchers such as radiologists and airport security screeners strive to conduct accurate visual searches, but despite extensive training, errors still occur. A key difference between searches in radiology and airport security is the structure of the search task: Radiologists typically scan a certain number of medical images (fixed objective), and airport security screeners typically search X-rays for a specified time period (fixed duration). Might these structural differences affect accuracy? We compared performance on a search task administered under constraints that approximated either radiology or airport security. Some displays contained more than one target because the presence of multiple targets is an established source of errors for career searchers, and accuracy for additional targets tends to be especially sensitive to contextual conditions. Results indicate that participants searching within the fixed-objective framework produced more multiple-target search errors; thus, adopting a fixed-duration framework could improve accuracy for career searchers. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  12. Task-dependent individual differences in prefrontal connectivity.

    PubMed

    Biswal, Bharat B; Eldreth, Dana A; Motes, Michael A; Rypma, Bart

    2010-09-01

    Recent advances in neuroimaging have permitted testing of hypotheses regarding the neural bases of individual differences, but this burgeoning literature has been characterized by inconsistent results. To test the hypothesis that differences in task demands could contribute to between-study variability in brain-behavior relationships, we had participants perform 2 tasks that varied in the extent of cognitive involvement. We examined connectivity between brain regions during a low-demand vigilance task and a higher-demand digit-symbol visual search task using Granger causality analysis (GCA). Our results showed 1) significant differences in the numbers of frontoparietal connections between low- and high-demand tasks, 2) that GCA can detect activity changes that correspond with task-demand changes, and 3) that faster participants showed more vigilance-related activity than slower participants, but less visual-search activity. These results suggest that relatively low-demand cognitive performance depends on spontaneous bidirectionally fluctuating network activity, whereas high-demand performance depends on a limited, unidirectional network. The nature of brain-behavior relationships may vary depending on the extent of cognitive demand. High-demand network activity may reflect the extent to which individuals require top-down executive guidance of behavior for successful task performance. Low-demand network activity may reflect task- and performance monitoring that minimizes executive requirements for guidance of behavior.
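
    Granger causality analysis of the kind named here asks whether the past of one regional time series improves prediction of another beyond that series' own past. The sketch below is a minimal pairwise illustration using statsmodels on synthetic data; the authors' fMRI pipeline (many region pairs, preprocessing, multiple-comparison handling) is not reproduced.

      # Minimal sketch: pairwise Granger causality between two synthetic "ROI" series.
      # grangercausalitytests checks whether the second column helps predict the first.
      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(0)
      frontal = rng.standard_normal(200)                                 # hypothetical frontal ROI series
      parietal = np.roll(frontal, 2) + 0.5 * rng.standard_normal(200)    # follows frontal at a lag of 2 samples

      data = np.column_stack([parietal, frontal])        # column order: predicted, predictor
      results = grangercausalitytests(data, maxlag=3)    # reports F-tests for lags 1..3
      p_lag2 = results[2][0]["ssr_ftest"][1]             # p-value of the SSR F-test at lag 2
      print(f"frontal -> parietal (lag 2): p = {p_lag2:.4f}")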

  13. Task-Dependent Individual Differences in Prefrontal Connectivity

    PubMed Central

    Biswal, Bharat B.; Eldreth, Dana A.; Motes, Michael A.

    2010-01-01

    Recent advances in neuroimaging have permitted testing of hypotheses regarding the neural bases of individual differences, but this burgeoning literature has been characterized by inconsistent results. To test the hypothesis that differences in task demands could contribute to between-study variability in brain-behavior relationships, we had participants perform 2 tasks that varied in the extent of cognitive involvement. We examined connectivity between brain regions during a low-demand vigilance task and a higher-demand digit–symbol visual search task using Granger causality analysis (GCA). Our results showed 1) significant differences in the numbers of frontoparietal connections between low- and high-demand tasks, 2) that GCA can detect activity changes that correspond with task-demand changes, and 3) that faster participants showed more vigilance-related activity than slower participants, but less visual-search activity. These results suggest that relatively low-demand cognitive performance depends on spontaneous bidirectionally fluctuating network activity, whereas high-demand performance depends on a limited, unidirectional network. The nature of brain-behavior relationships may vary depending on the extent of cognitive demand. High-demand network activity may reflect the extent to which individuals require top-down executive guidance of behavior for successful task performance. Low-demand network activity may reflect task- and performance monitoring that minimizes executive requirements for guidance of behavior. PMID:20064942

  14. Beyond simple charts: Design of visualizations for big health data

    PubMed Central

    Ola, Oluwakemi; Sedig, Kamran

    2016-01-01

    Health data is often big data due to its high volume, low veracity, great variety, and high velocity. Big health data has the potential to improve productivity, eliminate waste, and support a broad range of tasks related to disease surveillance, patient care, research, and population health management. Interactive visualizations have the potential to amplify big data’s utilization. Visualizations can be used to support a variety of tasks, such as tracking the geographic distribution of diseases, analyzing the prevalence of disease, triaging medical records, predicting outbreaks, and discovering at-risk populations. Currently, many health visualization tools use simple charts, such as bar charts and scatter plots, that represent only a few facets of data. These tools, while beneficial for simple perceptual and cognitive tasks, are ineffective when dealing with more complex sensemaking tasks that involve exploration of various facets and elements of big data simultaneously. There is a need for sophisticated and elaborate visualizations that encode many facets of data and support human-data interaction with big data and more complex tasks. When not approached systematically, design of such visualizations is labor-intensive, and the resulting designs may not facilitate big-data-driven tasks. Conceptual frameworks that guide the design of visualizations for big data can make the design process more manageable and result in more effective visualizations. In this paper, we demonstrate how a framework-based approach can help designers create novel, elaborate, non-trivial visualizations for big health data. We present four visualizations that are components of a larger tool for making sense of large-scale public health data. PMID:28210416

  15. Beyond simple charts: Design of visualizations for big health data.

    PubMed

    Ola, Oluwakemi; Sedig, Kamran

    2016-01-01

    Health data is often big data due to its high volume, low veracity, great variety, and high velocity. Big health data has the potential to improve productivity, eliminate waste, and support a broad range of tasks related to disease surveillance, patient care, research, and population health management. Interactive visualizations have the potential to amplify big data's utilization. Visualizations can be used to support a variety of tasks, such as tracking the geographic distribution of diseases, analyzing the prevalence of disease, triaging medical records, predicting outbreaks, and discovering at-risk populations. Currently, many health visualization tools use simple charts, such as bar charts and scatter plots, that represent only a few facets of data. These tools, while beneficial for simple perceptual and cognitive tasks, are ineffective when dealing with more complex sensemaking tasks that involve exploration of various facets and elements of big data simultaneously. There is a need for sophisticated and elaborate visualizations that encode many facets of data and support human-data interaction with big data and more complex tasks. When not approached systematically, design of such visualizations is labor-intensive, and the resulting designs may not facilitate big-data-driven tasks. Conceptual frameworks that guide the design of visualizations for big data can make the design process more manageable and result in more effective visualizations. In this paper, we demonstrate how a framework-based approach can help designers create novel, elaborate, non-trivial visualizations for big health data. We present four visualizations that are components of a larger tool for making sense of large-scale public health data.

  16. Flexible attention allocation to visual and auditory working memory tasks: manipulating reward induces a trade-off.

    PubMed

    Morey, Candice Coker; Cowan, Nelson; Morey, Richard D; Rouder, Jeffery N

    2011-02-01

    Prominent roles for general attention resources are posited in many models of working memory, but the manner in which these can be allocated differs between models or is not sufficiently specified. We varied the payoffs for correct responses in two temporally-overlapping recognition tasks, a visual array comparison task and a tone sequence comparison task. In the critical conditions, an increase in reward for one task corresponded to a decrease in reward for the concurrent task, but memory load remained constant. Our results show patterns of interference consistent with a trade-off between the tasks, suggesting that a shared resource can be flexibly divided, rather than only fully allotted to either of the tasks. Our findings support a role for a domain-general resource in models of working memory, and furthermore suggest that this resource is flexibly divisible.

  17. Secondary visual workload capability with primary visual and kinesthetic-tactual displays

    NASA Technical Reports Server (NTRS)

    Gilson, R. D.; Burke, M. W.; Jagacinski, R. J.

    1978-01-01

    Subjects performed a cross-adaptive tracking task with a visual secondary display and either a visual or a quickened kinesthetic-tactual (K-T) primary display. The quickened K-T display resulted in superior secondary task performance. Comparisons of secondary workload capability with integrated and separated visual displays indicated that the superiority of the quickened K-T display was not simply due to the elimination of visual scanning. When subjects did not have to perform a secondary task, there was no significant difference between visual and quickened K-T displays in performing a critical tracking task.

  18. Visual Attention Allocation Between Robotic Arm and Environmental Process Control: Validating the STOM Task Switching Model

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Vieanne, Alex; Clegg, Benjamin; Sebok, Angelia; Janes, Jessica

    2015-01-01

    Fifty-six participants time-shared a spacecraft environmental control system task with a realistic space robotic arm control task in either a manual or highly automated version. The former could suffer minor failures, whose diagnosis and repair were supported by a decision aid. At the end of the experiment this decision aid unexpectedly failed. We measured visual attention allocation and switching between the two tasks in each of the eight conditions formed by manual-automated arm X expected-unexpected failure X monitoring-failure management. We also used our multi-attribute task switching model, based on task attributes of priority, interest, difficulty, and salience that were self-rated by participants, to predict allocation. An un-weighted model based on attributes of difficulty, interest, and salience accounted for 96 percent of the task allocation variance across the 8 different conditions. Task difficulty served as an attractor, with more difficult tasks increasing the tendency to stay on task.
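
    The "un-weighted model based on attributes" can be illustrated with a simple allocation rule in which each task's predicted share of attention is proportional to the sum of its self-rated attribute values. This is a generic rendering for illustration, with hypothetical ratings; it is not the published STOM equations or the study's fitted model.

      # Minimal sketch: unweighted attribute model of attention allocation.
      # Each task's predicted dwell share is proportional to the sum of its ratings.
      def predicted_allocation(task_ratings):
          """task_ratings: {task: {attribute: rating}} -> {task: predicted share}."""
          totals = {task: sum(ratings.values()) for task, ratings in task_ratings.items()}
          grand_total = sum(totals.values())
          return {task: total / grand_total for task, total in totals.items()}

      # Hypothetical self-ratings (1-5 scale) for the two time-shared tasks:
      ratings = {
          "robotic_arm": {"difficulty": 4.0, "interest": 4.5, "salience": 3.5},
          "env_control": {"difficulty": 2.5, "interest": 3.0, "salience": 3.0},
      }
      print(predicted_allocation(ratings))   # roughly 0.59 vs. 0.41 with these numbers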

  19. Pathways to Identity: Aiding Law Enforcement in Identification Tasks With Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruce, Joseph R.; Scholtz, Jean; Hodges, Duncan

    The nature of identity has changed dramatically in recent years, and has grown in complexity. Identities are defined in multiple domains: biological and psychological elements strongly contribute, but biographical and cyber elements are also necessary to complete the picture. Law enforcement is beginning to adjust to these changes, recognizing identity's importance in criminal justice. The SuperIdentity project seeks to aid law enforcement officials in their identification tasks through research of techniques for discovering identity traits, generation of statistical models of identity, and analysis of identity traits through visualization. We present use cases compiled through user interviews in multiple fields, including law enforcement, as well as the modeling and visualization tools designed to aid in those use cases.

  20. Pathways to Identity. Using Visualization to Aid Law Enforcement in Identification Tasks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruce, Joseph R.; Scholtz, Jean; Hodges, Duncan

    The nature of identity has changed dramatically in recent years and has grown in complexity. Identities are defined in multiple domains: biological and psychological elements strongly contribute, but biographical and cyber elements also are necessary to complete the picture. Law enforcement is beginning to adjust to these changes, recognizing identity’s importance in criminal justice. The SuperIdentity project seeks to aid law enforcement officials in their identification tasks through research of techniques for discovering identity traits, generation of statistical models of identity, and analysis of identity traits through visualization. We present use cases compiled through user interviews in multiple fields, including law enforcement, and describe the modeling and visualization tools designed to aid in those use cases.

  1. Poor Performance on Serial Visual Tasks in Persons with Reading Disabilities: Impaired Working Memory?

    ERIC Educational Resources Information Center

    Ram-Tsur, Ronit; Faust, Miriam; Zivotofsky, Ari Z.

    2008-01-01

    The present study investigates the performance of persons with reading disabilities (PRD) on a variety of sequential visual-comparison tasks that have different working-memory requirements. In addition, mediating relationships between the sequential comparison process and attention and memory skills were looked for. Our findings suggest that PRD…

  2. Exploring the role of task performance and learning style on prefrontal hemodynamics during a working memory task.

    PubMed

    Anderson, Afrouz A; Parsa, Kian; Geiger, Sydney; Zaragoza, Rachel; Kermanian, Riley; Miguel, Helga; Dashtestani, Hadis; Chowdhry, Fatima A; Smith, Elizabeth; Aram, Siamak; Gandjbakhche, Amir H

    2018-01-01

    Existing literature outlines the quality and location of activation in the prefrontal cortex (PFC) during working memory (WM) tasks. However, the effects of individual differences on the underlying neural processes of WM tasks are still unclear. In this functional near infrared spectroscopy study, we administered a visual and an auditory n-back task to examine activation in the PFC while considering the influences of task performance and preferred learning strategy (VARK score). While controlling for age, results indicated that high performance (HP) subjects (accuracy > 90%) showed task-dependent lower activation compared to normal performance (NP) subjects in the PFC. Specifically, HP subjects showed lower activation in the left dorsolateral PFC (DLPFC) during performance of the auditory task, whereas during the visual task they showed lower activation in the right DLPFC. After accounting for learning style, we found a correlation between visual and aural VARK scores and the level of activation in the PFC. Subjects with higher visual VARK scores displayed lower activation during the auditory task in the left DLPFC, while those with higher visual scores exhibited higher activation during the visual task in bilateral DLPFC. During performance of the auditory task, HP subjects had higher visual VARK scores compared to NP subjects, indicating an effect of learning style on task performance and activation. The results of this study show that learning style and task performance can influence PFC activation, with applications toward neurological implications of learning style and populations with deficits in auditory or visual processing.

  3. Exploring the role of task performance and learning style on prefrontal hemodynamics during a working memory task

    PubMed Central

    Anderson, Afrouz A.; Parsa, Kian; Geiger, Sydney; Zaragoza, Rachel; Kermanian, Riley; Miguel, Helga; Chowdhry, Fatima A.; Smith, Elizabeth; Aram, Siamak; Gandjbakhche, Amir H.

    2018-01-01

    Existing literature outlines the quality and location of activation in the prefrontal cortex (PFC) during working memory (WM) tasks. However, the effects of individual differences on the underlying neural processes of WM tasks are still unclear. In this functional near infrared spectroscopy study, we administered a visual and an auditory n-back task to examine activation in the PFC while considering the influences of task performance and preferred learning strategy (VARK score). While controlling for age, results indicated that high performance (HP) subjects (accuracy > 90%) showed task-dependent lower activation compared to normal performance (NP) subjects in the PFC. Specifically, HP subjects showed lower activation in the left dorsolateral PFC (DLPFC) during performance of the auditory task, whereas during the visual task they showed lower activation in the right DLPFC. After accounting for learning style, we found a correlation between visual and aural VARK scores and the level of activation in the PFC. Subjects with higher visual VARK scores displayed lower activation during the auditory task in the left DLPFC, while those with higher visual scores exhibited higher activation during the visual task in bilateral DLPFC. During performance of the auditory task, HP subjects had higher visual VARK scores compared to NP subjects, indicating an effect of learning style on task performance and activation. The results of this study show that learning style and task performance can influence PFC activation, with applications toward neurological implications of learning style and populations with deficits in auditory or visual processing. PMID:29870536

  4. Does gravity influence the visual line bisection task?

    PubMed

    Drakul, A; Bockisch, C J; Tarnutzer, A A

    2016-08-01

    The visual line bisection task (LBT) is sensitive to perceptual biases of visuospatial attention, showing slight leftward (for horizontal lines) and upward (for vertical lines) errors in healthy subjects. It may be solved in an egocentric or allocentric reference frame, and there is no obvious need for graviceptive input. However, for other visual line adjustments, such as the subjective visual vertical, otolith input is integrated. We hypothesized that graviceptive input is incorporated when performing the LBT and predicted reduced accuracy and precision when roll-tilted. Twenty healthy right-handed subjects repetitively bisected Earth-horizontal and body-horizontal lines in darkness. Recordings were obtained before, during, and after roll-tilt (±45°, ±90°) for 5 min each. Additionally, bisections of Earth-vertical and oblique lines were obtained in 17 subjects. When roll-tilted ±90° ear-down, bisections of Earth-horizontal (i.e., body-vertical) lines were shifted toward the direction of the head (P < 0.001). However, after correction for vertical line-bisection errors when upright, shifts disappeared. Bisecting body-horizontal lines while roll-tilted did not cause any shifts. The precision of Earth-horizontal line bisections decreased (P ≤ 0.006) when roll-tilted, while no such changes were observed for body-horizontal lines. Regardless of the trial condition and paradigm, the scanning direction of the bisecting cursor (leftward vs. rightward) significantly (P ≤ 0.021) affected line bisections. Our findings reject our hypothesis and suggest that gravity does not modulate the LBT. Roll-tilt-dependent shifts are instead explained by the headward bias when bisecting lines oriented along a body-vertical axis. Increased variability when roll-tilted likely reflects larger variability when bisecting body-vertical than body-horizontal lines. Copyright © 2016 the American Physiological Society.

  5. Developmental Shifts in Children's Sensitivity to Visual Speech: A New Multimodal Picture-Word Task

    ERIC Educational Resources Information Center

    Jerger, Susan; Damian, Markus F.; Spence, Melanie J.; Tye-Murray, Nancy; Abdi, Herve

    2009-01-01

    This research developed a multimodal picture-word task for assessing the influence of visual speech on phonological processing by 100 children between 4 and 14 years of age. We assessed how manipulation of seemingly to-be-ignored auditory (A) and audiovisual (AV) phonological distractors affected picture naming without participants consciously…

  6. Disturbed default mode network connectivity patterns in Alzheimer's disease associated with visual processing.

    PubMed

    Krajcovicova, Lenka; Mikl, Michal; Marecek, Radek; Rektorova, Irena

    2014-01-01

    Changes in connectivity of the posterior node of the default mode network (DMN) were studied when switching from baseline to a cognitive task using functional magnetic resonance imaging. In all, 15 patients with mild to moderate Alzheimer's disease (AD) and 18 age-, gender-, and education-matched healthy controls (HC) participated in the study. Psychophysiological interactions analysis was used to assess the specific alterations in the DMN connectivity (deactivation-based) due to psychological effects from the complex visual scene encoding task. In HC, we observed task-induced connectivity decreases between the posterior cingulate and middle temporal and occipital visual cortices. These findings imply successful involvement of the ventral visual pathway during the visual processing in our HC cohort. In AD, involvement of the areas engaged in the ventral visual pathway was observed only in a small volume of the right middle temporal gyrus. Additional connectivity changes (decreases) in AD were present between the posterior cingulate and superior temporal gyrus when switching from baseline to task condition. These changes are probably related to both disturbed visual processing and the DMN connectivity in AD and reflect deficits and compensatory mechanisms within the large scale brain networks in this patient population. Studying the DMN connectivity using psychophysiological interactions analysis may provide a sensitive tool for exploring early changes in AD and their dynamics during the disease progression.

  7. Brain oscillatory signatures of motor tasks

    PubMed Central

    Birbaumer, Niels

    2015-01-01

    Noninvasive brain-computer-interfaces (BCI) coupled with prosthetic devices were recently introduced in the rehabilitation of chronic stroke and other disorders of the motor system. These BCI systems and motor rehabilitation in general involve several motor tasks for training. This study investigates the neurophysiological bases of an EEG-oscillation-driven BCI combined with a neuroprosthetic device to define the specific oscillatory signature of the BCI task. Controlling movements of a hand robotic orthosis with motor imagery of the same movement generates sensorimotor rhythm oscillation changes and involves three elements of tasks also used in stroke motor rehabilitation: passive and active movement, motor imagery, and motor intention. We recorded EEG while nine healthy participants performed five different motor tasks consisting of closing and opening of the hand as follows: 1) motor imagery without any external feedback and without overt hand movement, 2) motor imagery that moves the orthosis proportional to the produced brain oscillation change with online proprioceptive and visual feedback of the hand moving through a neuroprosthetic device (BCI condition), 3) passive and 4) active movement of the hand with feedback (seeing and feeling the hand moving), and 5) rest. During the BCI condition, participants received contingent online feedback of the decrease of power of the sensorimotor rhythm, which induced orthosis movement and therefore proprioceptive and visual information from the moving hand. We analyzed brain activity during the five conditions using time-frequency domain bootstrap-based statistical comparisons and Morlet transforms. Activity during rest was used as a reference. Significant contralateral and ipsilateral event-related desynchronization of sensorimotor rhythm was present during all motor tasks, largest in contralateral-postcentral, medio-central, and ipsilateral-precentral areas identifying the ipsilateral precentral cortex as an integral

  8. From Foreground to Background: How Task-Neutral Context Influences Contextual Cueing of Visual Search

    PubMed Central

    Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J.; Shi, Zhuanghua

    2016-01-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor ‘L’s and a target ‘T’, was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search. PMID:27375530

  9. Mood & alcohol-related attentional biases: New considerations for gender differences and reliability of the visual-probe task.

    PubMed

    Emery, Noah N; Simons, Jeffrey S

    2015-11-01

    Alcohol-related attentional biases are positively associated with drinking history and may represent a mechanism by which alcohol use behavior is maintained over time. This study was designed to address two unresolved issues regarding alcohol-related attention biases. Specifically, this study tested whether acute changes in positive and negative mood increase attentional biases toward alcohol cues and whether coping and enhancement drinking motives moderate these effects. Participants were 100 college students aged 18-25, who drank alcohol at least once in the last 90 days. In a 2 × 3 mixed design, participants were randomized to one of three mood conditions (neutral, negative, or positive) and completed visual-probe tasks pre- and post-mood-induction. Attentional biases toward alcohol cues were significantly associated with alcohol consumption among men, but not women. Although the mood manipulation was highly successful, attentional biases did not vary as a function of mood condition and hypothesized moderating effects of drinking motives were not significant. The largely null findings of the experiment are discussed in light of the fact that the visual probe task had poor reliability. Issues related to the reliability of visual-probe task are discussed, as more research is needed to evaluate and improve the psychometrics of this method. Copyright © 2015 Elsevier Ltd. All rights reserved.
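
    The attentional-bias score in a visual-probe task is conventionally the mean RT when the probe replaces the neutral picture minus the mean RT when it replaces the alcohol picture, and the reliability issue raised here is typically checked with a split-half correlation corrected by the Spearman-Brown formula. The sketch below assumes a hypothetical trial table (columns subject, rt, probe_location); it is a generic illustration, not the authors' scoring code.

      # Minimal sketch: visual-probe attentional-bias score and a split-half reliability check.
      import numpy as np
      import pandas as pd

      def bias_score(trials: pd.DataFrame) -> float:
          """Neutral-probe RT minus alcohol-probe RT; positive = bias toward alcohol cues."""
          means = trials.groupby("probe_location")["rt"].mean()
          return means["neutral"] - means["alcohol"]

      def split_half_reliability(trials: pd.DataFrame) -> float:
          """Correlate odd/even-trial bias scores across subjects, Spearman-Brown corrected."""
          trials = trials.copy()
          trials["half"] = np.arange(len(trials)) % 2      # one simple odd/even split
          per_subject = trials.groupby(["subject", "half"]).apply(bias_score).unstack()
          r = per_subject[0].corr(per_subject[1])
          return 2 * r / (1 + r)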

  10. Event-related potentials (ERPs) in ecstasy (MDMA) users during a visual oddball task.

    PubMed

    Mejias, S; Rossignol, M; Debatisse, D; Streel, E; Servais, L; Guérit, J M; Philippot, P; Campanella, S

    2005-07-01

    Ecstasy is the common name for a drug mainly containing a substance identified as 3,4-methylenedioxymethamphetamine (MDMA). It has become popular with participants in "raves", because it enhances energy, endurance and sexual arousal, together with the widespread belief that MDMA is a safe drug [Byard, R.W., Gilbert, J., James, R., Lokan, R.J., 1998. Amphetamine derivative fatalities in South Australia. Is "ecstasy" the culprit? Am. J. Forensic Med. Pathol. 19, 261-265]. However, it is suggested that this drug causes a neurotoxicity to the serotonergic system that could lead to permanent physical and cognitive problems. In order to investigate this issue, and during an ERP recording with 32 channels, we used a visual oddball design, in which subjects (14 MDMA abusers and 14 paired normal controls) saw frequent stimuli (neutral faces) while they had to detect as quickly as possible rare stimuli with happy or fearful expressions. At the behavioral level, MDMA users showed longer latencies than normal controls in detecting rare stimuli. At the neurophysiological level, ERP data suggest as the main result that the N200 component, which is involved in attention orienting associated with the detection of stimulus novelty (e.g. [Campanella, S., Gaspard, C., Debatisse, D., Bruyer, R., Crommelinck, M., Guerit, J.M., 2002. Discrimination of emotional facial expression in a visual oddball task: an ERP study. Biol. Psychol. 59, 171-186]), shows shorter latencies for fearful rare stimuli (as compared to happy ones), but only for normal controls. This absence of delay was interpreted as an attentional deficit due to MDMA consumption.

  11. Correlates of male cohabiting partner's involvement in child-rearing tasks in low-income urban Black stepfamilies.

    PubMed

    Forehand, Rex; Parent, Justin; Golub, Andrew; Reid, Megan

    2014-06-01

    Cohabitation is a family structure experienced by many Black children. This study examines the link between family relationships (child relationship with mother and the cohabiting partner; parent and cohabiting partner relationship) and involvement of biologically unrelated male cohabiting partners (MCP) in child rearing. The participants were 121 low-income urban Black families consisting of a single mother, MCP, and an adolescent (56% female, M age = 13.7). Assessments were conducted individually with mothers, MCPs, and adolescents via measures administered by interview. MCPs were involved in both domains of child rearing assessed (daily child-related tasks and setting limits) and those identified as coparents by the mother were more involved in child-rearing tasks than those not identified as coparents. Using structural equation modeling (SEM), the mother-MCP relationship (both support and conflict) and the adolescent-MCP relationship were related to MCP's involvement in both domains of child rearing. The findings indicate that MCPs are actively involved in child rearing and family relationship variables are associated with their involvement in these tasks. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  12. The effect of visual-motion time-delays on pilot performance in a simulated pursuit tracking task

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Riley, D. R.

    1977-01-01

    An experimental study was made to determine the effect on pilot performance of time delays in the visual and motion feedback loops of a simulated pursuit tracking task. Three major interrelated factors were identified: task difficulty either in the form of airplane handling qualities or target frequency, the amount and type of motion cues, and time delay itself. In general, the greater the task difficulty, the smaller the time delay that could exist without degrading pilot performance. Conversely, the greater the motion fidelity, the greater the time delay that could be tolerated. The effect of motion was, however, pilot dependent.

  13. Motivation and Cognition: The Impact of Ego and Task-Involvement on Levels of Processing.

    ERIC Educational Resources Information Center

    Golan, Shari; Graham, Sandra

    To study the effects of motivation on cognition, 55 fifth- and sixth-grade students were randomly assigned to 3 motivational treatment groups: (1) ego-involved (ability oriented); (2) task-involved (mastery oriented); and (3) control (no orientation). The ego-involvement treatment attempted to make subjects feel that their abilities on the tasks…

  14. Feature binding in visual short-term memory is unaffected by task-irrelevant changes of location, shape, and color.

    PubMed

    Logie, Robert H; Brockmole, James R; Jaswal, Snehlata

    2011-01-01

    Three experiments used a change detection paradigm across a range of study-test intervals to address the respective contributions of location, shape, and color to the formation of bindings of features in sensory memory and visual short-term memory (VSTM). In Experiment 1, location was designated task irrelevant and was randomized between study and test displays. The task was to detect changes in the bindings between shape and color. In Experiments 2 and 3, shape and color, respectively, were task irrelevant and randomized, with bindings tested between location and color (Experiment 2) and location and shape (Experiment 3). At shorter study-test intervals, randomizing location was most disruptive, followed by shape and then color. At longer intervals, randomizing any task-irrelevant feature had no impact on change detection for bindings between features, and location had no special role. Results suggest that location is crucial for initial perceptual binding but loses that special status once representations are formed in VSTM, which operates according to different principles than do visual attention and perception.
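
    Change-detection accuracy in single-probe designs like this one is often summarized as a visual short-term memory capacity estimate using Cowan's K, K = N × (hit rate − false-alarm rate). That standard formula, shown below with illustrative numbers, is not necessarily the exact measure used by the authors, who analyzed binding-change detection across study-test intervals.

      # Minimal sketch: Cowan's K capacity estimate for single-probe change detection.
      def cowan_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
          """K = N * (H - FA): estimated number of items held in visual short-term memory."""
          return set_size * (hit_rate - false_alarm_rate)

      # Illustrative numbers only:
      print(cowan_k(set_size=4, hit_rate=0.80, false_alarm_rate=0.15))   # -> 2.6 bound items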

  15. Matching cue size and task properties in exogenous attention.

    PubMed

    Burnett, Katherine E; d'Avossa, Giovanni; Sapir, Ayelet

    2013-01-01

    Exogenous attention is an involuntary, reflexive orienting response that results in enhanced processing at the attended location. The standard view is that this enhancement generalizes across visual properties of a stimulus. We test whether the size of an exogenous cue sets the attentional field and whether this leads to different effects on stimuli with different visual properties. In a dual task with a random-dot kinematogram (RDK) in each quadrant of the screen, participants discriminated the direction of moving dots in one RDK and localized one red dot. Precues were uninformative and consisted of either a large or a small luminance-change frame. The motion discrimination task showed attentional effects following both large and small exogenous cues. The red dot probe localization task showed attentional effects following a small cue, but not a large cue. Two additional experiments showed that the different effects on localization were not due to reduced spatial uncertainty or suppression of RDK dots in the surround. These results indicate that the effects of exogenous attention depend on the size of the cue and the properties of the task, suggesting the involvement of receptive fields with different sizes in different tasks. These attentional effects are likely to be driven by bottom-up mechanisms in early visual areas.

  16. Examining the Use of a Visual Analytics System for Sensemaking Tasks: Case Studies with Domain Experts.

    PubMed

    Kang, Youn-Ah; Stasko, J

    2012-12-01

    While the formal evaluation of systems in visual analytics is still relatively uncommon, particularly rare are case studies of prolonged system use by domain analysts working with their own data. Conducting case studies can be challenging, but it can be a particularly effective way to examine whether visual analytics systems are truly helping expert users to accomplish their goals. We studied the use of a visual analytics system for sensemaking tasks on documents by six analysts from a variety of domains. We describe their application of the system along with the benefits, issues, and problems that we uncovered. Findings from the studies identify features that visual analytics systems should emphasize as well as missing capabilities that should be addressed. These findings inform design implications for future systems.

  17. Sustaining visual attention in the face of distraction: a novel gradual-onset continuous performance task.

    PubMed

    Rosenberg, Monica; Noonan, Sarah; DeGutis, Joseph; Esterman, Michael

    2013-04-01

    Sustained attention is a fundamental aspect of human cognition and has been widely studied in applied and clinical contexts. Despite a growing understanding of how attention varies throughout task performance, moment-to-moment fluctuations are often difficult to assess. In order to better characterize fluctuations in sustained visual attention, in the present study we employed a novel continuous performance task (CPT), the gradual-onset CPT (gradCPT). In the gradCPT, a central face stimulus gradually transitions between individuals at a constant rate (1,200 ms), and participants are instructed to respond to each male face but not to a rare target female face. In the distractor-present version, the background distractors consist of scene images, and in the distractor-absent condition, of phase-scrambled scene images. The results confirmed that the gradCPT taxes sustained attention, as vigilance decrements were observed over the task's 12-min duration: Participants made more commission errors and showed increasingly variable response latencies (RTs) over time. Participants' attentional states also fluctuated from moment to moment, with periods of higher RT variability being associated with increased likelihood of errors and greater speed-accuracy trade-offs. In addition, task performance was related to self-reported mindfulness and the propensity for attention lapses in everyday life. The gradCPT is a useful tool for studying both low- and high-frequency fluctuations in sustained visual attention and is sensitive to individual differences in attentional ability.
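
    The moment-to-moment fluctuation measure described here (periods of higher RT variability predicting errors) is commonly computed as a sliding-window variability of the trial-by-trial RT series. The sketch below uses a windowed coefficient of variation as one generic version of such a measure; it is not the authors' exact variance time-course method.

      # Minimal sketch: sliding-window RT variability for a continuous performance task.
      import numpy as np

      def rt_variability(rts_ms, window=10):
          """Coefficient of variation of RT in a trailing window (higher = less stable responding)."""
          rts = np.asarray(rts_ms, dtype=float)
          out = np.full(rts.shape, np.nan)
          for i in range(window, len(rts) + 1):
              chunk = rts[i - window:i]
              out[i - 1] = chunk.std() / chunk.mean()
          return out

      # One hypothetical 12-min run at 1,200 ms per trial (600 trials):
      rng = np.random.default_rng(1)
      rts = rng.normal(800, 120, size=600)
      print(np.nanmean(rt_variability(rts)))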

  18. What top-down task sets do for us: an ERP study on the benefits of advance preparation in visual search.

    PubMed

    Eimer, Martin; Kiss, Monika; Nicholas, Susan

    2011-12-01

    When target-defining features are specified in advance, attentional target selection in visual search is controlled by preparatory top-down task sets. We used ERP measures to study voluntary target selection in the absence of such feature-specific task sets, and to compare it to selection that is guided by advance knowledge about target features. Visual search arrays contained two different color singleton digits, and participants had to select one of these as target and report its parity. Target color was either known in advance (fixed color task) or had to be selected anew on each trial (free color-choice task). ERP correlates of spatially selective attentional target selection (N2pc) and working memory processing (SPCN) demonstrated rapid target selection and efficient exclusion of color singleton distractors from focal attention and working memory in the fixed color task. In the free color-choice task, spatially selective processing also emerged rapidly, but selection efficiency was reduced, with nontarget singleton digits capturing attention and gaining access to working memory. Results demonstrate the benefits of top-down task sets: Feature-specific advance preparation accelerates target selection, rapidly resolves attentional competition, and prevents irrelevant events from attracting attention and entering working memory.

  19. Flexible Visual Processing in Young Adults with Autism: The Effects of Implicit Learning on a Global-Local Task

    ERIC Educational Resources Information Center

    Hayward, Dana A.; Shore, David I.; Ristic, Jelena; Kovshoff, Hanna; Iarocci, Grace; Mottron, Laurent; Burack, Jacob A.

    2012-01-01

    We utilized a hierarchical figures task to determine the default level of perceptual processing and the flexibility of visual processing in a group of high-functioning young adults with autism (n = 12) and a group of typically developing young adults matched on chronological age and IQ (n = 12). In one task, participants attended to one level of the…

  20. Neural Correlates of Changes in a Visual Search Task due to Cognitive Training in Seniors

    PubMed Central

    Wild-Wall, Nele; Falkenstein, Michael; Gajewski, Patrick D.

    2012-01-01

    This study aimed to elucidate the underlying neural sources of near transfer after a multidomain cognitive training in older participants in a visual search task. Participants were randomly assigned to a social control, a no-contact control and a training group, receiving a 4-month paper-pencil and PC-based trainer guided cognitive intervention. All participants were tested in a before and after session with a conjunction visual search task. Performance and event-related potentials (ERPs) suggest that the cognitive training improved feature processing of the stimuli which was expressed in an increased rate of target detection compared to the control groups. This was paralleled by enhanced amplitudes of the frontal P2 in the ERP and by higher activation in lingual and parahippocampal brain areas which are discussed to support visual feature processing. Enhanced N1 and N2 potentials in the ERP for nontarget stimuli after cognitive training additionally suggest improved attention and subsequent processing of arrays which were not immediately recognized as targets. Possible test repetition effects were confined to processes of stimulus categorisation as suggested by the P3b potential. The results show neurocognitive plasticity in aging after a broad cognitive training and allow pinpointing the functional loci of effects induced by cognitive training. PMID:23029625

  1. Subjective Estimation of Task Time and Task Difficulty of Simple Movement Tasks.

    PubMed

    Chan, Alan H S; Hoffmann, Errol R

    2017-01-01

    It has been demonstrated in previous work that the same neural structures are used for both imagined and real movements. To provide a strong test of the similarity of imagined and actual movement times, 4 simple movement tasks were used to determine the relationship between estimated task time and actual movement time. The tasks were single-component visually controlled movements, 2-component visually controlled, low index of difficulty (ID) moves and pin-to-hole transfer movements. For each task there was good correspondence between the mean estimated times and actual movement times. In all cases, the same factors determined the actual and estimated movement times: the amplitudes of movement and the IDs of the component movements; however, the contribution of each of these variables differed for the imagined and real tasks. Generally, the standard deviations of the estimated times were linearly related to the estimated time values. Overall, the data provide strong evidence for the same neural structures being used for both imagined and actual movements.
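
    The index of difficulty referred to here is the Fitts' law quantity, classically ID = log2(2A / W) with movement amplitude A and target width W (the Shannon variant log2(A/W + 1) is also common; the abstract does not say which was used), and movement time is then typically modeled as MT = a + b × ID. A small worked sketch:

      # Minimal sketch: Fitts' index of difficulty in its classic form.
      import math

      def fitts_id(amplitude_mm: float, width_mm: float) -> float:
          """ID = log2(2A / W), in bits."""
          return math.log2(2 * amplitude_mm / width_mm)

      # Movement (or estimated) time is then typically modeled as MT = a + b * ID:
      for a_mm, w_mm in [(100, 20), (200, 10)]:
          print(f"A = {a_mm} mm, W = {w_mm} mm -> ID = {fitts_id(a_mm, w_mm):.2f} bits")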

  2. Exploring the Impact of Target Eccentricity and Task Difficulty on Covert Visual Spatial Attention and Its Implications for Brain Computer Interfacing

    PubMed Central

    Roijendijk, Linsey; Farquhar, Jason; van Gerven, Marcel; Jensen, Ole; Gielen, Stan

    2013-01-01

    Objective: Covert visual spatial attention is a relatively new task used in brain computer interfaces (BCIs) and little is known about the characteristics which may affect performance in BCI tasks. We investigated whether eccentricity and task difficulty affect alpha lateralization and BCI performance. Approach: We conducted a magnetoencephalography study with 14 participants who performed a covert orientation discrimination task at an easy or difficult stimulus contrast at either a near (3.5°) or far (7°) eccentricity. Task difficulty was manipulated block wise and subjects were aware of the difficulty level of each block. Main Results: Grand average analyses revealed a significantly larger hemispheric lateralization of posterior alpha power in the difficult condition than in the easy condition, while surprisingly no difference was found for eccentricity. The difference between task difficulty levels was significant in the interval between 1.85 s and 2.25 s after cue onset and originated from a stronger decrease in the contralateral hemisphere. No significant effect of eccentricity was found. Additionally, single-trial classification analysis revealed a higher classification rate in the difficult (65.9%) than in the easy task condition (61.1%). No effect of eccentricity was found in classification rate. Significance: Our results indicate that manipulating the difficulty of a task gives rise to variations in alpha lateralization and that using a more difficult task improves covert visual spatial attention BCI performance. The variations in the alpha lateralization could be caused by different factors such as an increased mental effort or a higher visual attentional demand. Further research is necessary to discriminate between them. We did not discover any effect of eccentricity in contrast to results of previous research. PMID:24312477

  3. Exploring the impact of target eccentricity and task difficulty on covert visual spatial attention and its implications for brain computer interfacing.

    PubMed

    Roijendijk, Linsey; Farquhar, Jason; van Gerven, Marcel; Jensen, Ole; Gielen, Stan

    2013-01-01

    Covert visual spatial attention is a relatively new task used in brain computer interfaces (BCIs) and little is known about the characteristics which may affect performance in BCI tasks. We investigated whether eccentricity and task difficulty affect alpha lateralization and BCI performance. We conducted a magnetoencephalography study with 14 participants who performed a covert orientation discrimination task at an easy or difficult stimulus contrast at either a near (3.5°) or far (7°) eccentricity. Task difficulty was manipulated blockwise and subjects were aware of the difficulty level of each block. Grand average analyses revealed a significantly larger hemispheric lateralization of posterior alpha power in the difficult condition than in the easy condition, while surprisingly no difference was found for eccentricity. The difference between task difficulty levels was significant in the interval between 1.85 s and 2.25 s after cue onset and originated from a stronger decrease in the contralateral hemisphere. No significant effect of eccentricity was found. Additionally, single-trial classification analysis revealed a higher classification rate in the difficult (65.9%) than in the easy task condition (61.1%). No effect of eccentricity was found in classification rate. Our results indicate that manipulating the difficulty of a task gives rise to variations in alpha lateralization and that using a more difficult task improves covert visual spatial attention BCI performance. The variations in the alpha lateralization could be caused by different factors such as an increased mental effort or a higher visual attentional demand. Further research is necessary to discriminate between them. We did not discover any effect of eccentricity, in contrast to results of previous research.
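
    As a rough illustration of the lateralization measure described above, the sketch below computes a per-trial posterior alpha lateralization index of the common (contralateral − ipsilateral)/(contralateral + ipsilateral) form on synthetic power values. The sensor selection, time window, and normalisation used by Roijendijk et al. are not specified in this record, so every detail here is an assumption.

```python
import numpy as np

def alpha_lateralization_index(contra_power: np.ndarray,
                               ipsi_power: np.ndarray) -> np.ndarray:
    """Per-trial lateralization index of posterior alpha power.

    The (contra - ipsi) / (contra + ipsi) normalisation is a common
    convention in covert-attention work, not necessarily the measure
    used by Roijendijk et al.; sensor and time-window selection are omitted.
    """
    return (contra_power - ipsi_power) / (contra_power + ipsi_power)

# Synthetic attend-left trials: alpha power should drop over the
# contralateral (right) hemisphere and stay higher ipsilaterally (left).
rng = np.random.default_rng(0)
ipsi = rng.gamma(shape=5.0, scale=1.2, size=100)    # left posterior sensors
contra = rng.gamma(shape=5.0, scale=1.0, size=100)  # right posterior sensors
ali = alpha_lateralization_index(contra, ipsi)
print(f"mean lateralization index: {ali.mean():.3f}")  # negative on average
```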

  4. Open angle glaucoma effects on preattentive visual search efficiency for flicker, motion displacement and orientation pop-out tasks.

    PubMed

    Loughman, James; Davison, Peter; Flitcroft, Ian

    2007-11-01

    Preattentive visual search (PAVS) describes rapid and efficient retinal and neural processing capable of immediate target detection in the visual field. Damage to the nerve fibre layer or visual pathway might reduce the efficiency with which the visual system performs such analysis. The purpose of this study was to test the hypothesis that patients with glaucoma are impaired on parallel search tasks, and that this would serve to distinguish glaucoma in early cases. Three groups of observers (glaucoma patients, suspect and normal individuals) were examined, using computer-generated flicker, orientation, and vertical motion displacement targets to assess PAVS efficiency. The task required rapid and accurate localisation of a singularity embedded in a field of 119 homogeneous distractors on either the left- or right-hand side of a computer monitor. All subjects also completed a choice reaction time (CRT) task. Independent-samples t tests revealed PAVS efficiency to be significantly impaired in the glaucoma group compared with both normal and suspect individuals. Performance was impaired in all types of glaucoma tested. Analysis between normal and suspect individuals revealed a significant difference only for motion displacement response times. Similar analysis using a PAVS/CRT index confirmed the glaucoma findings but also showed statistically significant differences between suspect and normal individuals across all target types. A test of PAVS efficiency appears capable of differentiating early glaucoma from both normal and suspect cases. Analysis incorporating a PAVS/CRT index enhances the diagnostic capacity to differentiate normal from suspect cases.
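
    The record mentions a PAVS/CRT index but does not define it. One plausible reading is a ratio of preattentive search response time to choice reaction time, sketched below purely as an illustration; the actual index used by Loughman et al. may differ.

```python
def pavs_crt_index(pavs_rt_ms: float, crt_ms: float) -> float:
    """Hypothetical search-efficiency index: preattentive visual search (PAVS)
    response time divided by choice reaction time (CRT). This is an assumed
    definition, not the one used in the study."""
    return pavs_rt_ms / crt_ms

# Made-up example values for a patient-like and a control-like observer.
print(round(pavs_crt_index(pavs_rt_ms=780.0, crt_ms=420.0), 2))  # ~1.86
print(round(pavs_crt_index(pavs_rt_ms=560.0, crt_ms=410.0), 2))  # ~1.37
```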

  5. Action Recognition and Movement Direction Discrimination Tasks Are Associated with Different Adaptation Patterns

    PubMed Central

    de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.

    2016-01-01

    The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target visual processes specific to action and direction discrimination. In separate conditions participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action, but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633

  6. Functional relationships between the hippocampus and dorsomedial striatum in learning a visual scene-based memory task in rats.

    PubMed

    Delcasso, Sébastien; Huh, Namjung; Byeon, Jung Seop; Lee, Jihyun; Jung, Min Whan; Lee, Inah

    2014-11-19

    The hippocampus is important for contextual behavior, and the striatum plays key roles in decision making. When studying the functional relationships with the hippocampus, prior studies have focused mostly on the dorsolateral striatum (DLS), emphasizing the antagonistic relationships between the hippocampus and DLS in spatial versus response learning. By contrast, the functional relationships between the dorsomedial striatum (DMS) and hippocampus are relatively unknown. The current study reports that lesions to both the hippocampus and DMS profoundly impaired performance of rats in a visual scene-based memory task in which the animals were required to make a choice response by using visual scenes displayed in the background. Analysis of simultaneous recordings of local field potentials revealed that the gamma oscillatory power was higher in the DMS, but not in CA1, when the rat performed the task using familiar scenes rather than novel ones. In addition, the CA1-DMS networks increased coherence at γ, but not at θ, rhythm as the rat mastered the task. At the single-unit level, the neuronal populations in CA1 and DMS showed differential firing patterns when responses were made using familiar visual scenes compared with novel ones. Such learning-dependent firing patterns were observed earlier in the DMS than in CA1 before the rat made choice responses. The present findings suggest that both the hippocampus and DMS process memory representations for visual scenes in parallel with different time courses and that flexible choice action using background visual scenes requires coordinated operations of the hippocampus and DMS at γ frequencies. Copyright © 2014 the authors.
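
    The field-field coherence analysis described above can be illustrated with a minimal SciPy sketch on synthetic signals: two traces sharing a 40 Hz (gamma-band) component show high gamma coherence but low theta coherence. This is not the authors' analysis pipeline; the sampling rate, window length, and example frequencies are assumptions.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000.0  # Hz, assumed LFP sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic LFPs sharing a 40 Hz (gamma) component but otherwise independent.
shared_gamma = np.sin(2 * np.pi * 40 * t)
ca1 = shared_gamma + rng.normal(scale=1.0, size=t.size)
dms = 0.8 * shared_gamma + rng.normal(scale=1.0, size=t.size)

# Welch-based magnitude-squared coherence between the two traces.
f, coh = coherence(ca1, dms, fs=fs, nperseg=1024)

idx_theta = np.argmin(np.abs(f - 8.0))   # a theta-band frequency
idx_gamma = np.argmin(np.abs(f - 40.0))  # the shared gamma frequency
print(f"coherence near 8 Hz (theta):  {coh[idx_theta]:.2f}")
print(f"coherence near 40 Hz (gamma): {coh[idx_gamma]:.2f}")
```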

  7. Does constraining memory maintenance reduce visual search efficiency?

    PubMed

    Buttaccio, Daniel R; Lange, Nicholas D; Thomas, Rick P; Dougherty, Michael R

    2018-03-01

    We examine whether constraining memory retrieval processes affects performance in a cued recall visual search task. In the visual search task, participants are first presented with a memory prompt followed by a search array. The memory prompt provides diagnostic information regarding a critical aspect of the target (its colour). We assume that upon the presentation of the memory prompt, participants retrieve and maintain hypotheses (i.e., potential target characteristics) in working memory in order to improve their search efficiency. By constraining retrieval through the manipulation of time pressure (Experiments 1A and 1B) or a concurrent working memory task (Experiments 2A, 2B, and 2C), we directly test the involvement of working memory in visual search. We find some evidence that visual search is less efficient under conditions in which participants were likely to be maintaining fewer hypotheses in working memory (Experiments 1A, 2A, and 2C), suggesting that the retrieval of representations from long-term memory into working memory can improve visual search. However, these results should be interpreted with caution, as the data from two experiments (Experiments 1B and 2B) did not lend support for this conclusion.

  8. Task-irrelevant distractors in the delay period interfere selectively with visual short-term memory for spatial locations.

    PubMed

    Marini, Francesco; Scott, Jerry; Aron, Adam R; Ester, Edward F

    2017-07-01

    Visual short-term memory (VSTM) enables the representation of information in a readily accessible state. VSTM is typically conceptualized as a form of "active" storage that is resistant to interference or disruption, yet several recent studies have shown that under some circumstances task-irrelevant distractors may indeed disrupt performance. Here, we investigated how task-irrelevant visual distractors affected VSTM by asking whether distractors induce a general loss of remembered information or selectively interfere with memory representations. In a VSTM task, participants recalled the spatial location of a target visual stimulus after a delay in which distractors were presented on 75% of trials. Notably, the distractor's eccentricity always matched the eccentricity of the target, while in the critical conditions the distractor's angular position was shifted either clockwise or counterclockwise relative to the target. We then computed estimates of recall error for both eccentricity and polar angle. A general interference model would predict an effect of distractors on both polar angle and eccentricity errors, while a selective interference model would predict effects of distractors on angle but not on eccentricity errors. Results showed that distractors increased the magnitude and variability of recall errors for stimulus angle. However, distractors had no effect on estimates of stimulus eccentricity. Our results suggest that distractors selectively interfere with VSTM for spatial locations.
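
    The analysis splits recall error into eccentricity and polar-angle components. A minimal sketch of that decomposition from 2D target and response coordinates is shown below on made-up data; the coordinate conventions and units are assumptions, not the authors' code.

```python
import numpy as np

def polar_error(target_xy: np.ndarray, response_xy: np.ndarray):
    """Split recall error into eccentricity and angular components.

    target_xy, response_xy: arrays of shape (n_trials, 2), assumed to be in
    degrees of visual angle relative to fixation. Returns
    (eccentricity_error, angle_error_deg), with angular error wrapped to
    [-180, 180).
    """
    ecc_t = np.hypot(target_xy[:, 0], target_xy[:, 1])
    ecc_r = np.hypot(response_xy[:, 0], response_xy[:, 1])
    ang_t = np.degrees(np.arctan2(target_xy[:, 1], target_xy[:, 0]))
    ang_r = np.degrees(np.arctan2(response_xy[:, 1], response_xy[:, 0]))
    ang_err = (ang_r - ang_t + 180.0) % 360.0 - 180.0
    return ecc_r - ecc_t, ang_err

# Tiny example: two trials with slightly misplaced responses.
targets = np.array([[4.0, 0.0], [0.0, 4.0]])
responses = np.array([[3.9, 0.5], [-0.3, 4.1]])
print(polar_error(targets, responses))
```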

  9. A Neural Network Approach to fMRI Binocular Visual Rivalry Task Analysis

    PubMed Central

    Bertolino, Nicola; Ferraro, Stefania; Nigri, Anna; Bruzzone, Maria Grazia; Ghielmetti, Francesco; Leonardi, Matilde; Agostino Parati, Eugenio; Franceschetti, Silvana; Caldiroli, Dario; Sattin, Davide; Giovannetti, Ambra; Pagani, Marco; Covelli, Venusia; Ciaraffa, Francesca; Vela Gomez, Jesus; Reggiori, Barbara; D'Incerti, Ludovico; Minati, Ludovico; Andronache, Adrian; Rosazza, Cristina; Fazio, Patrik; Rossi, Davide; Varotto, Giulia; Panzica, Ferruccio; Benti, Riccardo; Marotta, Giorgio; Molteni, Franco

    2014-01-01

    The purpose of this study was to investigate whether artificial neural networks (ANNs) are able to decode participants' conscious perceptual experience from brain activity alone, using complex and ecological stimuli. To this end, we conducted pattern recognition data analysis on fMRI data acquired during the execution of a binocular visual rivalry (BR) paradigm. Twelve healthy participants underwent fMRI during the execution of a binocular non-rivalry (BNR) and a BR paradigm in which two classes of stimuli (faces and houses) were presented. During the binocular rivalry paradigm, behavioral responses related to the switching between consciously perceived stimuli were also collected. First, we used the BNR paradigm as a functional localizer to identify the brain areas involved in the processing of the stimuli. Second, we trained the ANN on the BNR fMRI data restricted to these regions of interest. Third, we applied the trained ANN to the BR data as a 'brain reading' tool to discriminate the pattern of neural activity between the two stimuli. Fourth, we verified the consistency of the ANN outputs with the collected behavioral indicators of which stimulus was consciously perceived by the participants. Our main results showed that the trained ANN was able to generalize across the two different tasks (i.e., BNR and BR) and to identify with high accuracy the cognitive state of the participants (i.e., which stimulus was consciously perceived) during the BR condition. The behavioral response, employed as a control parameter, was compared with the network output, and a statistically significant percentage of correspondences (p < 0.05) was obtained for all subjects. In conclusion, the present study provides a method based on multivariate pattern analysis to investigate the neural basis of visual consciousness during the BR phenomenon when behavioral indicators are lacking or inconsistent, as in disorders of consciousness or in sedated patients. PMID:25121595
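
    The train-on-non-rivalry, decode-rivalry logic can be sketched with a generic classifier on synthetic ROI patterns. The study used a custom ANN on localizer-restricted fMRI data; here a scikit-learn multilayer perceptron and random data merely stand in to show the overall workflow, so all names and numbers are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_voxels = 200

# Synthetic ROI patterns: 'face' vs 'house' volumes from a non-rivalry run.
faces = rng.normal(0.3, 1.0, size=(60, n_voxels))
houses = rng.normal(-0.3, 1.0, size=(60, n_voxels))
X_train = np.vstack([faces, houses])
y_train = np.array([1] * 60 + [0] * 60)  # 1 = face, 0 = house

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                  random_state=0))
clf.fit(X_train, y_train)

# 'Brain reading' step: decode which stimulus dominates during rivalry
# volumes; in the study these predictions were compared against the
# participants' button-press reports of the consciously perceived stimulus.
X_rivalry = rng.normal(0.3, 1.0, size=(10, n_voxels))  # synthetic face-dominant
print(clf.predict(X_rivalry))
```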

  10. Is Attentional Resource Allocation Across Sensory Modalities Task-Dependent?

    PubMed

    Wahn, Basil; König, Peter

    2017-01-01

    Human information processing is limited by attentional resources. That is, via attentional mechanisms, humans select a limited amount of sensory input to process while other sensory input is neglected. In multisensory research, a matter of ongoing debate is whether there are distinct pools of attentional resources for each sensory modality or whether attentional resources are shared across sensory modalities. Recent studies have suggested that attentional resource allocation across sensory modalities is in part task-dependent. That is, the recruitment of attentional resources across the sensory modalities depends on whether processing involves object-based attention (e.g., the discrimination of stimulus attributes) or spatial attention (e.g., the localization of stimuli). In the present paper, we review findings in multisensory research related to this view. For the visual and auditory sensory modalities, findings suggest that distinct resources are recruited when humans perform object-based attention tasks, whereas for the visual and tactile sensory modalities, partially shared resources are recruited. If object-based attention tasks are time-critical, shared resources are recruited across the sensory modalities. When humans perform an object-based attention task in combination with a spatial attention task, partly shared resources are recruited across the sensory modalities as well. Conversely, for spatial attention tasks, attentional processing consistently involves shared attentional resources across the sensory modalities. Generally, findings suggest that the attentional system flexibly allocates attentional resources depending on task demands. We propose that such flexibility reflects a large-scale optimization strategy that minimizes the brain's costly resource expenditures and simultaneously maximizes capability to process currently relevant information.

  11. Contextual Cueing: Implicit Learning and Memory of Visual Context Guides Spatial Attention.

    ERIC Educational Resources Information Center

    Chun, Marvin M.; Jiang, Yuhong

    1998-01-01

    Six experiments involving a total of 112 college students demonstrate that a robust memory for visual context exists to guide spatial attention. Results show how implicit learning and memory of visual context can guide spatial attention toward task-relevant aspects of a scene. (SLD)

  12. Impaired Activation of Visual Attention Network for Motion Salience Is Accompanied by Reduced Functional Connectivity between Frontal Eye Fields and Visual Cortex in Strabismic Amblyopia

    PubMed Central

    Wang, Hao; Crewther, Sheila G.; Liang, Minglong; Laycock, Robin; Yu, Tao; Alexander, Bonnie; Crewther, David P.; Wang, Jian; Yin, Zhengqin

    2017-01-01

    Strabismic amblyopia is now acknowledged to be more than a simple loss of acuity and to involve alterations in visually driven attention, though whether this applies to both stimulus-driven and goal-directed attention has not been explored. Hence we investigated monocular threshold performance during a motion salience-driven attention task involving detection of a coherent dot motion target in one of four quadrants in adult controls and those with strabismic amblyopia. Psychophysical motion thresholds were impaired for the strabismic amblyopic eye, requiring longer inspection time and consequently slower target speed for detection compared to the fellow eye or control eyes. We compared fMRI activation and functional connectivity between four ROIs of the occipital-parieto-frontal visual attention network [primary visual cortex (V1), motion sensitive area V5, intraparietal sulcus (IPS) and frontal eye fields (FEF)], during a suprathreshold version of the motion-driven attention task, and also a simple goal-directed task, requiring voluntary saccades to targets randomly appearing along a horizontal line. Activation was compared when viewed monocularly by controls and the amblyopic and its fellow eye in strabismics. BOLD activation was weaker in IPS, FEF and V5 for both tasks when viewing through the amblyopic eye compared to viewing through the fellow eye or control participants' non-dominant eye. No difference in V1 activation was seen between the amblyopic and fellow eye, nor between the two eyes of control participants during the motion salience task, though V1 activation was significantly less through the amblyopic eye than through the fellow eye and control group non-dominant eye viewing during the voluntary saccade task. Functional correlations of ROIs within the attention network were impaired through the amblyopic eye during the motion salience task, whereas this was not the case during the voluntary saccade task. Specifically, FEF showed reduced functional

  13. Unintentional Activation of Translation Equivalents in Bilinguals Leads to Attention Capture in a Cross-Modal Visual Task

    PubMed Central

    Singh, Niharika; Mishra, Ramesh Kumar

    2015-01-01

    Using a variant of the visual world eye tracking paradigm, we examined whether language non-selective activation of translation equivalents leads to attention capture and distraction in a visual task in bilinguals. High and low proficient Hindi-English speaking bilinguals were instructed to programme a saccade towards a line drawing which changed colour among other distractor objects. A spoken word, irrelevant to the main task, was presented before the colour change. On critical trials, one of the line drawings had a name phonologically related to the translation equivalent of the spoken word. Results showed that saccade latency was significantly higher towards the target in the presence of this cross-linguistic translation competitor compared to when the display contained completely unrelated objects. Participants were also slower when the display contained the referent of the spoken word among the distractors. However, the bilingual groups did not differ with regard to the interference effect observed. These findings suggest that spoken words activate translation equivalents, which bias attention and lead to interference in goal-directed action in the visual domain. PMID:25775184

  14. Wavefront-Guided Versus Wavefront-Optimized Photorefractive Keratectomy: Visual and Military Task Performance.

    PubMed

    Ryan, Denise S; Sia, Rose K; Stutzman, Richard D; Pasternak, Joseph F; Howard, Robin S; Howell, Christopher L; Maurer, Tana; Torres, Mark F; Bower, Kraig S

    2017-01-01

    To compare visual performance, marksmanship performance, and threshold target identification following wavefront-guided (WFG) versus wavefront-optimized (WFO) photorefractive keratectomy (PRK). In this prospective, randomized clinical trial, active duty U.S. military Soldiers, age 21 or over, electing to undergo PRK were randomized to undergo WFG (n = 27) or WFO (n = 27) PRK for myopia or myopic astigmatism. Binocular visual performance was assessed preoperatively and 1, 3, and 6 months postoperatively: Super Vision Test high contrast, Super Vision Test contrast sensitivity (CS), and 25% contrast acuity with night vision goggle filter. The CS function was generated by testing at five spatial frequencies. Marksmanship performance in low light conditions was evaluated in a firing tunnel. Target detection and identification performance was tested for probability of identification of varying target sets and probability of detection of humans in cluttered environments. Visual performance, CS function, marksmanship, and threshold target identification demonstrated no statistically significant differences over time between the two treatments. Exploratory regression analysis of firing range tasks at 6 months showed no significant differences or correlations between procedures. Regression analysis of vehicle and handheld probability of identification showed a significant association with pretreatment performance. Both WFG and WFO PRK results translate to excellent and comparable visual and military performance. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.

  15. Why Do We Move Our Eyes while Trying to Remember? The Relationship between Non-Visual Gaze Patterns and Memory

    ERIC Educational Resources Information Center

    Micic, Dragana; Ehrlichman, Howard; Chen, Rebecca

    2010-01-01

    Non-visual gaze patterns (NVGPs) involve saccades and fixations that spontaneously occur in cognitive activities that are not ostensibly visual. While reasons for their appearance remain obscure, convergent empirical evidence suggests that NVGPs change according to processing requirements of tasks. We examined NVGPs in tasks with long-term memory…

  16. Executive Function Is Necessary for Perspective Selection, Not Level-1 Visual Perspective Calculation: Evidence from a Dual-Task Study of Adults

    ERIC Educational Resources Information Center

    Qureshi, Adam W.; Apperly, Ian A.; Samson, Dana

    2010-01-01

    Previous research suggests that perspective-taking and other "theory of mind" processes may be cognitively demanding for adult participants, and may be disrupted by concurrent performance of a secondary task. In the current study, a Level-1 visual perspective task was administered to 32 adults using a dual-task paradigm in which the secondary task…

  17. Visual attention in a complex search task differs between honeybees and bumblebees.

    PubMed

    Morawetz, Linde; Spaethe, Johannes

    2012-07-15

    Mechanisms of spatial attention are used when the amount of gathered information exceeds processing capacity. Such mechanisms have been proposed in bees, but have not yet been experimentally demonstrated. We provide evidence that selective attention influences the foraging performance of two social bee species, the honeybee Apis mellifera and the bumblebee Bombus terrestris. Visual search tasks, originally developed for application in human psychology, were adapted for behavioural experiments on bees. We examined the impact of distracting visual information on search performance, which we measured as error rate and decision time. We found that bumblebees were significantly less affected by distracting objects than honeybees. Based on the results, we conclude that the search mechanism in honeybees is serial-like, whereas in bumblebees it shows the characteristics of a restricted parallel-like search. Furthermore, the bees differed in their strategy to solve the speed-accuracy trade-off. Whereas bumblebees displayed slow but correct decision-making, honeybees exhibited fast and inaccurate decision-making. We propose two neuronal mechanisms of visual information processing that account for the different responses between honeybees and bumblebees, and we correlate species-specific features of the search behaviour to differences in habitat and life history.

  18. Postural Responses to a Suprapostural Visual Task among Children with and without Developmental Coordination Disorder

    ERIC Educational Resources Information Center

    Chen, F. C.; Tsai, C. L.; Stoffregen, T. A.; Wade, M. G.

    2011-01-01

    We sought to determine the effects of varying the perceptual demands of a suprapostural visual task on the postural activity of children with developmental coordination disorder (DCD), and typically developing children (TDC). Sixty-four (32 per group) children aged between 9 and 10 years participated. In a within-participants design, each child…

  19. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task.

    PubMed

    Bott, Nicholas T; Lange, Alex; Rentz, Dorene; Buffalo, Elizabeth; Clopton, Paul; Zola, Stuart

    2017-01-01

    Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive "window on the brain," and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and the 3 FPS built-in web camera at each of the three visits (r = 0.88-0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81-0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88-0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points, built
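
    The two reliability statistics reported above can be illustrated with standard library routines on synthetic scores: a Pearson correlation between eye-tracker and web-camera novelty preference scores, and a two-rater kappa for the manual web-camera scoring. The study cites Siegel and Castellan's kappa formula; for two raters the Cohen's kappa shown here is a common stand-in, and all numbers below are made up.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Synthetic novelty-preference scores (proportion of viewing time on the
# novel image) from the 60 FPS eye tracker and the 3 FPS web camera.
tracker = rng.uniform(0.4, 0.9, size=54)
webcam = np.clip(tracker + rng.normal(0.0, 0.05, size=54), 0.0, 1.0)
r, p = pearsonr(tracker, webcam)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")

# Synthetic frame-level gaze labels from two human scorers of the web-camera
# video (e.g., 0 = left image, 1 = right image, 2 = off-screen).
rater_a = rng.integers(0, 3, size=200)
rater_b = np.where(rng.random(200) < 0.85, rater_a, rng.integers(0, 3, size=200))
print(f"Cohen's kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")
```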

  20. Effects of Verbal Reinforcement on Intellective Task Performance as a Function of Self-Esteem and Task-Involvement. Final Report.

    ERIC Educational Resources Information Center

    Fischer, Edward H.; Herschberger, Austin C.

    Use of the verbal reinforcement technique (VRT) in developmental, personality, and socialization studies often rests on tenuous and untested assumptions. This study examined five variables which hypothetically relate to performance under reinforcement: self-esteem of the subject, task-involvement, experimenter, ordinal position, and family size. The method…

  1. Screening for Impaired Visual Acuity in Older Adults: US Preventive Services Task Force Recommendation Statement.

    PubMed

    Siu, Albert L; Bibbins-Domingo, Kirsten; Grossman, David C; Baumann, Linda Ciofu; Davidson, Karina W; Ebell, Mark; García, Francisco A R; Gillman, Matthew; Herzstein, Jessica; Kemper, Alex R; Krist, Alex H; Kurth, Ann E; Owens, Douglas K; Phillips, William R; Phipps, Maureen G; Pignone, Michael P

    2016-03-01

    Update of the US Preventive Services Task Force (USPSTF) recommendation on screening for impaired visual acuity in older adults. The USPSTF reviewed the evidence on screening for visual acuity impairment associated with uncorrected refractive error, cataracts, and age-related macular degeneration among adults 65 years or older in the primary care setting; the benefits and harms of screening; the accuracy of screening; and the benefits and harms of treatment of early vision impairment due to uncorrected refractive error, cataracts, and age-related macular degeneration. This recommendation applies to asymptomatic adults 65 years or older who do not present to their primary care clinician with vision problems. The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of screening for impaired visual acuity in older adults. (I statement).

  2. Processing of pitch and location in human auditory cortex during visual and auditory tasks.

    PubMed

    Häkkinen, Suvi; Ovaska, Noora; Rinne, Teemu

    2015-01-01

    The relationship between stimulus-dependent and task-dependent activations in human auditory cortex (AC) during pitch and location processing is not well understood. In the present functional magnetic resonance imaging study, we investigated the processing of task-irrelevant and task-relevant pitch and location during discrimination, n-back, and visual tasks. We tested three hypotheses: (1) According to prevailing auditory models, stimulus-dependent processing of pitch and location should be associated with enhanced activations in distinct areas of the anterior and posterior superior temporal gyrus (STG), respectively. (2) Based on our previous studies, task-dependent activation patterns during discrimination and n-back tasks should be similar when these tasks are performed on sounds varying in pitch or location. (3) Previous studies in humans and animals suggest that pitch and location tasks should enhance activations especially in those areas that also show activation enhancements associated with stimulus-dependent pitch and location processing, respectively. Consistent with our hypotheses, we found stimulus-dependent sensitivity to pitch and location in anterolateral STG and anterior planum temporale (PT), respectively, in line with the view that these features are processed in separate parallel pathways. Further, task-dependent activations during discrimination and n-back tasks were associated with enhanced activations in anterior/posterior STG and posterior STG/inferior parietal lobule (IPL) irrespective of stimulus features. However, direct comparisons between pitch and location tasks performed on identical sounds revealed no significant activation differences. These results suggest that activations during pitch and location tasks are not strongly affected by enhanced stimulus-dependent activations to pitch or location. We also found that activations in PT were strongly modulated by task requirements and that areas in the inferior parietal lobule (IPL) showed

  3. Processing of pitch and location in human auditory cortex during visual and auditory tasks

    PubMed Central

    Häkkinen, Suvi; Ovaska, Noora; Rinne, Teemu

    2015-01-01

    The relationship between stimulus-dependent and task-dependent activations in human auditory cortex (AC) during pitch and location processing is not well understood. In the present functional magnetic resonance imaging study, we investigated the processing of task-irrelevant and task-relevant pitch and location during discrimination, n-back, and visual tasks. We tested three hypotheses: (1) According to prevailing auditory models, stimulus-dependent processing of pitch and location should be associated with enhanced activations in distinct areas of the anterior and posterior superior temporal gyrus (STG), respectively. (2) Based on our previous studies, task-dependent activation patterns during discrimination and n-back tasks should be similar when these tasks are performed on sounds varying in pitch or location. (3) Previous studies in humans and animals suggest that pitch and location tasks should enhance activations especially in those areas that also show activation enhancements associated with stimulus-dependent pitch and location processing, respectively. Consistent with our hypotheses, we found stimulus-dependent sensitivity to pitch and location in anterolateral STG and anterior planum temporale (PT), respectively, in line with the view that these features are processed in separate parallel pathways. Further, task-dependent activations during discrimination and n-back tasks were associated with enhanced activations in anterior/posterior STG and posterior STG/inferior parietal lobule (IPL) irrespective of stimulus features. However, direct comparisons between pitch and location tasks performed on identical sounds revealed no significant activation differences. These results suggest that activations during pitch and location tasks are not strongly affected by enhanced stimulus-dependent activations to pitch or location. We also found that activations in PT were strongly modulated by task requirements and that areas in the inferior parietal lobule (IPL) showed

  4. The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information.

    PubMed

    Buchan, Julie N; Munhall, Kevin G

    2012-01-01

    Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect the audiovisual integration in a McGurk speech task and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk response). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results from this experiment show a very modest but statistically significant decrease in the number of McGurk responses when subjects also perform a cognitive load task, and indicate that this effect is relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of a cognitive load task. Gaze was less centralized on the face, less time was spent looking at the mouth and more time was spent looking at the eyes, when a concurrent cognitive load task was added to the speech task.

  5. Fractal fluctuations in gaze speed visual search.

    PubMed

    Stephen, Damian G; Anastas, Jason

    2011-04-01

    Visual search involves a subtle coordination of visual memory and lower-order perceptual mechanisms. Specifically, the fluctuations in gaze may provide support for visual search above and beyond what may be attributed to memory. Prior research indicates that gaze during search exhibits fractal fluctuations, which allow for a wide sampling of the field of view. Fractal fluctuations constitute a case of fast diffusion that may provide an advantage in exploration. We present reanalyses of eye-tracking data collected by Stephen and Mirman (Cognition, 115, 154-165, 2010) for single-feature and conjunction search tasks. Fluctuations in gaze during these search tasks were indeed fractal. Furthermore, the degree of fractality predicted decreases in reaction time on a trial-by-trial basis. We propose that fractality may play a key role in explaining the efficacy of perceptual exploration.
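
    One standard way to quantify fractal fluctuations in a gaze-speed series is detrended fluctuation analysis (DFA), sketched compactly below on synthetic data. The original reanalysis may have used a different estimator or parameter choices, so treat this as a generic illustration of the technique rather than the authors' method.

```python
import numpy as np

def dfa_exponent(x: np.ndarray, scales=None) -> float:
    """Estimate the DFA scaling exponent alpha of a 1-D series.

    A compact textbook implementation for illustration only; window sizes and
    detrending order are assumptions, not the published analysis settings.
    """
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())
    if scales is None:
        scales = np.unique(np.logspace(2, np.log10(len(x) // 4), 12).astype(int))
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)  # linear detrend within each window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

# White noise should give alpha near 0.5; persistent (fractal) fluctuations
# of the kind reported for gaze speed give alpha above 0.5.
rng = np.random.default_rng(0)
print(f"white-noise alpha ~ {dfa_exponent(rng.normal(size=4096)):.2f}")
```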

  6. Verbal task demands are key in explaining the relationship between paired-associate learning and reading ability.

    PubMed

    Clayton, Francina J; Sears, Claire; Davis, Alice; Hulme, Charles

    2018-07-01

    Paired-associate learning (PAL) tasks measure the ability to form a novel association between a stimulus and a response. Performance on such tasks is strongly associated with reading ability, and there is increasing evidence that verbal task demands may be critical in explaining this relationship. The current study investigated the relationships between different forms of PAL and reading ability. A total of 97 children aged 8-10 years completed a battery of reading assessments and six different PAL tasks (phoneme-phoneme, visual-phoneme, nonverbal-nonverbal, visual-nonverbal, nonword-nonword, and visual-nonword) involving both familiar phonemes and unfamiliar nonwords. A latent variable path model showed that PAL ability is captured by two correlated latent variables: auditory-articulatory and visual-articulatory. The auditory-articulatory latent variable was the stronger predictor of reading ability, providing support for a verbal account of the PAL-reading relationship. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Processes Involving Perceived Instructional Support, Task Value, and Engagement in Graduate Education

    ERIC Educational Resources Information Center

    Marchand, Gwen C.; Gutierrez, Antonio P.

    2017-01-01

    The purpose of this study was to investigate the relations among perceived instructional support (provision of relevance and involvement), subjective task value beliefs (utility, attainment, and intrinsic value), and engagement (behavioral and emotional) over the course of a semester for graduate students enrolled in an introductory research…

  8. Neuronal responses to target onset in oculomotor and somatomotor parietal circuits differ markedly in a choice task.

    PubMed

    Kubanek, J; Wang, C; Snyder, L H

    2013-11-01

    We often look at and sometimes reach for visible targets. Looking at a target is fast and relatively easy. By comparison, reaching for an object is slower and is associated with a larger cost. We hypothesized that, as a result of these differences, abrupt visual onsets may drive the circuits involved in saccade planning more directly and with less intermediate regulation than the circuits involved in reach planning. To test this hypothesis, we recorded discharge activity of neurons in the parietal oculomotor system (area LIP) and in the parietal somatomotor system (area PRR) while monkeys performed a visually guided movement task and a choice task. We found that in the visually guided movement task LIP neurons show a prominent transient response to target onset. PRR neurons also show a transient response, although this response is reduced in amplitude, is delayed, and has a slower rise time compared with LIP. A more striking difference is observed in the choice task. The transient response of PRR neurons is almost completely abolished and replaced with a slow buildup of activity, while the LIP response is merely delayed and reduced in amplitude. Our findings suggest that the oculomotor system is more closely and obligatorily coupled to the visual system, whereas the somatomotor system operates in a more discriminating manner.

  9. Functional magnetic resonance imaging of visual object construction and shape discrimination : relations among task, hemispheric lateralization, and gender.

    PubMed

    Georgopoulos, A P; Whang, K; Georgopoulos, M A; Tagaris, G A; Amirikian, B; Richter, W; Kim, S G; Uğurbil, K

    2001-01-01

    , the FIT distribution was, overall, more anterior and inferior than that of the SAME task. A detailed analysis of the counts and spatial distributions of activated pixels was carried out for 15 brain areas (all in the cerebral cortex) in which a consistent activation (in ≥ 3 subjects) was observed (n = 323 activated pixels). We found the following. Except for the inferior temporal gyrus, which was activated exclusively in the FIT task, all other areas showed activation in both tasks but to different extents. Based on the extent of activation, areas fell within two distinct groups (FIT or SAME) depending on which pixel count (i.e., FIT or SAME) was greater. The FIT group consisted of the following areas, in decreasing FIT/SAME order (brackets indicate ties): GTi, GTs, GC, GFi, GFd, [GTm, GF], GO. The SAME group consisted of the following areas, in decreasing SAME/FIT order: GOi, LPs, Sca, GPrC, GPoC, [GFs, GFm]. These results indicate that there are distributed, graded, and partially overlapping patterns of activation during performance of the two tasks. We attribute these overlapping patterns of activation to the engagement of partially shared processes. Activated pixels fell into three types of clusters: FIT-only (111 pixels), SAME-only (97 pixels), and FIT + SAME (115 pixels). Pixels contained in FIT-only and SAME-only clusters were distributed approximately equally between the left and right hemispheres, whereas pixels in the SAME + FIT clusters were located mostly in the left hemisphere. With respect to gender, the left-right distribution of activated pixels was very similar in women and men for the SAME-only and FIT + SAME clusters but differed for the FIT-only case in which there was a prominent left side preponderance for women, in contrast to a right side preponderance for men. We conclude that (a) cortical mechanisms common for processing visual object construction and discrimination involve mostly the left hemisphere, (b) cortical mechanisms

  10. Visual Discrimination and Motor Reproduction of Movement by Individuals with Mental Retardation.

    ERIC Educational Resources Information Center

    Shinkfield, Alison J.; Sparrow, W. A.; Day, R. H.

    1997-01-01

    Visual discrimination and motor reproduction tasks involving computer-simulated arm movements were administered to 12 adults with mental retardation and a gender-matched control group. The purpose was to examine whether inadequacies in visual perception account for the poorer motor performance of this population. Results indicate both perceptual…

  11. Ocular dynamics and visual tracking performance after Q-switched laser exposure

    NASA Astrophysics Data System (ADS)

    Zwick, Harry; Stuck, Bruce E.; Lund, David J.; Nawim, Maqsood

    2001-05-01

    In previous investigations of q-switched laser retinal exposure in awake, task-oriented non-human primates (NHPs), the threshold for retinal damage occurred well below the threshold for permanent visual function loss. Visual function measures used in these studies involved measures of visual acuity and contrast sensitivity. In the present study, we examine the same relationship for q-switched laser exposure using a visual performance task, where task dependency involves more parafoveal than foveal retina. NHPs were trained on a visual pursuit motor tracking performance task that required maintaining a small HeNe laser spot (0.3 degrees) centered in a slowly moving (0.5 deg/sec) annulus. When NHPs reliably produced visual target tracking efficiencies > 80%, single q-switched laser exposures (7 nsec) were made coaxially with the line of sight of the moving target. An infrared camera imaged the pupil during exposure to obtain the pupillary response to the laser flash. Retinal images were obtained with a scanning laser ophthalmoscope 3 days post-exposure under ketamine and Nembutal anesthesia. Q-switched visible laser exposures at twice the damage threshold produced small (about 50 μm) retinal lesions temporal to the fovea; deficits in NHP visual pursuit tracking were transient, demonstrating full recovery to baseline within a single tracking session. Post-exposure analysis of the pupillary response demonstrated that the exposure flash entered the pupil, followed by a 90 msec refractory period and then a 12% pupillary contraction within 1.5 sec from the onset of laser exposure. At 6 times the morphological threshold damage level for 532 nm q-switched exposure, longer term losses in NHP pursuit tracking performance were observed. In summary, q-switched laser exposure appears to have a higher threshold for permanent visual performance loss than the corresponding threshold to produce threshold retinal injury. Mechanisms of neural plasticity within the retina and at

  12. Visual attention and emotional memory: recall of aversive pictures is partially mediated by concurrent task performance.

    PubMed

    Pottage, Claire L; Schaefer, Alexandre

    2012-02-01

    The emotional enhancement of memory is often thought to be determined by attention. However, recent evidence using divided attention paradigms suggests that attention does not play a significant role in the formation of memories for aversive pictures. We report a study that investigated this question using a paradigm in which participants had to encode lists of randomly intermixed negative and neutral pictures under conditions of full attention and divided attention (DA), followed by a free recall test. Attention was divided by a highly demanding concurrent task tapping visual processing resources. Results showed that the advantage in recall for aversive pictures was still present in the DA condition. However, mediation analyses also revealed that concurrent task performance significantly mediated the emotional enhancement of memory under divided attention. This finding suggests that visual attentional processes play a significant role in the formation of emotional memories. PsycINFO Database Record (c) 2012 APA, all rights reserved
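
    The mediation logic described above (emotion condition → concurrent-task performance → recall) can be illustrated with ordinary least squares and a bootstrap of the indirect effect on synthetic data. The model specification, trial structure, and effect sizes below are assumptions for illustration, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic participant-level data: X = emotional (1) vs neutral (0) material,
# M = concurrent-task performance, Y = free recall.
X = rng.integers(0, 2, size=n).astype(float)
M = 0.5 * X + rng.normal(scale=1.0, size=n)
Y = 0.3 * M + 0.2 * X + rng.normal(scale=1.0, size=n)

def ols_coefs(y, predictors):
    """Return OLS coefficients for y ~ intercept + predictors."""
    A = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(A, y, rcond=None)[0]

# a-path (X -> M), b-path (M -> Y controlling for X), indirect effect = a*b.
a = ols_coefs(M, [X])[1]
b = ols_coefs(Y, [X, M])[2]
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    a_i = ols_coefs(M[idx], [X[idx]])[1]
    b_i = ols_coefs(Y[idx], [X[idx], M[idx]])[2]
    boot.append(a_i * b_i)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {a*b:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```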

  13. Visual short-term memory: activity supporting encoding and maintenance in retinotopic visual cortex.

    PubMed

    Sneve, Markus H; Alnæs, Dag; Endestad, Tor; Greenlee, Mark W; Magnussen, Svein

    2012-10-15

    Recent studies have demonstrated that retinotopic cortex maintains information about visual stimuli during retention intervals. However, the process by which transient stimulus-evoked sensory responses are transformed into enduring memory representations is unknown. Here, using fMRI and short-term visual memory tasks optimized for univariate and multivariate analysis approaches, we report differential involvement of human retinotopic areas during memory encoding of the low-level visual feature orientation. All visual areas show weaker responses when memory encoding processes are interrupted, possibly due to effects in orientation-sensitive primary visual cortex (V1) propagating across extrastriate areas. Furthermore, intermediate areas in both dorsal (V3a/b) and ventral (LO1/2) streams are significantly more active during memory encoding compared with non-memory (active and passive) processing of the same stimulus material. These effects in intermediate visual cortex are also observed during memory encoding of a different stimulus feature (spatial frequency), suggesting that these areas are involved in encoding processes on a higher level of representation. Using pattern-classification techniques to probe the representational content in visual cortex during delay periods, we further demonstrate that simply initiating memory encoding is not sufficient to produce long-lasting memory traces. Rather, active maintenance appears to underlie the observed memory-specific patterns of information in retinotopic cortex. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. When does incivility lead to counterproductive work behavior? Roles of job involvement, task interdependence, and gender.

    PubMed

    Welbourne, Jennifer L; Sariol, Ana M

    2017-04-01

    This research investigated the conditions under which exposure to incivility at work was associated with engaging in counterproductive work behavior (CWB). Drawing from stressor-strain and coping frameworks, we predicted that experienced incivility would be associated with engaging in production deviance and withdrawal behavior, and that these relationships would be strongest for employees who had high levels of job involvement and worked under task interdependent conditions. Gender differences in these effects were also investigated. A sample of 250 United States full-time employees from various occupations completed 2 waves (timed 6 weeks apart) of an online survey. Results indicate that employees with high job involvement were more likely to engage in production deviance and withdrawal behavior following exposure to incivility than were employees with low job involvement. The moderating effect of task interdependence varied by gender, such that the relationship between incivility and CWB was strengthened under high task interdependence for female employees, but weakened under high task interdependence for male employees. These findings highlight that certain work conditions can increase employees' susceptibility to the impacts of incivility, leading to harmful outcomes for organizations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. The cerebellum and visual perceptual learning: evidence from a motion extrapolation task.

    PubMed

    Deluca, Cristina; Golzar, Ashkan; Santandrea, Elisa; Lo Gerfo, Emanuele; Eštočinová, Jana; Moretto, Giuseppe; Fiaschi, Antonio; Panzeri, Marta; Mariotti, Caterina; Tinazzi, Michele; Chelazzi, Leonardo

    2014-09-01

    Visual perceptual learning is widely assumed to reflect plastic changes occurring along the cerebro-cortical visual pathways, including at the earliest stages of processing, though increasing evidence indicates that higher-level brain areas are also involved. Here we addressed the possibility that the cerebellum plays an important role in visual perceptual learning. Within the realm of motor control, the cerebellum supports learning of new skills and recalibration of motor commands when movement execution is consistently perturbed (adaptation). Growing evidence indicates that the cerebellum is also involved in cognition and mediates forms of cognitive learning. Therefore, the obvious question arises whether the cerebellum might play a similar role in learning and adaptation within the perceptual domain. We explored a possible deficit in visual perceptual learning (and adaptation) in patients with cerebellar damage using variants of a novel motion extrapolation, psychophysical paradigm. Compared to their age- and gender-matched controls, patients with focal damage to the posterior (but not the anterior) cerebellum showed strongly diminished learning, in terms of both rate and amount of improvement over time. Consistent with a double-dissociation pattern, patients with focal damage to the anterior cerebellum instead showed more severe clinical motor deficits, indicative of a distinct role of the anterior cerebellum in the motor domain. The collected evidence demonstrates that a pure form of slow-incremental visual perceptual learning is crucially dependent on an intact cerebellum, bearing out the notion that the human cerebellum acts as a learning device for motor, cognitive and perceptual functions. We interpret the deficit in terms of an inability to fine-tune predictive models of the incoming flow of visual perceptual input over time. Moreover, our results suggest a strong dissociation between the role of different portions of the cerebellum in motor versus

  16. Role for the M1 Muscarinic Acetylcholine Receptor in Top-Down Cognitive Processing Using a Touchscreen Visual Discrimination Task in Mice.

    PubMed

    Gould, R W; Dencker, D; Grannan, M; Bubser, M; Zhan, X; Wess, J; Xiang, Z; Locuson, C; Lindsley, C W; Conn, P J; Jones, C K

    2015-10-21

    The M1 muscarinic acetylcholine receptor (mAChR) subtype has been implicated in the underlying mechanisms of learning and memory and represents an important potential pharmacotherapeutic target for the cognitive impairments observed in neuropsychiatric disorders such as schizophrenia. Patients with schizophrenia show impairments in top-down processing involving conflict between sensory-driven and goal-oriented processes that can be modeled in preclinical studies using touchscreen-based cognition tasks. The present studies used a touchscreen visual pairwise discrimination task in which mice discriminated between a less salient and a more salient stimulus to assess the influence of the M1 mAChR on top-down processing. M1 mAChR knockout (M1 KO) mice showed a slower rate of learning, evidenced by slower increases in accuracy over 12 consecutive days, and required more days to acquire (achieve 80% accuracy) this discrimination task compared to wild-type mice. In addition, the M1 positive allosteric modulator BQCA enhanced the rate of learning this discrimination in wild-type, but not in M1 KO, mice when BQCA was administered daily prior to testing over 12 consecutive days. Importantly, in discriminations between stimuli of equal salience, M1 KO mice did not show impaired acquisition and BQCA did not affect the rate of learning or acquisition in wild-type mice. These studies are the first to demonstrate performance deficits in M1 KO mice using touchscreen cognitive assessments and enhanced rate of learning and acquisition in wild-type mice through M1 mAChR potentiation when the touchscreen discrimination task involves top-down processing. Taken together, these findings provide further support for M1 potentiation as a potential treatment for the cognitive symptoms associated with schizophrenia.

  17. Signal detection theory applied to three visual search tasks--identification, yes/no detection and localization.

    PubMed

    Cameron, E Leslie; Tai, Joanna C; Eckstein, Miguel P; Carrasco, Marisa

    2004-01-01

    Adding distracters to a display impairs performance on visual tasks (i.e. the set-size effect). While keeping the display characteristics constant, we investigated this effect in three tasks: 2-target identification, yes/no detection with 2 targets, and 8-alternative localization. A Signal Detection Theory (SDT) model, tailored for each task, accounts for the set-size effects observed in identification and localization tasks, and slightly under-predicts the set-size effect in a detection task. Given that sensitivity varies as a function of spatial frequency (SF), we measured performance in each of these three tasks in neutral and peripheral precue conditions for each of six spatial frequencies (0.5-12 cpd). For all spatial frequencies tested, performance on the three tasks decreased as set size increased in the neutral precue condition, and the peripheral precue reduced the effect. Larger set-size effects were observed at low SFs in the identification and localization tasks. This effect can be described using the SDT model, but was not predicted by it. For each of these tasks we also established the extent to which covert attention modulates performance across a range of set sizes. A peripheral precue substantially diminished the set-size effect and improved performance, even at set size 1. These results provide support for distracter exclusion, and suggest that signal enhancement may also be a mechanism by which covert attention can impose its effect.
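
    One ingredient of the SDT account is that adding distracter locations lowers accuracy even when sensitivity (d') at each location is unchanged, because the decision pools over more noisy responses. The Monte Carlo sketch below illustrates this for a max-rule observer in an M-alternative localization task; the d' value and simulation details are illustrative, not the task-tailored models used in the paper.

```python
import numpy as np

def localization_accuracy(d_prime: float, set_size: int,
                          n_trials: int = 20000, seed: int = 0) -> float:
    """Max-rule SDT observer for an M-alternative localization task.

    Each of `set_size` locations yields a noisy internal response; the target
    location gets a mean boost of d'. The observer reports the location with
    the largest response. Parameters are illustrative; the paper tailored
    separate models to identification, detection, and localization.
    """
    rng = np.random.default_rng(seed)
    responses = rng.normal(size=(n_trials, set_size))
    responses[:, 0] += d_prime          # location 0 contains the target
    return float((responses.argmax(axis=1) == 0).mean())

# Accuracy falls as distracter locations are added, even with constant d'.
for m in (2, 4, 8):
    print(f"set size {m}: proportion correct ~ {localization_accuracy(1.5, m):.3f}")
```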

  18. Visual event-related potential changes in multiple system atrophy: delayed N2 latency in selective attention to a color task.

    PubMed

    Kamitani, Toshiaki; Kuroiwa, Yoshiyuki

    2009-01-01

    Recent studies demonstrated an altered P3 component and prolonged reaction time during the visual discrimination tasks in multiple system atrophy (MSA). In MSA, however, little is known about the N2 component which is known to be closely related to the visual discrimination process. We therefore compared the N2 component as well as the N1 and P3 components in 17 MSA patients with these components in 10 normal controls, by using a visual selective attention task to color or to shape. While the P3 in MSA was significantly delayed in selective attention to shape, the N2 in MSA was significantly delayed in selective attention to color. N1 was normally preserved both in attention to color and in attention to shape. Our electrophysiological results indicate that the color discrimination process during selective attention is impaired in MSA.

  19. Different Levels of Food Restriction Reveal Genotype-Specific Differences in Learning a Visual Discrimination Task

    PubMed Central

    Makowiecki, Kalina; Hammond, Geoff; Rodger, Jennifer

    2012-01-01

    In behavioural experiments, motivation to learn can be achieved using food rewards as positive reinforcement in food-restricted animals. Previous studies reduce animal weights to 80–90% of free-feeding body weight as the criterion for food restriction. However, effects of different degrees of food restriction on task performance have not been assessed. We compared learning task performance in mice food-restricted to 80 or 90% body weight (BW). We used adult wildtype (WT; C57Bl/6j) and knockout (ephrin-A2−/−) mice, previously shown to have a reverse learning deficit. Mice were trained in a two-choice visual discrimination task with food reward as positive reinforcement. When mice reached criterion for one visual stimulus (80% correct in three consecutive 10 trial sets) they began the reverse learning phase, where the rewarded stimulus was switched to the previously incorrect stimulus. For the initial learning and reverse phase of the task, mice at 90%BW took almost twice as many trials to reach criterion as mice at 80%BW. Furthermore, WT 80 and 90%BW groups significantly differed in percentage correct responses and learning strategy in the reverse learning phase, whereas no differences between weight restriction groups were observed in ephrin-A2−/− mice. Most importantly, genotype-specific differences in reverse learning strategy were only detected in the 80%BW groups. Our results indicate that increased food restriction not only results in better performance and a shorter training period, but may also be necessary for revealing behavioural differences between experimental groups. This has important ethical and animal welfare implications when deciding extent of diet restriction in behavioural studies. PMID:23144936

  1. Effects of spatial congruency on saccade and visual discrimination performance in a dual-task paradigm.

    PubMed

    Moehler, Tobias; Fiehler, Katja

    2014-12-01

    The present study investigated the coupling of selection-for-perception and selection-for-action during saccadic eye movement planning in three dual-task experiments. We focused on the effects of spatial congruency of saccade target (ST) location and discrimination target (DT) location and the time between ST-cue and Go-signal (SOA) on saccadic eye movement performance. In two experiments, participants performed a visual discrimination task at a cued location while programming a saccadic eye movement to a cued location. In the third experiment, the discrimination task was not cued and appeared at a random location. Spatial congruency of ST-location and DT-location resulted in enhanced perceptual performance irrespective of SOA. Perceptual performance in spatially incongruent trials was above chance, but only when the DT-location was cued. Saccade accuracy and precision were also affected by spatial congruency showing superior performance when the ST- and DT-location coincided. Saccade latency was only affected by spatial congruency when the DT-cue was predictive of the ST-location. Moreover, saccades consistently curved away from the incongruent DT-locations. Importantly, the effects of spatial congruency on saccade parameters only occurred when the DT-location was cued; therefore, results from experiments 1 and 2 are due to the endogenous allocation of attention to the DT-location and not caused by the salience of the probe. The SOA affected saccade latency showing decreasing latencies with increasing SOA. In conclusion, our results demonstrate that visuospatial attention can be voluntarily distributed upon spatially distinct perceptual and motor goals in dual-task situations, resulting in a decline of visual discrimination and saccade performance.

  2. Visualizing surgical quality data with treemaps.

    PubMed

    Hugine, Akilah L; Guerlain, Stephanie A; Turrentine, Florence E

    2014-09-01

    Treemaps are space-constrained visualizations for displaying hierarchical data structures as nested rectangles, allowing large amounts of data to be examined in a single display. The objective of this research was to examine the effects of using treemap visualizations to help surgeons assess surgical quality data from the American College of Surgeons' National Surgical Quality Improvement Program (NSQIP) database in a quick and timely manner. A controlled human subjects experiment was conducted to assess the ability of individuals to make quick and accurate judgments on surgery data by visualizing a treemap, with data hierarchically displayed by surgeon group, surgeon, and patient. Participants were given 20 task questions that involved examining the treemap and comparing surgeons' patients based on outcomes (dead or alive) and length of stay (in days). The outcomes measured were error (incorrect or correct) and task completion time. 120 participants completed 20 task questions for a total of 2,400 responses. The main effects of layout and node size were found to be significant for absolute error, P < 0.0505 and P < 0.0185, respectively. The average time to complete a task was 24 s, with an accuracy rate of approximately 68%. This study served as a proof of concept to determine whether treemaps could be beneficial in assessing surgical data retrospectively by allowing surgeons and healthcare administrators to make quick visual judgments. The study found that factors of the layout design affect judgment performance. Future research is needed to examine whether implementing the treemap within a dashboard system will improve judgment accuracy for surgical quality questions. Published by Elsevier Inc.
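
    For readers unfamiliar with treemaps, the sketch below builds the kind of surgeon-group → surgeon → patient hierarchy described above with plotly; the toy data frame and column names are assumptions, not the study's NSQIP data or display tool.

```python
import pandas as pd
import plotly.express as px

# Hypothetical records standing in for NSQIP-style data.
df = pd.DataFrame({
    "surgeon_group": ["A", "A", "A", "B", "B"],
    "surgeon":       ["A1", "A1", "A2", "B1", "B1"],
    "patient":       ["p1", "p2", "p3", "p4", "p5"],
    "length_of_stay": [3, 10, 5, 2, 14],      # rectangle area
    "outcome":       ["alive", "dead", "alive", "alive", "alive"],
})

# Nested rectangles: group -> surgeon -> patient, colored by outcome.
fig = px.treemap(df, path=["surgeon_group", "surgeon", "patient"],
                 values="length_of_stay", color="outcome")
fig.show()
```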

  3. The role of the human pulvinar in visual attention and action: evidence from temporal-order judgment, saccade decision, and antisaccade tasks.

    PubMed

    Arend, Isabel; Machado, Liana; Ward, Robert; McGrath, Michelle; Ro, Tony; Rafal, Robert D

    2008-01-01

    The pulvinar nucleus of the thalamus has been considered as a key structure for visual attention functions (Grieve, K.L. et al. (2000). Trends Neurosci., 23: 35-39; Shipp, S. (2003). Philos. Trans. R. Soc. Lond. B Biol. Sci., 358(1438): 1605-1624). During the past several years, we have studied the role of the human pulvinar in visual attention and oculomotor behaviour by testing a small group of patients with unilateral pulvinar lesions. Here we summarize some of these findings, and present new evidence for the role of this structure in both eye movements and visual attention through two versions of a temporal-order judgment task and an antisaccade task. Pulvinar damage induces an ipsilesional bias in perceptual temporal-order judgments and in saccadic decision, and also increases the latency of antisaccades away from contralesional targets. The demonstration that pulvinar damage affects both attention and oculomotor behaviour highlights the role of this structure in the integration of visual and oculomotor signals and, more generally, its role in flexibly linking visual stimuli with context-specific motor responses.

  4. Productivity associated with visual status of computer users.

    PubMed

    Daum, Kent M; Clore, Katherine A; Simms, Suzanne S; Vesely, Jon W; Wilczek, Dawn D; Spittle, Brian M; Good, Greg W

    2004-01-01

    The aim of this project is to examine the potential connection between the astigmatic refractive corrections of subjects using computers and their productivity and comfort. We hypothesize that improving the visual status of subjects using computers results in greater productivity, as well as improved visual comfort. Inclusion criteria required subjects 19 to 30 years of age with complete vision examinations before being enrolled. Using a double-masked, placebo-controlled, randomized design, subjects completed three experimental tasks designed to assess the effects of refractive error on productivity (time to completion and the number of errors) at a computer. The tasks resembled those commonly undertaken by computer users and involved visual search tasks of: (1) counties and populations; (2) nonsense word search; and (3) a modified text-editing task. Estimates of the effect on productivity for time to completion ranged from 2.5% up to 28.7% with 2 D of cylinder miscorrection. Assuming a conservative estimate of an overall 2.5% increase in productivity with appropriate astigmatic refractive correction, our data suggest a favorable cost-benefit ratio of at least 2.3 for the visual correction of an employee (total cost $268) with a salary of $25,000 per year. We conclude that astigmatic refractive error affected both productivity and visual comfort under the conditions of this experiment. These data also suggest a favorable cost-benefit ratio for employers who provide computer-specific eyewear to their employees.
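
    The reported cost-benefit figure follows from simple arithmetic on the numbers quoted above (illustrative check only):

```python
salary = 25_000            # annual salary (USD)
productivity_gain = 0.025  # conservative 2.5% estimate from the study
eyewear_cost = 268         # total cost of the visual correction (USD)

annual_benefit = salary * productivity_gain       # 625 USD per year
print(round(annual_benefit / eyewear_cost, 1))    # ~2.3, the reported ratio
```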

  5. Preattentive visual search and perceptual grouping in schizophrenia.

    PubMed

    Carr, V J; Dewis, S A; Lewin, T J

    1998-06-15

    To help determine whether patients with schizophrenia show deficits in the stimulus-based aspects of preattentive processing, we undertook a series of experiments within the framework of feature integration theory. Thirty subjects with a DSM-III-R diagnosis of schizophrenia and 30 age-, gender-, and education-matched normal control subjects completed two computerized experimental tasks, a visual search task assessing parallel and serial information processing (Experiment 1) and a task which examined the effects of perceptual grouping on visual search strategies (Experiment 2). We also assessed current symptomatology and its relationship to task performance. While the schizophrenia subjects had longer reaction times in Experiment 1, their overall pattern of performance across both experimental tasks was similar to that of the control subjects, and generally unrelated to current symptomatology. Predictions from feature integration theory about the impact of varying display size (Experiment 1) and number of perceptual groups (Experiment 2) on the detection of feature and conjunction targets were strongly supported. This study revealed no firm evidence that schizophrenia is associated with a preattentive abnormality in visual search using stimuli that differ on the basis of physical characteristics. While subject and task characteristics may partially account for differences between this and previous studies, it is more likely that preattentive processing abnormalities in schizophrenia may occur only under conditions involving selected 'top-down' factors such as context and meaning.

  6. The wisdom of crowds for visual search

    PubMed Central

    Juni, Mordechai Z.; Eckstein, Miguel P.

    2017-01-01

    Decision-making accuracy typically increases through collective integration of people’s judgments into group decisions, a phenomenon known as the wisdom of crowds. For simple perceptual laboratory tasks, classic signal detection theory specifies the upper limit for collective integration benefits obtained by weighted averaging of people’s confidences, and simple majority voting can often approximate that limit. Life-critical perceptual decisions often involve searching large image data (e.g., medical, security, and aerial imagery), but the expected benefits and merits of using different pooling algorithms are unknown for such tasks. Here, we show that expected pooling benefits are significantly greater for visual search than for single-location perceptual tasks and the prediction given by classic signal detection theory. In addition, we show that simple majority voting obtains inferior accuracy benefits for visual search relative to averaging and weighted averaging of observers’ confidences. Analysis of gaze behavior across observers suggests that the greater collective integration benefits for visual search arise from an interaction between the foveated properties of the human visual system (high foveal acuity and low peripheral acuity) and observers’ nonexhaustive search patterns, and can be predicted by an extended signal detection theory framework with trial to trial sampling from a varying mixture of high and low target detectabilities across observers (SDT-MIX). These findings advance our theoretical understanding of how to predict and enhance the wisdom of crowds for real world search tasks and could apply more generally to any decision-making task for which the minority of group members with high expertise varies from decision to decision. PMID:28490500
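
    A minimal sketch of the pooling rules contrasted here, simple majority voting versus (weighted) averaging of observers' confidences; the confidence matrix, weights, and 0.5 decision threshold are illustrative assumptions.

```python
import numpy as np

# confidences[i, j]: observer i's confidence (0-1) that a target is present on trial j.
confidences = np.array([[0.9, 0.2, 0.6],
                        [0.7, 0.4, 0.3],
                        [0.8, 0.1, 0.7]])
weights = np.array([1.0, 0.5, 1.5])   # e.g., proportional to each observer's accuracy

majority_vote   = ((confidences > 0.5).mean(axis=0) > 0.5).astype(int)
mean_confidence = (confidences.mean(axis=0) > 0.5).astype(int)
weighted_mean   = ((weights @ confidences) / weights.sum() > 0.5).astype(int)

print(majority_vote, mean_confidence, weighted_mean)   # group decision per trial
```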

  7. Effect of task set-modulating attentional capture depends on the distractor cost in visual search: evidence from N2pc.

    PubMed

    Zhao, Dandan; Liang, Shengnan; Jin, Zhenlan; Li, Ling

    2014-07-09

    Previous studies have confirmed that attention can be modulated by the current task set while being involuntarily captured by salient items. However, little is known about which factors the modulation of attentional capture depends on when the same stimuli are presented under different task sets. In the present study, participants performed two visual search tasks on the same search arrays, with target and distractor settings varied (color singleton as target and onset singleton as distractor, termed the color task, and vice versa). Ipsilateral and contralateral color distractors resulted in two different relative saliences in the two tasks, respectively. Both reaction times (RTs) and N2-posterior-contralateral (N2pc) results showed that there was no difference between ipsilateral and contralateral color distractors in the onset task. However, both RTs and the latency of the N2pc showed a delay for the ipsilateral onset distractor compared with the contralateral onset distractor. Moreover, the N2pc observed under the contralateral distractor condition in the color task was reversed, and its amplitude was attenuated. On the basis of these results, we proposed a parameter called the distractor cost (DC), computed by subtracting RTs under the contralateral distractor condition from those under the ipsilateral condition. The results suggest that an enhanced DC might be related to the modification of the N2pc in searching for the color target. Taken together, these findings provide evidence that the effect of task set-modulating attentional capture in visual search is related to the DC.
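
    The distractor cost (DC) proposed here is simply a difference of mean reaction times between the two distractor conditions; a one-line illustration with made-up values:

```python
# Mean RTs (ms) under the two distractor conditions -- illustrative values only.
rt_ipsilateral, rt_contralateral = 612.0, 586.0
distractor_cost = rt_ipsilateral - rt_contralateral   # DC = 26 ms
print(distractor_cost)
```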

  8. Validity and reliability of an online visual-spatial working memory task for self-reliant administration in school-aged children.

    PubMed

    Van de Weijer-Bergsma, Eva; Kroesbergen, Evelyn H; Prast, Emilie J; Van Luit, Johannes E H

    2015-09-01

    Working memory is an important predictor of academic performance, and of math performance in particular. Most working memory tasks depend on one-to-one administration by a testing assistant, which makes the use of such tasks in large-scale studies time-consuming and costly. Therefore, an online, self-reliant visual-spatial working memory task (the Lion game) was developed for primary school children (6-12 years of age). In two studies, the validity and reliability of the Lion game were investigated. The results from Study 1 (n = 442) indicated satisfactory six-week test-retest reliability, excellent internal consistency, and good concurrent and predictive validity. The results from Study 2 (n = 5,059) confirmed the results on the internal consistency and predictive validity of the Lion game. In addition, multilevel analysis revealed that classroom membership influenced Lion game scores. We concluded that the Lion game is a valid and reliable instrument for the online computerized and self-reliant measurement of visual-spatial working memory (i.e., updating).

  9. Attentional demands of movement observation as tested by a dual task approach.

    PubMed

    Saucedo Marquez, Cinthia M; Ceux, Tanja; Wenderoth, Nicole

    2011-01-01

    Movement observation (MO) has been shown to activate the motor cortex of the observer, as indicated by an increase of corticomotor excitability for muscles involved in the observed actions. Moreover, behavioral work has strongly suggested that this process occurs in a near-automatic manner. Here we further tested this proposal by applying transcranial magnetic stimulation (TMS) while subjects observed how an actor lifted objects of different weights as a single or a dual task. The secondary task was either an auditory discrimination task (experiment 1) or a visual discrimination task (experiment 2). In experiment 1, we found that corticomotor excitability reflected the force requirements indicated in the observed movies (i.e. higher responses when the actor had to apply higher forces). Interestingly, this effect was found irrespective of whether MO was performed as a single or a dual task. By contrast, no such systematic modulations of corticomotor excitability were observed in experiment 2 when visual distracters were present. We conclude that interference effects might arise when MO is performed while competing visual stimuli are present. However, when a secondary task is situated in a different modality, neural responses are in line with the notion that the observer's motor system responds in a near-automatic manner. This suggests that MO is a task with very low cognitive demands, which might make it a valuable supplement for rehabilitation training, particularly in the acute phase after the incident or in patients suffering from attention deficits. However, it is important to keep in mind that visual distracters might interfere with the neural response in M1.

  10. Low target prevalence is a stubborn source of errors in visual search tasks

    PubMed Central

    Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

    2009-01-01

    In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1–2%) than at high prevalence (50%). Unfortunately, low prevalence is characteristic of important search tasks like airport security and medical screening where miss errors are dangerous. A series of experiments show this prevalence effect is very robust. In signal detection terms, the prevalence effect can be explained as a criterion shift and not a change in sensitivity. Several efforts to induce observers to adopt a better criterion fail. However, a regime of brief retraining periods with high prevalence and full feedback allows observers to hold a good criterion during periods of low prevalence with no feedback. PMID:17999575
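
    The criterion-shift argument rests on the standard equal-variance signal detection decomposition: sensitivity (d') and criterion (c) are estimated from hit and false-alarm rates, so a prevalence-driven rise in misses with unchanged sensitivity appears as a larger c rather than a smaller d'. The rates below are made up for illustration.

```python
from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    """Equal-variance SDT estimates: d' = z(H) - z(F), c = -(z(H) + z(F)) / 2."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_f, -0.5 * (z_h + z_f)

# Hypothetical performance of one observer at high vs. low target prevalence.
print(dprime_and_criterion(hit_rate=0.90, fa_rate=0.10))  # high prevalence: d' ~2.56, c ~0
print(dprime_and_criterion(hit_rate=0.70, fa_rate=0.02))  # low prevalence: similar d', c ~0.76
```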

  11. Prefrontal electroencephalographic activity during the working memory processes involved in a sexually motivated task in male rats.

    PubMed

    Hernández-González, Marisela; Almanza-Sepúlveda, Mayra Linné; Olvera-Cortés, María Esther; Gutiérrez-Guzmán, Blanca Erika; Guevara, Miguel Angel

    2012-08-01

    The prefrontal cortex is involved in working memory functions, and several studies using food or drink as rewards have demonstrated that the rat is capable of performing tasks that involve working memory. Sexual activity is another highly-rewarding, motivated behaviour that has proven to be an efficient incentive in classical operant tasks. The objective of this study was to determine whether the functional activity of the medial prefrontal cortex (mPFC) changes in relation to the working memory processes involved in a sexually motivated task performed in male rats. Thus, male Wistar rats implanted in the mPFC were subjected to a nonmatching-to-sample task in a T-maze using sexual interaction as a reinforcer during a 4-day training period. On the basis of their performance during training, the rats were classified as 'good-learners' or 'bad-learners'. Only the good-learner rats showed an increase in the absolute power of the 8-13 Hz band during both the sample and test runs; a finding that could be related to learning of the working memory elements entailed in the task. During the maintenance phase only (i.e., once the rule had been learned well), the good-learner rats also showed an increased correlation of the 8-13 Hz band during the sample run, indicating that a high degree of coupling between the prefrontal cortices is necessary for the processing required to allow the rats to make correct decisions in the maintenance phase. Taken together, these data show that mPFC activity changes in relation to the working memory processes involved in a sexually motivated task in male rats.

  12. Does the walking task matter? Influence of different walking conditions on dual-task performances in young and older persons.

    PubMed

    Beurskens, Rainer; Bock, Otmar

    2013-12-01

    Previous literature suggests that age-related deficits of dual-task walking are particularly pronounced with second tasks that require continuous visual processing. Here we evaluate whether the difficulty of the walking task matters as well. To this end, participants were asked to walk along a straight pathway of 20m length in four different walking conditions: (a) wide path and preferred pace; (b) narrow path and preferred pace, (c) wide path and fast pace, (d) obstacled wide path and preferred pace. Each condition was performed concurrently with a task requiring visual processing or fine motor control, and all tasks were also performed alone which allowed us to calculate the dual-task costs (DTC). Results showed that the age-related increase of DTC is substantially larger with the visually demanding than with the motor-demanding task, more so when walking on a narrow or obstacled path. We attribute these observations to the fact that visual scanning of the environment becomes more crucial when walking in difficult terrains: the higher visual demand of those conditions accentuates the age-related deficits in coordinating them with a visual non-walking task. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
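
    Dual-task costs (DTC) are commonly expressed as the percentage decrement of dual-task relative to single-task performance; the abstract does not spell out the formula, so the sketch below uses that common formulation with made-up walking speeds.

```python
def dual_task_cost(single, dual):
    """Percent decrement under dual-task conditions, for a higher-is-better measure."""
    return 100.0 * (single - dual) / single

print(dual_task_cost(single=1.40, dual=1.10))   # walking speed in m/s -> ~21.4% cost
```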

  13. Handwriting generates variable visual input to facilitate symbol learning

    PubMed Central

    Li, Julia X.; James, Karin H.

    2015-01-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing two hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then change neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across six different types of learning conditions: three involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and three involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception of both variable and similar forms. Comparisons across the six conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions in which similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output, supporting the notion of developmental change through brain-body-environment interactions. PMID:26726913

  14. The visual attention span deficit in dyslexia is visual and not verbal.

    PubMed

    Lobier, Muriel; Zoubrinetzky, Rachel; Valdois, Sylviane

    2012-06-01

    The visual attention (VA) span deficit hypothesis of dyslexia posits that letter string deficits are a consequence of impaired visual processing. Alternatively, some have interpreted this deficit as resulting from a visual-to-phonology code mapping impairment. This study aims to disambiguate between the two interpretations by investigating performance in a non-verbal character string visual categorization task with verbal and non-verbal stimuli. Results show that VA span ability predicts performance for the non-verbal visual processing task in normal reading children. Furthermore, VA span impaired dyslexic children are also impaired for the categorization task independently of stimuli type. This supports the hypothesis that the underlying impairment responsible for the VA span deficit is visual, not verbal. Copyright © 2011 Elsevier Srl. All rights reserved.

  15. No Evidence for a Saccadic Range Effect for Visually Guided and Memory-Guided Saccades in Simple Saccade-Targeting Tasks

    PubMed Central

    Vitu, Françoise; Engbert, Ralf; Kliegl, Reinhold

    2016-01-01

    Saccades to single targets in peripheral vision are typically characterized by an undershoot bias. Putting this bias to a test, Kapoula [1] used a paradigm in which observers were presented with two different sets of target eccentricities that partially overlapped each other. Her data were suggestive of a saccadic range effect (SRE): There was a tendency for saccades to overshoot close targets and undershoot far targets in a block, suggesting that there was a response bias towards the center of eccentricities in a given block. Our Experiment 1 was a close replication of the original study by Kapoula [1]. In addition, we tested whether the SRE is sensitive to top-down requirements associated with the task, and we also varied the target presentation duration. In Experiments 1 and 2, we expected to replicate the SRE for a visual discrimination task. The simple visual saccade-targeting task in Experiment 3, entailing minimal top-down influence, was expected to elicit a weaker SRE. Voluntary saccades to remembered target locations in Experiment 3 were expected to elicit the strongest SRE. Contrary to these predictions, we did not observe a SRE in any of the tasks. Our findings complement the results reported by Gillen et al. [2] who failed to find the effect in a saccade-targeting task with a very brief target presentation. Together, these results suggest that unlike arm movements, saccadic eye movements are not biased towards making saccades of a constant, optimal amplitude for the task. PMID:27658191

  16. What Top-Down Task Sets Do for Us: An ERP Study on the Benefits of Advance Preparation in Visual Search

    ERIC Educational Resources Information Center

    Eimer, Martin; Kiss, Monika; Nicholas, Susan

    2011-01-01

    When target-defining features are specified in advance, attentional target selection in visual search is controlled by preparatory top-down task sets. We used ERP measures to study voluntary target selection in the absence of such feature-specific task sets, and to compare it to selection that is guided by advance knowledge about target features.…

  17. Linear and Non-Linear Visual Feature Learning in Rat and Humans

    PubMed Central

    Bossens, Christophe; Op de Beeck, Hans P.

    2016-01-01

    The visual system processes visual input in a hierarchical manner in order to extract relevant features that can be used in tasks such as invariant object recognition. Although typically investigated in primates, recent work has shown that rats can be trained in a variety of visual object and shape recognition tasks. These studies did not pinpoint the complexity of the features used by these animals. Many tasks might be solved by using a combination of relatively simple features which tend to be correlated. Alternatively, rats might extract complex features or feature combinations which are nonlinear with respect to those simple features. In the present study, we address this question by starting from a small stimulus set for which one stimulus-response mapping involves a simple linear feature to solve the task while another mapping needs a well-defined nonlinear combination of simpler features related to shape symmetry. We verified computationally that the nonlinear task cannot be trivially solved by a simple V1-model. We show how rats are able to solve the linear feature task but are unable to acquire the nonlinear feature. In contrast, humans are able to use the nonlinear feature and are even faster in uncovering this solution as compared to the linear feature. The implications for the computational capabilities of the rat visual system are discussed. PMID:28066201
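
    The linear/nonlinear distinction drawn here can be illustrated with a toy check of linear separability: a linear classifier learns a mapping carried by a single simple feature but fails on an XOR-like combination of the same features. The binary features and labels below are illustrative stand-ins, not the study's shape stimuli.

```python
import numpy as np
from sklearn.linear_model import Perceptron

# Two binary "simple" features per stimulus.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

linear_labels    = X[:, 0]             # solvable from one feature alone
nonlinear_labels = X[:, 0] ^ X[:, 1]   # XOR-like combination, not linearly separable

for name, y in [("linear", linear_labels), ("nonlinear", nonlinear_labels)]:
    clf = Perceptron(max_iter=1000, tol=None).fit(X, y)
    print(name, clf.score(X, y))       # 1.0 for the linear task, below 1.0 for XOR
```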

  18. Working memory in wayfinding-a dual task experiment in a virtual city.

    PubMed

    Meilinger, Tobias; Knauff, Markus; Bülthoff, Heinrich H

    2008-06-01

    This study examines the working memory systems involved in human wayfinding. In the learning phase, 24 participants learned two routes in a novel photorealistic virtual environment displayed on a 220° screen while they were disrupted by a visual, a spatial, a verbal, or-in a control group-no secondary task. In the following wayfinding phase, the participants had to find and to "virtually walk" the two routes again. During this wayfinding phase, a number of dependent measures were recorded. This research shows that encoding wayfinding knowledge interfered with the verbal and with the spatial secondary task. These interferences were even stronger than the interference of wayfinding knowledge with the visual secondary task. These findings are consistent with a dual-coding approach of wayfinding knowledge. 2008 Cognitive Science Society, Inc.

  19. Dexterity: A MATLAB-based analysis software suite for processing and visualizing data from tasks that measure arm or forelimb function.

    PubMed

    Butensky, Samuel D; Sloan, Andrew P; Meyers, Eric; Carmel, Jason B

    2017-07-15

    Hand function is critical for independence, and neurological injury often impairs dexterity. To measure hand function in people or forelimb function in animals, sensors are employed to quantify manipulation. These sensors make assessment easier and more quantitative and allow automation of these tasks. While automated tasks improve objectivity and throughput, they also produce large amounts of data that can be burdensome to analyze. We created software called Dexterity that simplifies data analysis of automated reaching tasks. Dexterity is MATLAB software that enables quick analysis of data from forelimb tasks. Through a graphical user interface, files are loaded and data are identified and analyzed. These data can be annotated or graphed directly. Analysis is saved, and the graph and corresponding data can be exported. For additional analysis, Dexterity provides access to custom scripts created by other users. To determine the utility of Dexterity, we performed a study to evaluate the effects of task difficulty on the degree of impairment after injury. Dexterity analyzed two months of data and allowed new users to annotate the experiment, visualize results, and save and export data easily. Previous analysis of tasks was performed with custom data analysis, requiring expertise with analysis software. Dexterity made the tools required to analyze, visualize and annotate data easy to use by investigators without data science experience. Dexterity increases accessibility to automated tasks that measure dexterity by making analysis of large data intuitive, robust, and efficient. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Brain correlates of automatic visual change detection.

    PubMed

    Cléry, H; Andersson, F; Fonlupt, P; Gomot, M

    2013-07-15

    A number of studies support the presence of visual automatic detection of change, but little is known about the brain generators involved in such processing and about the modulation of brain activity according to the salience of the stimulus. The study presented here was designed to locate the brain activity elicited by unattended visual deviant and novel stimuli using fMRI. Seventeen adult participants were presented with a passive visual oddball sequence while performing a concurrent visual task. Variations in BOLD signal were observed in the modality-specific sensory cortex, but also in non-specific areas involved in preattentional processing of changing events. A degree-of-deviance effect was observed, since novel stimuli elicited more activity in the sensory occipital regions and at the medial frontal site than small changes. These findings could be compared to those obtained in the auditory modality and might suggest a "general" change detection process operating in several sensory modalities. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Attentional load and sensory competition in human vision: modulation of fMRI responses by load at fixation during task-irrelevant stimulation in the peripheral visual field.

    PubMed

    Schwartz, Sophie; Vuilleumier, Patrik; Hutton, Chloe; Maravita, Angelo; Dolan, Raymond J; Driver, Jon

    2005-06-01

    Perceptual suppression of distractors may depend on both endogenous and exogenous factors, such as attentional load of the current task and sensory competition among simultaneous stimuli, respectively. We used functional magnetic resonance imaging (fMRI) to compare these two types of attentional effects and examine how they may interact in the human brain. We varied the attentional load of a visual monitoring task performed on a rapid stream at central fixation without altering the central stimuli themselves, while measuring the impact on fMRI responses to task-irrelevant peripheral checkerboards presented either unilaterally or bilaterally. Activations in visual cortex for irrelevant peripheral stimulation decreased with increasing attentional load at fixation. This relative decrease was present even in V1, but became larger for successive visual areas through to V4. Decreases in activation for contralateral peripheral checkerboards due to higher central load were more pronounced within retinotopic cortex corresponding to 'inner' peripheral locations relatively near the central targets than for more eccentric 'outer' locations, demonstrating a predominant suppression of nearby surround rather than strict 'tunnel vision' during higher task load at central fixation. Contralateral activations for peripheral stimulation in one hemifield were reduced by competition with concurrent stimulation in the other hemifield only in inferior parietal cortex, not in retinotopic areas of occipital visual cortex. In addition, central attentional load interacted with competition due to bilateral versus unilateral peripheral stimuli specifically in posterior parietal and fusiform regions. These results reveal that task-dependent attentional load, and interhemifield stimulus-competition, can produce distinct influences on the neural responses to peripheral visual stimuli within the human visual system. These distinct mechanisms in selective visual processing may be integrated within

  2. CHARACTERIZATION OF THE EFFECTS OF INHALED PERCHLOROETHYLENE ON SUSTAINED ATTENTION IN RATS PERFORMING A VISUAL SIGNAL DETECTION TASK

    EPA Science Inventory

    The aliphatic hydrocarbon perchloroethyelene (PCE) has been associated with neurobehavioral dysfunction including reduced attention in humans. The current study sought to assess the effects of inhaled PCE on sustained attention in rats performing a visual signal detection task (S...

  3. Eye Movement Analysis and Cognitive Assessment. The Use of Comparative Visual Search Tasks in a Non-immersive VR Application.

    PubMed

    Rosa, Pedro J; Gamito, Pedro; Oliveira, Jorge; Morais, Diogo; Pavlovic, Matthew; Smyth, Olivia; Maia, Inês; Gomes, Tiago

    2017-03-23

    An adequate behavioral response depends on attentional and mnesic processes. When these basic cognitive functions are impaired, the use of non-immersive Virtual Reality Applications (VRAs) can be a reliable technique for assessing the level of impairment. However, most non-immersive VRAs use indirect measures to make inferences about visual attention and mnesic processes (e.g., time to task completion, error rate). The aim was to examine whether eye movement analysis through eye tracking (ET) can be a reliable method to probe more effectively where and how attention is deployed, and how it is linked with visual working memory, during comparative visual search tasks (CVSTs) in non-immersive VRAs. The eye movements of 50 healthy participants were continuously recorded while CVSTs, selected from the set of cognitive tasks in the Systemic Lisbon Battery (SLB), a VRA designed to assess cognitive impairments, were randomly presented. The total fixation duration, the number of visits to the areas of interest and to the interstimulus space, and the total execution time differed significantly as a function of Mini Mental State Examination (MMSE) scores. The present study demonstrates that CVSTs in the SLB, when combined with ET, can be a reliable and unobtrusive method for assessing cognitive abilities in healthy individuals, opening it to potential use in clinical samples.

  4. Perceptual learning improves contrast sensitivity, visual acuity, and foveal crowding in amblyopia.

    PubMed

    Barollo, Michele; Contemori, Giulio; Battaglini, Luca; Pavan, Andrea; Casco, Clara

    2017-01-01

    Amblyopic observers present abnormal spatial interactions between a low-contrast sinusoidal target and high-contrast collinear flankers. It has been demonstrated that perceptual learning (PL) can modulate these low-level lateral interactions, resulting in improved visual acuity and contrast sensitivity. We measured the extent and duration of generalization effects to various spatial tasks (i.e., visual acuity, Vernier acuity, and foveal crowding) through PL on the target's contrast detection. Amblyopic observers were trained on a contrast-detection task for a central target (i.e., a Gabor patch) flanked above and below by two high-contrast Gabor patches. The pre- and post-learning tasks included lateral interactions at different target-to-flankers separations (i.e., 2, 3, 4, 8λ) and included a range of spatial frequencies and stimulus durations as well as visual acuity, Vernier acuity, contrast-sensitivity function, and foveal crowding. The results showed that perceptual training reduced the target's contrast-detection thresholds more for the longest target-to-flanker separation (i.e., 8λ). We also found generalization of PL to different stimuli and tasks: contrast sensitivity for both trained and untrained spatial frequencies, visual acuity for Sloan letters, and foveal crowding, and partially for Vernier acuity. Follow-ups after 5-7 months showed not only complete maintenance of PL effects on visual acuity and contrast sensitivity function but also further improvement in these tasks. These results suggest that PL improves facilitatory lateral interactions in amblyopic observers, which usually extend over larger separations than in typical foveal vision. The improvement in these basic visual spatial operations leads to a more efficient capability of performing spatial tasks involving high levels of visual processing, possibly due to the refinement of bottom-up and top-down networks of visual areas.

  5. Differentiation of involved and uninvolved psoriatic skin from healthy skin using noninvasive visual, colorimeter and evaporimeter methods.

    PubMed

    Pershing, L K; Bakhtian, S; Wright, E D; Rallis, T M

    1995-08-01

    Uninvolved skin of psoriasis may not be entirely normal. The objective was to characterize healthy skin, uninvolved psoriatic skin, and lesional skin by biophysical methods. Involved and uninvolved psoriatic skin and age- and gender-matched healthy skin were measured objectively with a colorimeter and evaporimeter, and subjectively with visual assessment, in 14 subjects. Visual assessment of erythema (E), scaling (S) and induration (I), as well as the target lesion score, at the involved psoriatic skin sites was significantly elevated (p < 0.05) above uninvolved psoriatic or healthy skin sites. No difference between uninvolved psoriatic and healthy skin was measured visually. Transepidermal water loss at involved psoriatic skin > uninvolved psoriatic skin > healthy skin (p < 0.05). Objective assessment of skin color in three color scales, L*, a*, and b*, differentiated involved and uninvolved psoriatic skin from healthy skin sites. Involved psoriatic skin demonstrated higher (p < 0.01) a* scale values and lower (p < 0.01) L* and b* scale values than uninvolved psoriatic skin. Further, colorimeter L* and a* scale values at uninvolved psoriatic skin sites were lower and higher (p < 0.05), respectively, than at healthy skin. The individual chromameter parameters (L*, a*, b*) correlated well with the visual parameters (E, S and I). The composite colorimeter descriptor (L* × b*)/a* significantly differentiated healthy skin from both involved and uninvolved psoriatic skin. These collective data highlight that even visually normal-appearing uninvolved psoriatic skin is compromised compared with healthy skin. The objective, noninvasive, differential capabilities of the colorimeter and evaporimeter will aid in the mechanistic quantification of new psoriatic drug therapies and, in conjunction with biochemical studies, add to understanding of the multifactorial pathogenesis of psoriasis.

  6. Validating Visual Cues In Flight Simulator Visual Displays

    NASA Astrophysics Data System (ADS)

    Aronson, Moses

    1987-09-01

    Currently, evaluation of visual simulators is performed either by pilot opinion questionnaires or by comparison of aircraft terminal performance. The approach here is to compare pilot performance in the flight simulator with a visual display to performance on the same visual task in the aircraft, as an indication that the visual cues are identical. The A-7 Night Carrier Landing task was selected. Performance measures with high predictive power for pilot performance were used to compare two samples of existing pilot performance data, to test whether the visual cues evoked the same performance. The performance of four pilots making 491 night landing approaches in an A-7 prototype part-task trainer was compared with the performance of three pilots performing 27 A-7E carrier landing qualification approaches on the CV-60 aircraft carrier. The results show that the pilots' performances were similar, leading to the conclusion that the visual cues provided in the simulator were identical to those provided in the real-world situation. Differences between the flight simulator's flight characteristics and those of the aircraft had less of an effect than the pilots' individual performances. The measurement parameters used in the comparison can be used to validate the visual display's adequacy for training.

  7. Supporting interruption management and multimodal interface design: three meta-analyses of task performance as a function of interrupting task modality.

    PubMed

    Lu, Sara A; Wickens, Christopher D; Prinet, Julie C; Hutchins, Shaun D; Sarter, Nadine; Sebok, Angelia

    2013-08-01

    The aim of this study was to integrate empirical data showing the effects of interrupting task modality on the performance of an ongoing visual-manual task and the interrupting task itself. The goal is to support interruption management and the design of multimodal interfaces. Multimodal interfaces have been proposed as a promising means to support interruption management. To ensure the effectiveness of this approach, their design needs to be based on an analysis of empirical data concerning the effectiveness of individual and redundant channels of information presentation. Three meta-analyses were conducted to contrast performance on an ongoing visual task and interrupting tasks as a function of interrupting task modality (auditory vs. tactile, auditory vs. visual, and single modality vs. redundant auditory-visual). In total, 68 studies were included and six moderator variables were considered. The main findings from the meta-analyses are that response times are faster for tactile interrupting tasks in case of low-urgency messages. Accuracy is higher with tactile interrupting tasks for low-complexity signals but higher with auditory interrupting tasks for high-complexity signals. Redundant auditory-visual combinations are preferable for communication tasks during high workload and with a small visual angle of separation. The three meta-analyses contribute to the knowledge base in multimodal information processing and design. They highlight the importance of moderator variables in predicting the effects of interruption task modality on ongoing and interrupting task performance. The findings from this research will help inform the design of multimodal interfaces in data-rich, event-driven domains.
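
    As background on the pooling step in a meta-analysis like this one, a fixed-effect (inverse-variance weighted) mean effect size can be computed as below; the per-study effect sizes and variances are invented for illustration and are not values from the 68 included studies.

```python
import numpy as np

# Hypothetical per-study effect sizes (e.g., standardized mean differences) and variances.
effects   = np.array([0.42, 0.15, 0.60, 0.33])
variances = np.array([0.04, 0.10, 0.02, 0.06])

weights   = 1.0 / variances                       # inverse-variance weights
pooled    = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(round(pooled, 3), round(pooled_se, 3))      # pooled estimate and its standard error
```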

  8. Scan Patterns Predict Sentence Production in the Cross-Modal Processing of Visual Scenes

    ERIC Educational Resources Information Center

    Coco, Moreno I.; Keller, Frank

    2012-01-01

    Most everyday tasks involve multiple modalities, which raises the question of how the processing of these modalities is coordinated by the cognitive system. In this paper, we focus on the coordination of visual attention and linguistic processing during speaking. Previous research has shown that objects in a visual scene are fixated before they…

  9. The Effect of Visual Information on the Manual Approach and Landing

    NASA Technical Reports Server (NTRS)

    Wewerinke, P. H.

    1982-01-01

    The effect of visual information, in combination with basic display information, on approach performance was investigated. A pre-experimental model analysis was performed in terms of the optimal control model. The resulting aircraft approach performance predictions were compared with the results of a moving-base simulator program. The results illustrate that the model provides a meaningful description of the visual (scene) perception process involved in the complex (multi-variable, time-varying) manual approach task, with a useful predictive capability. The theoretical framework was shown to allow a straightforward investigation of the complex interaction of a variety of task variables.

  10. The role of visual imagery in the retention of information from sentences.

    PubMed

    Drose, G S; Allen, G L

    1994-01-01

    We conducted two experiments to evaluate a multiple-code model for sentence memory that posits both propositional and visual representational systems. Both experiments involved recognition memory. The results of Experiment 1 indicated that subjects' recognition memory for concrete sentences was superior to their recognition memory for abstract sentences. Instructions to use visual imagery to enhance recognition performance yielded no effects. Experiment 2 tested the prediction that interference by a visual task would differentially affect recognition memory for concrete sentences. Results showed the interference task to have had a detrimental effect on recognition memory for both concrete and abstract sentences. Overall, the evidence provided partial support for both a multiple-code model and a semantic integration model of sentence memory.

  11. Behavioral evidence for inter-hemispheric cooperation during a lexical decision task: a divided visual field experiment.

    PubMed

    Perrone-Bertolotti, Marcela; Lemonnier, Sophie; Baciu, Monica

    2013-01-01

    HIGHLIGHTS: (1) The redundant bilateral visual presentation of verbal stimuli decreases asymmetry and increases the cooperation between the two hemispheres. (2) The increased cooperation between the hemispheres is related to semantic information during lexical processing. (3) The inter-hemispheric interaction is represented by both inhibition and cooperation. This study explores inter-hemispheric interaction (IHI) during a lexical decision task by using a behavioral approach, the bilateral presentation of stimuli within a divided visual field experiment. Previous studies have shown that compared to unilateral presentation, the bilateral redundant (BR) presentation decreases the inter-hemispheric asymmetry and facilitates the cooperation between hemispheres. However, it is still poorly understood which type of information facilitates this cooperation. In the present study, verbal stimuli were presented unilaterally (left or right visual hemi-field successively) and bilaterally (left and right visual hemi-field simultaneously). Moreover, during the bilateral presentation of stimuli, we manipulated the relationship between target and distractors in order to specify the type of information which modulates the IHI. Thus, three types of information were manipulated: perceptual, semantic, and decisional, respectively named pre-lexical, lexical and post-lexical processing. Our results revealed left hemisphere (LH) lateralization during the lexical decision task. In terms of inter-hemisphere interaction, the perceptual and decision-making information increased the inter-hemispheric asymmetry, suggesting the inhibition of one hemisphere upon the other. In contrast, semantic information decreased the inter-hemispheric asymmetry, suggesting cooperation between the hemispheres. We discussed our results according to current models of IHI and concluded that cerebral hemispheres interact and communicate according to various excitatory and inhibitory mechanisms, all which depend on specific

  13. Alterations in task-induced activity and resting-state fluctuations in visual and DMN areas revealed in long-term meditators.

    PubMed

    Berkovich-Ohana, Aviva; Harel, Michal; Hahamy, Avital; Arieli, Amos; Malach, Rafael

    2016-07-15

    Recently we proposed that the information contained in spontaneously emerging (resting-state) fluctuations may reflect individually unique neuro-cognitive traits. One prediction of this conjecture, termed the "spontaneous trait reactivation" (STR) hypothesis, is that resting-state activity patterns could be diagnostic of unique personalities, talents and life-styles of individuals. Long-term meditators could provide a unique experimental group to test this hypothesis. Using fMRI we found that, during resting-state, the amplitude of spontaneous fluctuations in long-term mindfulness meditation (MM) practitioners was enhanced in the visual cortex and significantly reduced in the DMN compared to naïve controls. Importantly, during a visual recognition memory task, the MM group showed heightened visual cortex responsivity, concomitant with weaker negative responses in Default Mode Network (DMN) areas. This effect was also reflected in the behavioral performance, where MM practitioners performed significantly faster than the control group. Thus, our results uncover opposite changes in the visual and default mode systems in long-term meditators which are revealed during both rest and task. The results support the STR hypothesis and extend it to the domain of local changes in the magnitude of the spontaneous fluctuations. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Visual enhancements in pick-and-place tasks: Human operators controlling a simulated cylindrical manipulator

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Tendick, Frank; Stark, Lawrence

    1989-01-01

    A teleoperation simulator was constructed with vector display system, joysticks, and a simulated cylindrical manipulator, in order to quantitatively evaluate various display conditions. The first of two experiments conducted investigated the effects of perspective parameter variations on human operators' pick-and-place performance, using a monoscopic perspective display. The second experiment involved visual enhancements of the monoscopic perspective display, by adding a grid and reference lines, by comparison with visual enhancements of a stereoscopic display; results indicate that stereoscopy generally permits superior pick-and-place performance, but that monoscopy nevertheless allows equivalent performance when defined with appropriate perspective parameter values and adequate visual enhancements.

  15. Top-down modulation from inferior frontal junction to FEFs and intraparietal sulcus during short-term memory for visual features.

    PubMed

    Sneve, Markus H; Magnussen, Svein; Alnæs, Dag; Endestad, Tor; D'Esposito, Mark

    2013-11-01

    Visual STM of simple features is achieved through interactions between retinotopic visual cortex and a set of frontal and parietal regions. In the present fMRI study, we investigated effective connectivity between central nodes in this network during the different task epochs of a modified delayed orientation discrimination task. Our univariate analyses demonstrate that the inferior frontal junction (IFJ) is preferentially involved in memory encoding, whereas activity in the putative FEFs and anterior intraparietal sulcus (aIPS) remains elevated throughout periods of memory maintenance. We have earlier reported, using the same task, that areas in visual cortex sustain information about task-relevant stimulus properties during delay intervals [Sneve, M. H., Alnæs, D., Endestad, T., Greenlee, M. W., & Magnussen, S. Visual short-term memory: Activity supporting encoding and maintenance in retinotopic visual cortex. Neuroimage, 63, 166-178, 2012]. To elucidate the temporal dynamics of the IFJ-FEF-aIPS-visual cortex network during memory operations, we estimated Granger causality effects between these regions with fMRI data representing memory encoding/maintenance as well as during memory retrieval. We also investigated a set of control conditions involving active processing of stimuli not associated with a memory task and passive viewing. In line with the developing understanding of IFJ as a region critical for control processes with a possible initiating role in visual STM operations, we observed influence from IFJ to FEF and aIPS during memory encoding. Furthermore, FEF predicted activity in a set of higher-order visual areas during memory retrieval, a finding consistent with its suggested role in top-down biasing of sensory cortex.
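
    A minimal sketch of the kind of directional (Granger-causal) test referred to here, using statsmodels on two synthetic time series; the region names, lag, and coupling are assumptions, and real fMRI connectivity analyses involve considerably more preprocessing and modeling.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 200
ifj = rng.normal(size=n)                  # toy "IFJ" signal
fef = np.zeros(n)                         # toy "FEF" signal driven by IFJ at lag 1
for t in range(1, n):
    fef[t] = 0.6 * ifj[t - 1] + 0.2 * fef[t - 1] + rng.normal(scale=0.5)

# Tests whether the series in the second column (IFJ) Granger-causes the first (FEF).
data = np.column_stack([fef, ifj])
results = grangercausalitytests(data, maxlag=1)
```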

  16. Time course influences transfer of visual perceptual learning across spatial location.

    PubMed

    Larcombe, S J; Kennard, C; Bridge, H

    2017-06-01

    Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Spatiotemporal dynamics of brain activity during the transition from visually guided to memory-guided force control

    PubMed Central

    Poon, Cynthia; Chin-Cottongim, Lisa G.; Coombes, Stephen A.; Corcos, Daniel M.

    2012-01-01

    It is well established that the prefrontal cortex is involved during memory-guided tasks whereas visually guided tasks are controlled in part by a frontal-parietal network. However, the nature of the transition from visually guided to memory-guided force control is not as well established. As such, this study examines the spatiotemporal pattern of brain activity that occurs during the transition from visually guided to memory-guided force control. We measured 128-channel scalp electroencephalography (EEG) in healthy individuals while they performed a grip force task. After visual feedback was removed, the first significant change in event-related activity occurred in the left central region by 300 ms, followed by changes in prefrontal cortex by 400 ms. Low-resolution electromagnetic tomography (LORETA) was used to localize the strongest activity to the left ventral premotor cortex and ventral prefrontal cortex. A second experiment altered visual feedback gain but did not require memory. In contrast to memory-guided force control, altering visual feedback gain did not lead to early changes in the left central and midline prefrontal regions. Decreasing the spatial amplitude of visual feedback did lead to changes in the midline central region by 300 ms, followed by changes in occipital activity by 400 ms. The findings show that subjects rely on sensorimotor memory processes involving left ventral premotor cortex and ventral prefrontal cortex after the immediate transition from visually guided to memory-guided force control. PMID:22696535

  18. RAVE: Rapid Visualization Environment

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.; Anderson, Kevin; Simoudis, Avangelos

    1994-01-01

    Visualization is used in the process of analyzing large, multidimensional data sets. However, selecting and creating visualizations that are appropriate for the characteristics of a particular data set and that satisfy the analyst's goals is difficult. The process consists of three tasks that are performed iteratively: generate, test, and refine. Performing these tasks requires several types of domain knowledge that data analysts often lack. Existing visualization systems and frameworks do not adequately support these tasks. In this paper we present the RApid Visualization Environment (RAVE), a knowledge-based system that interfaces with commercial visualization frameworks and assists a data analyst in quickly and easily generating, testing, and refining visualizations. RAVE was used to visualize in situ measurement data captured by spacecraft.
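
    The generate-test-refine loop described above can be sketched generically. RAVE's actual interfaces are not given in this abstract, so every function name below is a hypothetical placeholder; the sketch only illustrates the iterative control flow.

    ```python
    # Generic generate-test-refine loop; helper callables are hypothetical stand-ins,
    # not RAVE's real API.
    from typing import Any, Callable

    def generate_test_refine(data: Any,
                             generate: Callable[[Any], Any],
                             test: Callable[[Any, Any], float],
                             refine: Callable[[Any, Any], Any],
                             threshold: float = 0.9,
                             max_iter: int = 10) -> Any:
        """Iterate until the candidate visualization scores above threshold."""
        vis = generate(data)                 # propose an initial visualization
        for _ in range(max_iter):
            score = test(vis, data)          # evaluate against the analyst's goals
            if score >= threshold:
                break
            vis = refine(vis, data)          # adjust mappings, scales, view, ...
        return vis
    ```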

  19. Visually cued motor synchronization: modulation of fMRI activation patterns by baseline condition.

    PubMed

    Cerasa, Antonio; Hagberg, Gisela E; Bianciardi, Marta; Sabatini, Umberto

    2005-01-03

    A well-known issue in functional neuroimaging studies of motor synchronization is designing suitable control tasks able to discriminate between the brain structures involved in primary time-keeper functions and those related to other processes such as attentional effort. The aim of this work was to investigate how the predictability of stimulus onsets in the baseline condition modulates the activity in brain structures related to processes involved in time-keeper functions during the performance of a visually cued motor synchronization task (VM). The rationale behind this choice derives from the notion that varying stimulus predictability can alter the subject's attention and, consequently, the neural activity. For this purpose, baseline levels of BOLD activity were obtained from 12 subjects during a conventional-baseline condition (maintained fixation of the visual rhythmic stimuli presented in the VM task) and a random-baseline condition (maintained fixation of visual stimuli occurring randomly). fMRI analysis demonstrated that while brain areas with a documented role in basic time processing were detected independently of the baseline condition (right cerebellum, bilateral putamen, left thalamus, left superior temporal gyrus, left sensorimotor cortex, left dorsal premotor cortex and supplementary motor area), the ventral premotor cortex, caudate nucleus, insula and inferior frontal gyrus exhibited baseline-dependent activation. We conclude that maintained fixation of unpredictable visual stimuli can be employed in order to reduce or eliminate neural activity related to attentional components present in the synchronization task.

  20. Object representations in visual working memory change according to the task context.

    PubMed

    Balaban, Halely; Luria, Roy

    2016-08-01

    This study investigated whether an item's representation in visual working memory (VWM) can be updated according to changes in the global task context. We used a modified change detection paradigm, in which the items moved before the retention interval. In all of the experiments, we presented identical color-color conjunction items that were arranged to provide a common fate Gestalt grouping cue during their movement. Task context was manipulated by adding a condition highlighting either the integrated interpretation of the conjunction items or their individuated interpretation. We monitored the contralateral delay activity (CDA) as an online marker of VWM. Experiment 1 employed only a minimal global context; the conjunction items were integrated during their movement, but then were partially individuated, at a late stage of the retention interval. The same conjunction items were perfectly integrated in an integration context (Experiment 2). An individuation context successfully produced strong individuation, already during the movement, overriding Gestalt grouping cues (Experiment 3). In Experiment 4, a short priming of the individuation context managed to individuate the conjunction items immediately after the Gestalt cue was no longer available. Thus, the representations of identical items changed according to the task context, suggesting that VWM interprets incoming input according to global factors which can override perceptual cues. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. The effects of stimulus modality and task integrality: Predicting dual-task performance and workload from single-task levels

    NASA Technical Reports Server (NTRS)

    Hart, S. G.; Shively, R. J.; Vidulich, M. A.; Miller, R. C.

    1986-01-01

    The influence of stimulus modality and task difficulty on workload and performance was investigated. The goal was to quantify the cost (in terms of response time and experienced workload) incurred when essentially serial task components shared common elements (e.g., the response to one initiated the other) which could be accomplished in parallel. The experimental tasks were based on the Fittsberg paradigm, in which the solution to a Sternberg-type memory task determines which of two identical Fitts targets is acquired. Previous research suggested that such functionally integrated dual tasks are performed with substantially less workload and faster response times than would be predicted by summing single-task components when both are presented in the same stimulus modality (visual). The physical integration of task elements was varied (although their functional relationship remained the same) to determine whether dual-task facilitation would persist if task components were presented in different sensory modalities. Again, it was found that the cost of performing the two-stage task was considerably less than the sum of component single-task levels when both were presented visually. Less facilitation was found when task elements were presented in different sensory modalities. These results suggest the importance of distinguishing concurrent tasks that compete for limited resources from those that beneficially share common resources when selecting the stimulus modalities for information displays.

  2. Intelligence and information processing during a visual search task in children: an event-related potential study.

    PubMed

    Zhang, Qiong; Shi, Jiannong; Luo, Yuejia; Zhao, Daheng; Yang, Jie

    2006-05-15

    To investigate the differences in event-related potential parameters related to children's intelligence, we selected 15 individuals from an experimental class of intellectually gifted children and 13 intellectually average children as controls to complete three types of visual search task (Chinese words, English letters, and Arabic numbers). We recorded the electroencephalogram and calculated the peak latencies and amplitudes. Our results suggest comparatively larger P3 amplitudes and shorter P3 latencies in more intelligent individuals than in less intelligent individuals, but this expected neural efficiency effect interacted with task content. The differences are attributed to a more spatially and temporally coordinated neural network in the more intelligent children.
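
    The P3 measures reported above (peak amplitude and latency) can be extracted from an averaged waveform with a few lines of code. The 300-600 ms search window and the 500 Hz sampling rate below are assumptions, not the study's parameters.

    ```python
    # Minimal sketch: P3 peak amplitude and latency from an averaged ERP waveform.
    import numpy as np

    def p3_peak(erp: np.ndarray, srate: float = 500.0,
                tmin: float = 0.300, tmax: float = 0.600):
        """Return (amplitude, latency_s) of the largest positive deflection in the window."""
        times = np.arange(erp.size) / srate          # time axis, stimulus onset at t = 0
        window = (times >= tmin) & (times <= tmax)
        idx = np.argmax(erp[window])                 # index of the positive peak
        return erp[window][idx], times[window][idx]
    ```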

  3. Which technology to investigate visual perception in sport: video vs. virtual reality.

    PubMed

    Vignais, Nicolas; Kulpa, Richard; Brault, Sébastien; Presse, Damien; Bideau, Benoit

    2015-02-01

    Visual information uptake is a fundamental element of sports involving interceptive tasks. Several methodologies, like video and methods based on virtual environments, are currently employed to analyze visual perception during sport situations. Both techniques have advantages and drawbacks. The goal of this study is to determine which of these technologies may be preferentially used to analyze visual information uptake during a sport situation. To this aim, we compared a handball goalkeeper's performance using two standardized methodologies: video clip and virtual environment. We examined this performance for two response tasks: an uncoupled task (goalkeepers show where the ball ends) and a coupled task (goalkeepers try to intercept the virtual ball). Variables investigated in this study were percentage of correct zones, percentage of correct responses, radial error and response time. The results showed that handball goalkeepers were more effective, more accurate and started to intercept earlier when facing a virtual handball thrower than when facing the video clip. These findings suggested that the analysis of visual information uptake for handball goalkeepers was better performed by using a 'virtual reality'-based methodology. Technical and methodological aspects of these findings are discussed further. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Effect of subliminal visual material on an auditory signal detection task.

    PubMed

    Moroney, E; Bross, M

    1984-02-01

    An experiment assessed the effect of subliminally embedded visual material on an auditory detection task. Twenty-two women and 19 men were presented tachistoscopically with words designated as "emotional" or "neutral" on the basis of prior GSRs and a Word Rating List, under four conditions: (a) Unembedded Neutral, (b) Embedded Neutral, (c) Unembedded Emotional, and (d) Embedded Emotional. On each trial, subjects made forced choices concerning the presence or absence of an auditory tone (1000 Hz) at threshold level; hit and false-alarm rates were used to compute non-parametric indices of sensitivity (A') and response bias (B"). While overall analyses of variance yielded no significant differences, further examination of the data suggests the presence of subliminally "receptive" and "non-receptive" subpopulations.
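
    One common non-parametric formulation of the sensitivity and bias indices named above is Grier's A' and B''; whether this is the exact formulation used in the study is an assumption. A minimal sketch, assuming the hit rate is at least as large as the false-alarm rate:

    ```python
    # Grier's (1971) non-parametric sensitivity (A') and bias (B'') from hit (H)
    # and false-alarm (F) rates, assuming H >= F. Treating these as the study's
    # exact formulas is an assumption.
    def a_prime(H: float, F: float) -> float:
        return 0.5 + ((H - F) * (1 + H - F)) / (4 * H * (1 - F))

    def b_double_prime(H: float, F: float) -> float:
        return (H * (1 - H) - F * (1 - F)) / (H * (1 - H) + F * (1 - F))

    print(a_prime(0.8, 0.3), b_double_prime(0.8, 0.3))   # illustrative rates
    ```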

  5. Brain networks for visual creativity: a functional connectivity study of planning a visual artwork.

    PubMed

    De Pisapia, Nicola; Bacci, Francesca; Parrott, Danielle; Melcher, David

    2016-12-19

    Throughout recorded history, and across cultures, humans have made visual art. In recent years, the neural bases of creativity, including artistic creativity, have become a topic of interest. In this study we investigated the neural bases of the visual creative process with both professional artists and a group of control participants. We tested the idea that creativity (planning an artwork) would influence the functional connectivity between regions involved in the default mode network (DMN), implicated in divergent thinking and generating novel ideas, and the executive control network (EN), implicated in evaluating and selecting ideas. We measured functional connectivity with functional Magnetic Resonance Imaging (fMRI) during three different conditions: rest, visual imagery of the alphabet and planning an artwork to be executed immediately after the scanning session. Consistent with our hypothesis, we found stronger connectivity between areas of the DMN and EN during the creative task, and this difference was enhanced in professional artists. These findings suggest that creativity involves an expert balance of two brain networks typically viewed as being in opposition.

  6. Brain networks for visual creativity: a functional connectivity study of planning a visual artwork

    PubMed Central

    De Pisapia, Nicola; Bacci, Francesca; Parrott, Danielle; Melcher, David

    2016-01-01

    Throughout recorded history, and across cultures, humans have made visual art. In recent years, the neural bases of creativity, including artistic creativity, have become a topic of interest. In this study we investigated the neural bases of the visual creative process with both professional artists and a group of control participants. We tested the idea that creativity (planning an artwork) would influence the functional connectivity between regions involved in the default mode network (DMN), implicated in divergent thinking and generating novel ideas, and the executive control network (EN), implicated in evaluating and selecting ideas. We measured functional connectivity with functional Magnetic Resonance Imaging (fMRI) during three different conditions: rest, visual imagery of the alphabet and planning an artwork to be executed immediately after the scanning session. Consistent with our hypothesis, we found stronger connectivity between areas of the DMN and EN during the creative task, and this difference was enhanced in professional artists. These findings suggest that creativity involves an expert balance of two brain networks typically viewed as being in opposition. PMID:27991592

  7. Handwriting generates variable visual output to facilitate symbol learning.

    PubMed

    Li, Julia X; James, Karin H

    2016-03-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing 2 hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel Greek symbols across 6 different types of learning conditions: 3 involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and 3 involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception of both variable and similar forms. Comparisons across the 6 conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output, supporting the notion of developmental change through brain-body-environment interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. Visual Search Elicits the Electrophysiological Marker of Visual Working Memory

    PubMed Central

    Emrich, Stephen M.; Al-Aidroos, Naseem; Pratt, Jay; Ferber, Susanne

    2009-01-01

    Background: Although limited in capacity, visual working memory (VWM) plays an important role in many aspects of visually-guided behavior. Recent experiments have demonstrated an electrophysiological marker of VWM encoding and maintenance, the contralateral delay activity (CDA), which has been shown in multiple tasks that have both explicit and implicit memory demands. Here, we investigate whether the CDA is evident during visual search, a thoroughly-researched task that is a hallmark of visual attention but has no explicit memory requirements. Methodology/Principal Findings: The results demonstrate that the CDA is present during a lateralized search task, and that it is similar in amplitude to the CDA observed in a change-detection task, but peaks slightly later. The changes in CDA amplitude during search were strongly correlated with VWM capacity, as well as with search efficiency. These results were paralleled by behavioral findings showing a strong correlation between VWM capacity and search efficiency. Conclusions/Significance: We conclude that the activity observed during visual search was generated by the same neural resources that subserve VWM, and that this activity reflects the maintenance of previously searched distractors. PMID:19956663
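
    The capacity estimate that CDA amplitude is typically correlated with is Cowan's K from change-detection performance; treating that as the measure used here is an assumption. A minimal sketch of the capacity estimate and the across-participant correlation, with placeholder data:

    ```python
    # Cowan's K from change-detection hit and false-alarm rates, then a Pearson
    # correlation with search efficiency across participants. Data are placeholders.
    import numpy as np
    from scipy.stats import pearsonr

    def cowan_k(hit_rate: float, fa_rate: float, set_size: int) -> float:
        return set_size * (hit_rate - fa_rate)

    k = np.array([cowan_k(h, f, 4) for h, f in [(0.90, 0.10), (0.80, 0.20), (0.95, 0.05)]])
    search_slope_ms = np.array([25.0, 40.0, 20.0])   # ms/item, hypothetical values
    r, p = pearsonr(k, search_slope_ms)
    ```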

  9. Visual System Involvement in Patients with Newly Diagnosed Parkinson Disease.

    PubMed

    Arrigo, Alessandro; Calamuneri, Alessandro; Milardi, Demetrio; Mormina, Enricomaria; Rania, Laura; Postorino, Elisa; Marino, Silvia; Di Lorenzo, Giuseppe; Anastasi, Giuseppe Pio; Ghilardi, Maria Felice; Aragona, Pasquale; Quartarone, Angelo; Gaeta, Michele

    2017-12-01

    Purpose: To assess intracranial visual system changes of newly diagnosed Parkinson disease in drug-naïve patients. Materials and Methods: Twenty patients with newly diagnosed Parkinson disease and 20 age-matched control subjects were recruited. Magnetic resonance (MR) imaging (T1-weighted and diffusion-weighted imaging) was performed with a 3-T MR imager. White matter changes were assessed by exploring a white matter diffusion profile by means of diffusion-tensor imaging-based parameters and constrained spherical deconvolution-based connectivity analysis and by means of white matter voxel-based morphometry (VBM). Alterations in occipital gray matter were investigated by means of gray matter VBM. Morphologic analysis of the optic chiasm was based on manual measurement of regions of interest. Statistical testing included analysis of variance, t tests, and permutation tests. Results: In the patients with Parkinson disease, significant alterations were found in optic radiation connectivity distribution, with decreased lateral geniculate nucleus V2 density (F, -8.28; P < .05), a significant increase in optic radiation mean diffusivity (F, 7.5; P = .014), and a significant reduction in white matter concentration. VBM analysis also showed a significant reduction in visual cortical volumes (P < .05). Moreover, the chiasmatic area and volume were significantly reduced (P < .05). Conclusion: The findings show that visual system alterations can be detected in early stages of Parkinson disease and that the entire intracranial visual system can be involved. © RSNA, 2017. Online supplemental material is available for this article.

  10. Exploring the neural correlates of visual creativity

    PubMed Central

    Liew, Sook-Lei; Dandekar, Francesco

    2013-01-01

    Although creativity has been called the most important of all human resources, its neural basis is still unclear. In the current study, we used fMRI to measure neural activity in participants solving a visuospatial creativity problem that involves divergent thinking and has been considered a canonical right hemisphere task. As hypothesized, both the visual creativity task and the control task as compared to rest activated a variety of areas including the posterior parietal cortex bilaterally and motor regions, which are known to be involved in visuospatial rotation of objects. However, directly comparing the two tasks indicated that the creative task more strongly activated left hemisphere regions including the posterior parietal cortex, the premotor cortex, dorsolateral prefrontal cortex (DLPFC) and the medial PFC. These results demonstrate that even in a task that is specialized to the right hemisphere, robust parallel activity in the left hemisphere supports creative processing. Furthermore, the results support the notion that higher motor planning may be a general component of creative improvisation and that such goal-directed planning of novel solutions may be organized top-down by the left DLPFC and by working memory processing in the medial prefrontal cortex. PMID:22349801

  11. Goal-Directed Visual Processing Differentially Impacts Human Ventral and Dorsal Visual Representations

    PubMed Central

    2017-01-01

    Recent studies have challenged the ventral/“what” and dorsal/“where” two-visual-processing-pathway view by showing the existence of “what” and “where” information in both pathways. Is the two-pathway distinction still valid? Here, we examined how goal-directed visual information processing may differentially impact visual representations in these two pathways. Using fMRI and multivariate pattern analysis in three experiments on human participants (57% females), we manipulated whether color or shape was task-relevant and how the two features were conjoined, and examined shape-based object category decoding in occipitotemporal and parietal regions. We found that object category representations in all the regions examined were influenced by whether or not object shape was task-relevant. This task effect, however, tended to decrease as task-relevant and irrelevant features were more integrated, reflecting the well-known object-based feature encoding. Interestingly, task relevance played a relatively minor role in driving the representational structures of early visual and ventral object regions, which were driven predominantly by variations in object shapes. In contrast, the effect of task was much greater in dorsal than ventral regions, with object category and task relevance both contributing significantly to the representational structures of the dorsal regions. These results showed that, whereas visual representations in the ventral pathway are more invariant and reflect “what an object is,” those in the dorsal pathway are more adaptive and reflect “what we do with it.” Thus, despite the existence of “what” and “where” information in both visual processing pathways, the two pathways may still differ fundamentally in their roles in visual information representation. SIGNIFICANCE STATEMENT: Visual information is thought to be processed in two distinctive pathways: the ventral pathway that processes “what” an object is and the dorsal pathway
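
    A minimal sketch of the kind of cross-validated category decoding described above, using a linear classifier on ROI voxel patterns. The classifier choice, cross-validation scheme, and synthetic data are assumptions, not the authors' exact pipeline.

    ```python
    # Illustrative MVPA decoding sketch: cross-validated classification of object
    # category from trial-by-voxel activity patterns (synthetic data).
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_trials, n_voxels = 120, 300
    X = rng.standard_normal((n_trials, n_voxels))   # trial-by-voxel patterns
    y = rng.integers(0, 2, n_trials)                # two object categories

    acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
    print(f"mean decoding accuracy: {acc:.2f}")
    ```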

  12. Visual scan-path analysis with feature space transient fixation moments

    NASA Astrophysics Data System (ADS)

    Dempere-Marco, Laura; Hu, Xiao-Peng; Yang, Guang-Zhong

    2003-05-01

    The study of eye movements provides useful insight into the cognitive processes underlying visual search tasks. The analysis of the dynamics of eye movements has often been approached from a purely spatial perspective. In many cases, however, it may not be possible to define meaningful or consistent dynamics without considering the features underlying the scan paths. In this paper, the feature space is defined through the concept of visual similarity and non-linear low-dimensional embedding, which maps the image space onto a low-dimensional feature manifold that preserves the intrinsic similarity of image patterns. This enables the definition of perceptually meaningful features without the use of domain-specific knowledge. Building on this, the paper introduces a new concept called Feature Space Transient Fixation Moments (TFM) and uses it to tackle the problem of feature-space representation of visual search. We demonstrate the practical value of this concept for characterizing the dynamics of eye movements in goal-directed visual search tasks. We also illustrate how this model can be used to elucidate the fundamental steps involved in skilled search tasks through the evolution of transient fixation moments.
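
    The feature-manifold construction described above can be approximated with an off-the-shelf non-linear embedding. The abstract does not name the specific algorithm, so Isomap is used here purely as a stand-in, and the image-patch data are synthetic.

    ```python
    # Sketch: embed flattened image patches around fixations into a 2-D feature
    # manifold that preserves local similarity. Isomap is an illustrative choice.
    import numpy as np
    from sklearn.manifold import Isomap

    rng = np.random.default_rng(2)
    patches = rng.random((500, 32 * 32))        # 500 flattened 32x32 image patches
    embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(patches)
    # 'embedding' assigns each patch a 2-D coordinate on the feature manifold.
    ```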

  13. Visual short-term memory load reduces retinotopic cortex response to contrast.

    PubMed

    Konstantinou, Nikos; Bahrami, Bahador; Rees, Geraint; Lavie, Nilli

    2012-11-01

    Load Theory of attention suggests that high perceptual load in a task leads to reduced sensory visual cortex response to task-unrelated stimuli resulting in "load-induced blindness" [e.g., Lavie, N. Attention, distraction and cognitive control under load. Current Directions in Psychological Science, 19, 143-148, 2010; Lavie, N. Distracted and confused?: Selective attention under load. Trends in Cognitive Sciences, 9, 75-82, 2005]. Consideration of the findings that visual STM (VSTM) involves sensory recruitment [e.g., Pasternak, T., & Greenlee, M. Working memory in primate sensory systems. Nature Reviews Neuroscience, 6, 97-107, 2005] within Load Theory led us to a new hypothesis regarding the effects of VSTM load on visual processing. If VSTM load draws on sensory visual capacity, then similar to perceptual load, high VSTM load should also reduce visual cortex response to incoming stimuli leading to a failure to detect them. We tested this hypothesis with fMRI and behavioral measures of visual detection sensitivity. Participants detected the presence of a contrast increment during the maintenance delay in a VSTM task requiring maintenance of color and position. Increased VSTM load (manipulated by increased set size) led to reduced retinotopic visual cortex (V1-V3) responses to contrast as well as reduced detection sensitivity, as we predicted. Additional visual detection experiments established a clear tradeoff between the amount of information maintained in VSTM and detection sensitivity, while ruling out alternative accounts for the effects of VSTM load in terms of differential spatial allocation strategies or task difficulty. These findings extend Load Theory to demonstrate a new form of competitive interactions between early visual cortex processing and visual representations held in memory under load and provide a novel line of support for the sensory recruitment hypothesis of VSTM.
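
    Detection sensitivity in this kind of contrast-detection experiment is commonly summarized with the equal-variance signal detection index d'; assuming that measure (the abstract does not state the exact index), a minimal sketch:

    ```python
    # d' = z(H) - z(FA) under equal-variance SDT assumptions; rates are clipped
    # to avoid infinite z-scores. Whether this exact index was used is an assumption.
    import numpy as np
    from scipy.stats import norm

    def d_prime(hit_rate: float, fa_rate: float, eps: float = 1e-3) -> float:
        h = np.clip(hit_rate, eps, 1 - eps)
        f = np.clip(fa_rate, eps, 1 - eps)
        return norm.ppf(h) - norm.ppf(f)

    print(d_prime(0.85, 0.20))   # lower hit rates under high VSTM load yield lower d'
    ```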

  14. Seeing without knowing: task relevance dissociates between visual awareness and recognition.

    PubMed

    Eitam, Baruch; Shoval, Roy; Yeshurun, Yaffa

    2015-03-01

    We demonstrate that task relevance dissociates between visual awareness and knowledge activation to create a state of seeing without knowing: visual awareness of familiar stimuli without recognizing them. We rely on the fact that in order to experience a Kanizsa illusion, participants must be aware of its inducers. While people can indicate the orientation of the illusory rectangle with great ease (signifying that they have consciously experienced the illusion's inducers), almost 30% of them could not report the inducers' color. Thus, people can see, in the sense of phenomenally experiencing, but not know, in the sense of recognizing what the object is or activating appropriate knowledge about it. Experiment 2 tests whether relevance-based selection operates within objects and shows that, contrary to the pattern of results found with features of different objects in our previous studies and replicated in Experiment 1, selection does not occur when both relevant and irrelevant features belong to the same object. We discuss these findings in relation to the existing theories of consciousness and to attention and inattentional blindness, and the role of cognitive load, object-based attention, and the use of self-reports as measures of awareness. © 2015 New York Academy of Sciences.

  15. No psychological effect of color context in a low level vision task

    PubMed Central

    Pedley, Adam; Wade, Alex R

    2013-01-01

    Background: A remarkable series of recent papers have shown that colour can influence performance in cognitive tasks. In particular, they suggest that viewing a participant number printed in red ink or other red ancillary stimulus elements improves performance in tasks requiring local processing and impedes performance in tasks requiring global processing whilst the reverse is true for the colour blue. The tasks in these experiments require high level cognitive processing such as analogy solving or remote association tests and the chromatic effect on local vs. global processing is presumed to involve widespread activation of the autonomic nervous system. If this is the case, we might expect to see similar effects on all local vs. global task comparisons. To test this hypothesis, we asked whether chromatic cues also influence performance in tasks involving low level visual feature integration. Methods: Subjects performed either local (contrast detection) or global (form detection) tasks on achromatic dynamic Glass pattern stimuli. Coloured instructions, target frames and fixation points were used to attempt to bias performance to different task types. Based on previous literature, we hypothesised that red cues would improve performance in the (local) contrast detection task but would impede performance in the (global) form detection task. Results: A two-way, repeated measures, analysis of covariance (2×2 ANCOVA) with gender as a covariate, revealed no influence of colour on either task, F(1,29) = 0.289, p = 0.595, partial η² = 0.002. Additional analysis revealed no significant differences in only the first attempts of the tasks or in the improvement in performance between trials. Discussion: We conclude that motivational processes elicited by colour perception do not influence neuronal signal processing in the early visual system, in stark contrast to their putative effects on processing in higher areas. PMID:25075280

  16. No psychological effect of color context in a low level vision task.

    PubMed

    Pedley, Adam; Wade, Alex R

    2013-01-01

    A remarkable series of recent papers have shown that colour can influence performance in cognitive tasks. In particular, they suggest that viewing a participant number printed in red ink or other red ancillary stimulus elements improves performance in tasks requiring local processing and impedes performance in tasks requiring global processing whilst the reverse is true for the colour blue. The tasks in these experiments require high level cognitive processing such as analogy solving or remote association tests and the chromatic effect on local vs. global processing is presumed to involve widespread activation of the autonomic nervous system. If this is the case, we might expect to see similar effects on all local vs. global task comparisons. To test this hypothesis, we asked whether chromatic cues also influence performance in tasks involving low level visual feature integration. Subjects performed either local (contrast detection) or global (form detection) tasks on achromatic dynamic Glass pattern stimuli. Coloured instructions, target frames and fixation points were used to attempt to bias performance to different task types. Based on previous literature, we hypothesised that red cues would improve performance in the (local) contrast detection task but would impede performance in the (global) form detection task. A two-way, repeated measures, analysis of covariance (2×2 ANCOVA) with gender as a covariate, revealed no influence of colour on either task, F(1,29) = 0.289, p = 0.595, partial η² = 0.002. Additional analysis revealed no significant differences in only the first attempts of the tasks or in the improvement in performance between trials. We conclude that motivational processes elicited by colour perception do not influence neuronal signal processing in the early visual system, in stark contrast to their putative effects on processing in higher areas.

  17. Multisensory and modality specific processing of visual speech in different regions of the premotor cortex

    PubMed Central

    Callan, Daniel E.; Jones, Jeffery A.; Callan, Akiko

    2014-01-01

    Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex (PMC) has been shown to be active during both observation and execution of action (“Mirror System” properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. For this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker's articulating face and heard her voice), visual only (only saw the speaker's articulating face), and audio only (only heard the speaker's voice) conditions with varying audio signal-to-noise ratios in order to determine the regions of the PMC involved with multisensory and modality specific processing of visual speech gestures. The task was designed so that identification could be made with a high level of accuracy from visual only stimuli to control for task difficulty and differences in intelligibility. The results of the functional magnetic resonance imaging (fMRI) analysis for visual only and audio-visual conditions showed overlapping activity in inferior frontal gyrus and PMC. The left ventral inferior premotor cortex (PMvi) showed properties of multimodal (audio-visual) enhancement with a degraded auditory signal. The left inferior parietal lobule and right cerebellum also showed these properties. The left ventral superior and dorsal premotor cortex (PMvs/PMd) did not show this multisensory enhancement effect, but there was greater activity for the visual only over audio-visual conditions in these areas. The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas, more superior and dorsal regions of the PMC are involved with mapping unimodal (in this case visual) sensory features of the speech signal with

  18. The Visual Cycle in the Inner Retina of Chicken and the Involvement of Retinal G-Protein-Coupled Receptor (RGR).

    PubMed

    Díaz, Nicolás M; Morera, Luis P; Tempesti, Tomas; Guido, Mario E

    2017-05-01

    The vertebrate retina contains typical photoreceptor (PR) cones and rods responsible for day/night vision, respectively, and intrinsically photosensitive retinal ganglion cells (ipRGCs) involved in the regulation of non-image-forming tasks. Rhodopsin/cone opsin photopigments in visual PRs or melanopsin (Opn4) in ipRGCs utilizes retinaldehyde as a chromophore. The retinoid regeneration process denominated as "visual cycle" involves the retinal pigment epithelium (RPE) or Müller glial cells. Opn4, on the contrary, has been characterized as a bi/tristable photopigment, in which a photon of one wavelength isomerizes 11-cis to all-trans retinal (Ral), with a second photon re-isomerizing it back. However, it is unknown how the chromophore is further metabolized in the inner retina. Nor is it yet clear whether an alternative secondary cycle occurs involving players such as the retinal G-protein-coupled receptor (RGR), a putative photoisomerase of unidentified inner retinal activity. Here, we investigated the role of RGR in retinoid photoisomerization in Opn4x (Xenopus ortholog) (+) RGC primary cultures free of RPE and other cells from chicken embryonic retinas. Opn4x (+) RGCs display significant photic responses by calcium fluorescent imaging and photoisomerize exogenous all-trans to 11-cis Ral and other retinoids. RGR was found to be expressed in developing retina and in primary cultures; when its expression was knocked down, the levels of 11-cis, all-trans Ral, and all-trans retinol in cultures exposed to light were significantly higher and those in all-trans retinyl esters lower than in dark controls. The results support a novel role for RGR in ipRGCs to modulate retinaldehyde levels in light, keeping the balance of inner retinal retinoid pools.

  19. Simulator study of the effect of visual-motion time delays on pilot tracking performance with an audio side task

    NASA Technical Reports Server (NTRS)

    Riley, D. R.; Miller, G. K., Jr.

    1978-01-01

    The effect of time delay in the visual and motion cues of a flight simulator on pilot performance was determined for a task of tracking a target aircraft that oscillated sinusoidally in altitude only. An audio side task was used to ensure that the subject was fully occupied at all times. The results indicate that, within the test grid employed, about the same acceptable time delay (250 msec) was obtained for a single aircraft (fighter type) by each of two subjects for both fixed-base and motion-base conditions. Acceptable time delay is defined as the largest amount of delay that can be inserted simultaneously into the visual and motion cues before performance degradation occurs. A statistical analysis of the data was made to establish this value of time delay. The audio side task provided quantitative data that documented the subject's work level.

  20. Infantile nystagmus adapts to visual demand.

    PubMed

    Wiggins, Debbie; Woodhouse, J Margaret; Margrain, Tom H; Harris, Christopher M; Erichsen, Jonathan T

    2007-05-01

    The purpose of this study was to determine the effect of visual demand on the nystagmus waveform. Individuals with infantile nystagmus syndrome (INS) commonly report that making an effort to see can intensify their nystagmus and adversely affect vision. However, such an effect has never been confirmed experimentally. The eye movement behavior of 11 subjects with INS was recorded at different gaze angles while the subjects viewed visual targets under two conditions: above and then at resolution threshold. Eye movements were recorded by infrared oculography and visual acuity (VA) was measured using Landolt C targets and a two-alternative, forced-choice (2AFC) staircase procedure. Eye movement data were analyzed at the null zone for changes in amplitude, frequency, intensity, and foveation characteristics. Waveform type was also noted under the two conditions. Data from the 11 subjects revealed a significant reduction in nystagmus amplitude (P < 0.05), frequency (P < 0.05), and intensity (P < 0.01) when target size was at visual threshold. The percentage of time the eye spent within the low-velocity window (i.e., foveation) significantly increased when target size was at visual threshold (P < 0.05). Furthermore, a change in waveform type with increased visual demand was exhibited by two subjects. The results indicate that increased visual demand modifies the nystagmus waveform favorably (and possibly adaptively), producing a significant reduction in nystagmus intensity and prolonged foveation. These findings contradict previous anecdotal reports that visual effort intensifies the nystagmus eye movement at the cost of visual performance. This discrepancy may be attributable to the lack of psychological stress involved in the visual task reported here. This is consistent with the suggestion that it is the visual importance of the task to the individual, rather than visual demand per se, that exacerbates INS. Further studies are needed to investigate quantitatively the effects of stress and psychological
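
    The foveation measure reported above (percentage of time within a low-velocity window) can be computed directly from an eye-position trace. The 4 deg/s velocity criterion and 500 Hz sampling rate below are illustrative assumptions, not the study's values.

    ```python
    # Sketch: percentage of samples in which eye velocity stays inside a
    # low-velocity ("foveation") window, from a horizontal position trace in degrees.
    import numpy as np

    def foveation_percentage(position_deg: np.ndarray, srate: float = 500.0,
                             velocity_criterion: float = 4.0) -> float:
        velocity = np.gradient(position_deg) * srate        # deg/s
        return 100.0 * np.mean(np.abs(velocity) < velocity_criterion)
    ```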

  1. Are forward and backward recall the same? A dual-task study of digit recall.

    PubMed

    St Clair-Thompson, Helen L; Allen, Richard J

    2013-05-01

    There is some debate surrounding the cognitive resources underlying backward digit recall. Some researchers consider it to differ from forward digit recall due to the involvement of executive control, while others suggest that backward recall involves visuospatial resources. Five experiments therefore investigated the role of executive-attentional and visuospatial resources in both forward and backward digit recall. In the first, participants completed visuospatial 0-back and 2-back tasks during the encoding of information to be remembered. The concurrent tasks did not differentially disrupt performance on backward digit recall, relative to forward digit recall. Experiment 2 shifted concurrent load to the recall phase instead and, in this case, revealed a larger effect of both tasks on backward recall, relative to forwards recall, suggesting that backward recall may draw on additional resources during the recall phase and that these resources are visuospatial in nature. Experiments 3 and 4 then further investigated the role of visual processes in forward and backward recall using dynamic visual noise (DVN). In Experiment 3, DVN was presented during encoding of information to be remembered and had no effect upon performance. However, in Experiment 4, it was presented during the recall phase, and the results provided evidence of a role for visual imagery in backward digit recall. These results were replicated in Experiment 5, in which the same list length was used for forward and backward recall tasks. The findings are discussed in terms of both theoretical and practical implications.

  2. Impaired visual recognition of biological motion in schizophrenia.

    PubMed

    Kim, Jejoong; Doop, Mikisha L; Blake, Randolph; Park, Sohee

    2005-09-15

    Motion perception deficits have been suggested to be an important feature of schizophrenia but the behavioral consequences of such deficits are unknown. Biological motion refers to the movements generated by living beings. The human visual system rapidly and effortlessly detects and extracts socially relevant information from biological motion. A deficit in biological motion perception may have significant consequences for detecting and interpreting social information. Schizophrenia patients and matched healthy controls were tested on two visual tasks: recognition of human activity portrayed in point-light animations (biological motion task) and a perceptual control task involving detection of a grouped figure against the background noise (global-form task). Both tasks required detection of a global form against background noise but only the biological motion task required the extraction of motion-related information. Schizophrenia patients performed as well as the controls in the global-form task, but were significantly impaired on the biological motion task. In addition, deficits in biological motion perception correlated with impaired social functioning as measured by the Zigler social competence scale [Zigler, E., Levine, J. (1981). Premorbid competence in schizophrenia: what is being measured? Journal of Consulting and Clinical Psychology, 49, 96-105.]. The deficit in biological motion processing, which may be related to the previously documented deficit in global motion processing, could contribute to abnormal social functioning in schizophrenia.

  3. Overlapping neural circuits for visual attention and eye movements in the human cerebellum.

    PubMed

    Striemer, Christopher L; Chouinard, Philippe A; Goodale, Melvyn A; de Ribaupierre, Sandrine

    2015-03-01

    Previous research in patients with cerebellar damage suggests that the cerebellum plays a role in covert visual attention. One limitation of some of these studies is that they examined patients with heterogeneous cerebellar damage. As a result, the patterns of reported deficits have been inconsistent. In the current study, we used functional neuroimaging (fMRI) in healthy adults (N=14) to examine whether or not the cerebellum plays a role in covert visual attention. Participants performed two covert attention tasks in which they were cued exogenously (with peripheral flashes) or endogenously (using directional arrows) to attend to marked locations in the visual periphery without moving their eyes. We compared BOLD activation in these covert attention conditions to a number of control conditions including: the same attention tasks with eye movements, a target detection task with no cueing, and a self-paced button-press task. Subtracting these control conditions from the covert attention conditions allowed us to effectively remove the contribution of the cerebellum to motor output. In addition to the usual fronto-parietal networks commonly engaged by these attention tasks, lobule VI of the vermis in the cerebellum was also activated when participants performed the covert attention tasks with or without eye movements. Interestingly, this effect was larger for exogenous compared to endogenous cueing. These results, in concert with recent patient studies, provide independent yet converging evidence that the same cerebellar structures that are involved in eye movements are also involved in visuospatial attention. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. The shaping of information by visual metaphors.

    PubMed

    Ziemkiewicz, Caroline; Kosara, Robert

    2008-01-01

    The nature of an information visualization can be considered to lie in the visual metaphors it uses to structure information. The process of understanding a visualization therefore involves an interaction between these external visual metaphors and the user's internal knowledge representations. To investigate this claim, we conducted an experiment to test the effects of visual metaphor and verbal metaphor on the understanding of tree visualizations. Participants answered simple data comprehension questions while viewing either a treemap or a node-link diagram. Questions were worded to reflect a verbal metaphor that was either compatible or incompatible with the visualization a participant was using. The results (based on correctness and response time) suggest that the visual metaphor indeed affects how a user derives information from a visualization. Additionally, we found that the degree to which a user is affected by the metaphor is strongly correlated with the user's ability to answer task questions correctly. These findings are a first step towards illuminating how visual metaphors shape user understanding, and have significant implications for the evaluation, application, and theory of visualization.

  5. Inhibition in movement plan competition: reach trajectories curve away from remembered and task-irrelevant present but not from task-irrelevant past visual stimuli.

    PubMed

    Moehler, Tobias; Fiehler, Katja

    2017-11-01

    The current study investigated the role of automatic encoding and maintenance of remembered, past, and present visual distractors for reach movement planning. Previous research on eye movements showed that saccades curve away from locations actively kept in working memory and also from task-irrelevant perceptually present visual distractors, but not from task-irrelevant past distractors. Curvature away has been associated with an inhibitory mechanism resolving the competition between multiple active movement plans. Here, we examined whether reach movements are subject to a similar inhibitory mechanism and thus show systematic modulation of reach trajectories when the location of a previously presented distractor has to be (a) maintained in working memory or (b) ignored, or when (c) the distractor is perceptually present. Participants performed vertical reach movements on a computer monitor from a home to a target location. Distractors appeared laterally and near or far from the target (equidistant from central fixation). We found that reaches curved away from the distractors located close to the target when the distractor location had to be memorized and when it was perceptually present, but not when the past distractor had to be ignored. Our findings suggest that automatically encoding present distractors and actively maintaining the location of past distractors in working memory evoke a similar response competition resolved by inhibition, as has been previously shown for saccadic eye movements.
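
    Curvature away from a distractor is typically quantified as the largest perpendicular deviation of the reach path from the straight home-to-target line; treating that as the measure used here, and the sign convention below, are assumptions. A minimal sketch:

    ```python
    # Signed maximum deviation of a 2-D reach trajectory (n_samples x 2 array of
    # x,y positions) from the straight home-to-target line.
    import numpy as np

    def max_deviation(xy: np.ndarray, home: np.ndarray, target: np.ndarray) -> float:
        line = target - home
        line = line / np.linalg.norm(line)          # unit vector along the ideal path
        rel = xy - home
        # signed perpendicular distance of every sample from the home-target line
        perp = rel[:, 0] * line[1] - rel[:, 1] * line[0]
        return perp[np.argmax(np.abs(perp))]
    ```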

  6. Amplitude modulation of steady-state visual evoked potentials by event-related potentials in a working memory task

    PubMed Central

    Yao, Dezhong; Tang, Yu; Huang, Yilan; Su, Sheng

    2009-01-01

    Previous studies have shown that the amplitude and phase of the steady-state visual-evoked potential (SSVEP) can be influenced by a cognitive task, yet the mechanism of this influence has not been understood. As the event-related potential (ERP) is the direct neural electric response to a cognitive task, studying the relationship between the SSVEP and ERP would be meaningful in understanding this underlying mechanism. In this work, the traditional average method was applied to extract the ERP directly, following the stimulus of a working memory task, while a technique named steady-state probe topography was utilized to estimate the SSVEP under the simultaneous stimulus of an 8.3-Hz flicker and a working memory task; a comparison between the ERP and SSVEP was completed. The results show that the ERP can modulate the SSVEP amplitude, and for regions where both SSVEP and ERP are strong, the modulation depth is large. PMID:19960240
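
    The SSVEP amplitude at the 8.3 Hz flicker frequency can be estimated from the Fourier spectrum of an EEG epoch. The sampling rate and single-channel framing below are assumptions; the steady-state probe topography technique used in the study is more elaborate than this sketch.

    ```python
    # Sketch: single-sided amplitude at the stimulation frequency from one EEG epoch.
    import numpy as np

    def ssvep_amplitude(eeg: np.ndarray, srate: float = 250.0,
                        stim_freq: float = 8.3) -> float:
        freqs = np.fft.rfftfreq(eeg.size, d=1.0 / srate)
        spectrum = np.abs(np.fft.rfft(eeg)) * 2.0 / eeg.size   # single-sided amplitude
        return spectrum[np.argmin(np.abs(freqs - stim_freq))]  # bin nearest 8.3 Hz
    ```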

  7. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    PubMed Central

    Kamke, Marc R.; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945

  8. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli.

    PubMed

    Kamke, Marc R; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  9. Task-specific reorganization of the auditory cortex in deaf humans

    PubMed Central

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-01

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults in a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964

  10. Task-specific reorganization of the auditory cortex in deaf humans.

    PubMed

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  11. The role of lightness, hue and saturation in feature-based visual attention.

    PubMed

    Stuart, Geoffrey W; Barsdell, Wendy N; Day, Ross H

    2014-03-01

    Visual attention is used to select part of the visual array for higher-level processing. Visual selection can be based on spatial location, but it has also been demonstrated that multiple locations can be selected simultaneously on the basis of a visual feature such as color. One task that has been used to demonstrate feature-based attention is the judgement of the symmetry of simple four-color displays. In a typical task, when symmetry is violated, four squares on either side of the display do not match. When four colors are involved, symmetry judgements are made more quickly than when only two of the four colors are involved. This indicates that symmetry judgements are made one color at a time. Previous studies have confounded lightness, hue, and saturation when defining the colors used in such displays. In three experiments, symmetry was defined by lightness alone, lightness plus hue, or by hue or saturation alone, with lightness levels randomised. The difference between judgements of two- and four-color asymmetry was maintained, showing that hue and saturation can provide the sole basis for feature-based attentional selection. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.

  12. Vision and visual navigation in nocturnal insects.

    PubMed

    Warrant, Eric; Dacke, Marie

    2011-01-01

    With their highly sensitive visual systems, nocturnal insects have evolved a remarkable capacity to discriminate colors, orient themselves using faint celestial cues, fly unimpeded through a complicated habitat, and navigate to and from a nest using learned visual landmarks. Even though the compound eyes of nocturnal insects are significantly more sensitive to light than those of their closely related diurnal relatives, their photoreceptors absorb photons at very low rates in dim light, even during demanding nocturnal visual tasks. To explain this apparent paradox, it is hypothesized that the necessary bridge between retinal signaling and visual behavior is a neural strategy of spatial and temporal summation at a higher level in the visual system. Exactly where in the visual system this summation takes place, and the nature of the neural circuitry that is involved, is currently unknown but provides a promising avenue for future research.

  13. Direct and indirect effects of attention and visual function on gait impairment in Parkinson's disease: influence of task and turning.

    PubMed

    Stuart, Samuel; Galna, Brook; Delicato, Louise S; Lord, Sue; Rochester, Lynn

    2017-07-01

    Gait impairment is a core feature of Parkinson's disease (PD) and has been linked to cognitive and visual deficits, but interactions between these features are poorly understood. Monitoring saccades allows investigation of real-time cognitive and visual processes and their impact on gait when walking. This study explored: (i) saccade frequency when walking under different attentional manipulations of turning and dual-task; and (ii) direct and indirect relationships between saccades, gait impairment, vision and attention. Saccade frequency (number of fast eye movements per second) was measured during gait in 60 PD and 40 age-matched control participants using a mobile eye-tracker. Saccade frequency was significantly reduced in PD compared to controls during all conditions. However, saccade frequency increased with a turn and decreased under dual-task for both groups. Poorer attention directly related to saccade frequency, visual function and gait impairment in PD, but not controls. Saccade frequency did not directly relate to gait in PD, but did in controls. Instead, saccade frequency and visual function deficit indirectly impacted gait impairment in PD, which was underpinned by their relationship with attention. In conclusion, our results suggest a vital role for attention with direct and indirect influences on gait impairment in PD. Attention directly impacted saccade frequency, visual function and gait impairment in PD, with implications for falls. It also underpinned the indirect impact of visual and saccadic impairment on gait. Attention therefore represents a key therapeutic target that should be considered in future research. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
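
    A minimal sketch of how a saccade-frequency measure of this kind could be computed from mobile eye-tracker samples, assuming a simple velocity-threshold (I-VT) detection rule; the 30 deg/s threshold and the synthetic data are illustrative assumptions, not parameters reported in the study.

```python
# Hedged sketch, not the study's pipeline: estimate saccade frequency (saccades per
# second) from gaze samples with a simple velocity-threshold (I-VT) rule.
import numpy as np

def saccade_frequency(t, x, y, vel_threshold=30.0):
    """t: timestamps (s); x, y: gaze position in degrees; threshold in deg/s (assumed)."""
    velocity = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)   # deg/s between samples
    fast = velocity > vel_threshold
    onsets = int(fast[0]) + np.sum(fast[1:] & ~fast[:-1])      # below-to-above transitions
    return onsets / (t[-1] - t[0])

# Toy usage with synthetic 60 Hz data containing occasional 5-degree jumps
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 60)
x = np.cumsum(rng.choice([0.0, 5.0], size=t.size, p=[0.98, 0.02]))
y = np.zeros_like(t)
print(f"{saccade_frequency(t, x, y):.2f} saccades/s")
```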

  14. Microcontroller based fibre-optic visual presentation system for multisensory neuroimaging.

    PubMed

    Kurniawan, Veldri; Klemen, Jane; Chambers, Christopher D

    2011-10-30

    Presenting visual stimuli in physical 3D space during fMRI experiments carries significant technical challenges. Certain types of multisensory visuotactile experiments and visuomotor tasks require presentation of visual stimuli in peripersonal space, which cannot be accommodated by ordinary projection screens or binocular goggles. However, light points produced by a group of LEDs can be transmitted through fibre-optic cables and positioned anywhere inside the MRI scanner. Here we describe the design and implementation of a microcontroller-based programmable digital device for controlling fibre-optically transmitted LED lights from a PC. The main feature of this device is the ability to independently control the colour, brightness, and timing of each LED. Moreover, the device was designed in a modular and extensible way, which enables easy adaptation for various experimental paradigms. The device was tested and validated in three fMRI experiments involving basic visual perception, a simple colour discrimination task, and a blocked multisensory visuo-tactile task. The results revealed significant lateralized activation in occipital cortex of all participants, a reliable response in ventral occipital areas to colour stimuli elicited by the device, and strong activations in multisensory brain regions in the multisensory task. Overall, these findings confirm the suitability of this device for presenting complex fibre-optic visual and cross-modal stimuli inside the scanner. Copyright © 2011 Elsevier B.V. All rights reserved.
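
    A purely hypothetical sketch of the PC side of such a setup: sending per-LED colour, brightness and timing commands to a microcontroller over a serial link (here with pyserial). The command format, port name and baud rate below are invented for illustration; the paper does not describe its actual protocol.

```python
# Hypothetical PC-side controller sketch; the "LED ..." command syntax is assumed,
# not taken from the paper's firmware.
import serial  # pip install pyserial
import time

def send_led_command(port, led_index, r, g, b, brightness, duration_ms):
    # One ASCII line per command: "LED <index> <r> <g> <b> <brightness> <duration_ms>\n"
    cmd = f"LED {led_index} {r} {g} {b} {brightness} {duration_ms}\n"
    port.write(cmd.encode("ascii"))

if __name__ == "__main__":
    with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as port:
        # Flash LED 3 red at half brightness for 500 ms (all values illustrative)
        send_led_command(port, led_index=3, r=255, g=0, b=0,
                         brightness=128, duration_ms=500)
        time.sleep(0.5)
```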

  15. Anatomical Coupling between Distinct Metacognitive Systems for Memory and Visual Perception

    PubMed Central

    McCurdy, Li Yan; Maniscalco, Brian; Metcalfe, Janet; Liu, Ka Yuet; de Lange, Floris P.; Lau, Hakwan

    2015-01-01

    A recent study found that, across individuals, gray matter volume in the frontal polar region was correlated with visual metacognition capacity (i.e., how well one’s confidence ratings distinguish between correct and incorrect judgments). A question arises as to whether the putative metacognitive mechanisms in this region are also used in other metacognitive tasks involving, for example, memory. A novel psychophysical measure allowed us to assess metacognitive efficiency separately in a visual and a memory task, while taking variations in basic task performance capacity into account. We found that, across individuals, metacognitive efficiencies positively correlated between the two tasks. However, voxel-based morphometry analysis revealed distinct brain structures for the two kinds of metacognition. Replicating a previous finding, variation in visual metacognitive efficiency was correlated with volume of frontal polar regions. However, variation in memory metacognitive efficiency was correlated with volume of the precuneus. There was also a weak correlation between visual metacognitive efficiency and precuneus volume, which may account for the behavioral correlation between visual and memory metacognition (i.e., the precuneus may contain common mechanisms for both types of metacognition). However, we also found that gray matter volumes of the frontal polar and precuneus regions themselves correlated across individuals, and a formal model comparison analysis suggested that this structural covariation was sufficient to account for the behavioral correlation of metacognition in the two tasks. These results highlight the importance of the precuneus in higher-order memory processing and suggest that there may be functionally distinct metacognitive systems in the human brain. PMID:23365229

  16. Efficient estimation of ideal-observer performance in classification tasks involving high-dimensional complex backgrounds

    PubMed Central

    Park, Subok; Clarkson, Eric

    2010-01-01

    The Bayesian ideal observer is optimal among all observers and sets an absolute upper bound for the performance of any observer in classification tasks [Van Trees, Detection, Estimation, and Modulation Theory, Part I (Academic, 1968)]. Therefore, the ideal observer should be used for objective image quality assessment whenever possible. However, computation of ideal-observer performance is difficult in practice because this observer requires a full description of the unknown statistical properties of high-dimensional, complex data arising in real-life problems. Previously, Markov-chain Monte Carlo (MCMC) methods were developed by Kupinski et al. [J. Opt. Soc. Am. A 20, 430 (2003)] and by Park et al. [J. Opt. Soc. Am. A 24, B136 (2007), and IEEE Trans. Med. Imaging 28, 657 (2009)] to estimate the performance of the ideal observer and the channelized ideal observer (CIO), respectively, in classification tasks involving non-Gaussian random backgrounds. However, both algorithms had the disadvantage of long computation times. We propose a fast MCMC method for real-time estimation of the likelihood ratio for the CIO. Our simulation results show that our method has the potential to speed up the estimation of ideal-observer performance in tasks involving complex data when efficient channels are used for the CIO. PMID:19884916
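
    To make the estimated quantity concrete, here is a toy Monte Carlo sketch of a channelized likelihood ratio obtained by marginalizing over sampled backgrounds, assuming additive Gaussian noise in channel space. It is not the authors' fast MCMC algorithm; the channel matrix, signal profile, and background model below are arbitrary stand-ins.

```python
# Toy Monte Carlo estimate of the channelized likelihood ratio (NOT the paper's MCMC).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

n_pix, n_ch = 64, 5
T = rng.normal(size=(n_ch, n_pix))        # channel matrix (rows = channels), assumed
signal = np.zeros(n_pix)
signal[n_pix // 2] = 2.0                  # known signal profile, assumed
noise_cov = np.eye(n_ch)                  # channel-space noise covariance, assumed

def sample_background():
    # Stand-in for a structured, non-Gaussian background model
    return np.abs(rng.normal(1.0, 0.5, size=n_pix))

def channelized_lr(v, n_samples=2000):
    """Monte Carlo estimate of p(v | signal present) / p(v | signal absent)."""
    num = den = 0.0
    for _ in range(n_samples):
        b = T @ sample_background()                       # channelized background
        num += multivariate_normal.pdf(v, mean=b + T @ signal, cov=noise_cov)
        den += multivariate_normal.pdf(v, mean=b, cov=noise_cov)
    return num / den

# Channel outputs from one simulated signal-present image
v = T @ (sample_background() + signal) + rng.multivariate_normal(np.zeros(n_ch), noise_cov)
print(channelized_lr(v))
```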

  17. The Relationship between Task-Induced Involvement Load and Learning New Words from Context

    ERIC Educational Resources Information Center

    Nassaji, Hossein; Hu, Hsueh-chao Marcella

    2012-01-01

    This study investigated the relationship between task-induced involvement load and ESL learners' inferencing and learning word meanings from context. Thirty-two ESL learners were randomly assigned to one of three groups, with each group receiving a different version of a text that was assumed to differ from one another in terms of the degree of…

  18. Differential engagement of attention and visual working memory in the representation and evaluation of the number of relevant targets and their spatial relations: Evidence from the N2pc and SPCN.

    PubMed

    Maheux, Manon; Jolicœur, Pierre

    2017-04-01

    We examined the role of attention and visual working memory in the evaluation of the number of target stimuli as well as their relative spatial position using the N2pc and the SPCN. Participants performed two tasks: a simple counting task in which they had to determine if a visual display contained one or two coloured items among grey fillers and one in which they had to identify a specific relation between two coloured items. The same stimuli were used for both tasks. Each task was designed to permit an easier evaluation of either the same-coloured or differently-coloured stimuli. We predicted a greater involvement of attention and visual working memory for more difficult stimulus-task pairings. The results confirmed these predictions and suggest that visuospatial configurations that require more time to evaluate induce a greater (and presumably longer) involvement of attention and visual working memory. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Visual Learning Alters the Spontaneous Activity of the Resting Human Brain: An fNIRS Study

    PubMed Central

    Niu, Haijing; Li, Hao; Sun, Li; Su, Yongming; Huang, Jing; Song, Yan

    2014-01-01

    Resting-state functional connectivity (RSFC) has been widely used to investigate spontaneous brain activity that exhibits correlated fluctuations. RSFC has been found to be changed along the developmental course and after learning. Here, we investigated whether and how visual learning modified the resting oxygenated hemoglobin (HbO) functional brain connectivity by using functional near-infrared spectroscopy (fNIRS). We demonstrate that after five days of training on an orientation discrimination task constrained to the right visual field, resting HbO functional connectivity and directed mutual interaction between high-level visual cortex and frontal/central areas involved in the top-down control were significantly modified. Moreover, these changes, which correlated with the degree of perceptual learning, were not limited to the trained left visual cortex. We conclude that the resting oxygenated hemoglobin functional connectivity could be used as a predictor of visual learning, supporting the involvement of high-level visual cortex and the involvement of frontal/central cortex during visual perceptual learning. PMID:25243168

  20. Visual learning alters the spontaneous activity of the resting human brain: an fNIRS study.

    PubMed

    Niu, Haijing; Li, Hao; Sun, Li; Su, Yongming; Huang, Jing; Song, Yan

    2014-01-01

    Resting-state functional connectivity (RSFC) has been widely used to investigate spontaneous brain activity that exhibits correlated fluctuations. RSFC has been found to be changed along the developmental course and after learning. Here, we investigated whether and how visual learning modified the resting oxygenated hemoglobin (HbO) functional brain connectivity by using functional near-infrared spectroscopy (fNIRS). We demonstrate that after five days of training on an orientation discrimination task constrained to the right visual field, resting HbO functional connectivity and directed mutual interaction between high-level visual cortex and frontal/central areas involved in the top-down control were significantly modified. Moreover, these changes, which correlated with the degree of perceptual learning, were not limited to the trained left visual cortex. We conclude that the resting oxygenated hemoglobin functional connectivity could be used as a predictor of visual learning, supporting the involvement of high-level visual cortex and the involvement of frontal/central cortex during visual perceptual learning.

  1. Cerebral Correlates of Emotional and Action Appraisals During Visual Processing of Emotional Scenes Depending on Spatial Frequency: A Pilot Study.

    PubMed

    Campagne, Aurélie; Fradcourt, Benoit; Pichat, Cédric; Baciu, Monica; Kauffmann, Louise; Peyrin, Carole

    2016-01-01

    Visual processing of emotional stimuli critically depends on the type of cognitive appraisal involved. The present fMRI pilot study aimed to investigate the cerebral correlates involved in the visual processing of emotional scenes in two tasks, one emotional, based on the appraisal of personal emotional experience, and the other motivational, based on the appraisal of the tendency to action. Given that the use of spatial frequency information is relatively flexible during the visual processing of emotional stimuli depending on the task's demands, we also explored the effect of the type of spatial frequency in visual stimuli in each task by using emotional scenes filtered in low spatial frequency (LSF) and high spatial frequency (HSF). Activation was observed in the visual areas of the fusiform gyrus for all emotional scenes in both tasks, and in the amygdala for unpleasant scenes only. The motivational task induced additional activation in frontal motor-related areas (e.g. premotor cortex, SMA) and parietal regions (e.g. superior and inferior parietal lobules). Parietal regions were recruited particularly during the motivational appraisal of approach in response to pleasant scenes. These frontal and parietal activations, respectively, suggest that motor and navigation processes play a specific role in the identification of the tendency to action in the motivational task. Furthermore, activity observed in the motivational task, in response to both pleasant and unpleasant scenes, was significantly greater for HSF than for LSF scenes, suggesting that the tendency to action is driven mainly by the detailed information contained in scenes. Results for the emotional task suggest that spatial frequencies play only a small role in the evaluation of unpleasant and pleasant emotions. Our preliminary study revealed a partial distinction between visual processing of emotional scenes during identification of the tendency to action, and during identification of personal emotional experience.

  2. Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion

    PubMed Central

    Fajen, Brett R.; Matthis, Jonathan S.

    2013-01-01

    Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983

  3. An Empirical Study on Using Visual Embellishments in Visualization.

    PubMed

    Borgo, R; Abdul-Rahman, A; Mohamed, F; Grant, P W; Reppa, I; Floridi, L; Chen, Min

    2012-12-01

    In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, the benefits of using rhetorical illustrations or embellishments in visualization have so far been inconclusive. In this work, we report an empirical study to evaluate hypotheses that visual embellishments may aid memorization, visual search and concept comprehension. One major departure from related experiments in the literature is that we make use of a dual-task methodology in our experiment. This design offers an abstraction of typical situations where viewers do not have their full attention focused on visualization (e.g., in meetings and lectures). The secondary task introduces "divided attention", and makes the effects of visual embellishments more observable. In addition, it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to the benefits of visual embellishments in helping participants grasp key concepts from visualization.

  4. "So What if My Students Misbehave?" Addressing Misbehavior in a Task-Involving Motivational Climate

    ERIC Educational Resources Information Center

    Model, Eric D.; Todorovich, John R.; Largo-Wight, Erin

    2005-01-01

    This article describes factors that teachers can use to create a task-involving motivational climate, discusses behavioral practices for increasing student compliance, and provides specific recommendations for addressing behavior concerns in the physical education setting. A good teaching philosophy built upon established principles is the best…

  5. Workflows and individual differences during visually guided routine tasks in a road traffic management control room.

    PubMed

    Starke, Sandra D; Baber, Chris; Cooke, Neil J; Howes, Andrew

    2017-05-01

    Road traffic control rooms rely on human operators to monitor and interact with information presented on multiple displays. Past studies have found inconsistent use of available visual information sources in such settings across different domains. In this study, we aimed to broaden the understanding of observer behaviour in control rooms by analysing a case study in road traffic control. We conducted a field study in a live road traffic control room where five operators responded to incidents while wearing a mobile eye tracker. Using qualitative and quantitative approaches, we investigated the operators' workflow using ergonomics methods and quantified visual information sampling. We found that individuals showed differing preferences for viewing modalities and weighting of task components, with a strong coupling between eye and head movement. For the quantitative analysis of the eye tracking data, we propose a number of metrics which may prove useful to compare visual sampling behaviour across domains in future. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Visual memory and sustained attention impairment in youths with autism spectrum disorders.

    PubMed

    Chien, Y-L; Gau, S S-F; Shang, C-Y; Chiu, Y-N; Tsai, W-C; Wu, Y-Y

    2015-08-01

    An uneven neurocognitive profile is a hallmark of autism spectrum disorder (ASD). Studies focusing on visual memory performance in ASD have shown controversial results. We investigated visual memory and sustained attention in youths with ASD and typically developing (TD) youths. We recruited 143 pairs of youths with ASD (males 93.7%; mean age 13.1, s.d. 3.5 years) and age- and sex-matched TD youths. The ASD group consisted of 67 youths with autistic disorder (autism) and 76 with Asperger's disorder (AS) based on the DSM-IV criteria. They were assessed using the Cambridge Neuropsychological Test Automated Battery, with visual memory tasks [spatial recognition memory (SRM), delayed matching to sample (DMS), paired associates learning (PAL)] and a sustained attention task (rapid visual information processing; RVP). Youths with ASD performed significantly worse than TD youths on most of the tasks; the significance disappeared in the superior intelligence quotient (IQ) subgroup. The response latency on the tasks did not differ between the ASD and TD groups. Age had significant main effects on SRM, DMS, RVP and some of the PAL tasks and had an interaction with diagnosis in DMS and RVP performance. There was no significant difference between autism and AS on visual tasks. Our findings implied that youths with ASD had a wide range of visual memory and sustained attention impairment that was moderated by age and IQ, which supports temporal and frontal lobe dysfunction in ASD. The lack of difference between autism and AS implies that visual memory and sustained attention cannot distinguish these two ASD subtypes, which supports DSM-5 ASD criteria.

  7. Socio-cognitive profiles for visual learning in young and older adults

    PubMed Central

    Christian, Julie; Goldstone, Aimee; Kuai, Shu-Guang; Chin, Wynne; Abrams, Dominic; Kourtzi, Zoe

    2015-01-01

    It is common wisdom that practice makes perfect; but why do some adults learn better than others? Here, we investigate individuals’ cognitive and social profiles to test which variables account for variability in learning ability across the lifespan. In particular, we focused on visual learning using tasks that test the ability to inhibit distractors and select task-relevant features. We tested the ability of young and older adults to improve through training in the discrimination of visual global forms embedded in a cluttered background. Further, we used a battery of cognitive tasks and psycho-social measures to examine which of these variables predict training-induced improvement in perceptual tasks and may account for individual variability in learning ability. Using partial least squares regression modeling, we show that visual learning is influenced by cognitive (i.e., cognitive inhibition, attention) and social (strategic and deep learning) factors rather than an individual’s age alone. Further, our results show that independent of age, strong learners rely on cognitive factors such as attention, while weaker learners use more general cognitive strategies. Our findings suggest an important role for higher-cognitive circuits involving executive functions that contribute to our ability to improve in perceptual tasks after training across the lifespan. PMID:26113820

  8. Visual search in a forced-choice paradigm

    NASA Technical Reports Server (NTRS)

    Holmgren, J. E.

    1974-01-01

    The processing of visual information was investigated in the context of two visual search tasks. The first was a forced-choice task in which one of two alternative letters appeared in a visual display of from one to five letters. The second task included trials on which neither of the two alternatives was present in the display. Search rates were estimated from the slopes of best linear fits to response latencies plotted as a function of the number of items in the visual display. These rates were found to be much slower than those estimated in yes-no search tasks. This result was interpreted as indicating that the processes underlying visual search in yes-no and forced-choice tasks are not the same.
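
    The slope-based estimate described above amounts to a simple linear fit of mean latency against display size; the sketch below illustrates it with made-up numbers (not data from the study).

```python
# Illustrative search-rate estimate: slope of latency vs. display size (values invented).
import numpy as np

display_size = np.array([1, 2, 3, 4, 5])
mean_rt_ms = np.array([520.0, 565.0, 610.0, 650.0, 700.0])   # hypothetical mean latencies

slope_ms_per_item, intercept_ms = np.polyfit(display_size, mean_rt_ms, deg=1)
print(f"search rate ~ {slope_ms_per_item:.1f} ms/item, intercept ~ {intercept_ms:.0f} ms")
```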

  9. How spatial abilities and dynamic visualizations interplay when learning functional anatomy with 3D anatomical models.

    PubMed

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material presentation formats, spatial abilities, and anatomical tasks. First, to understand the cognitive challenges a novice learner would be faced with when first exposed to 3D anatomical content, a six-step cognitive task analysis was developed. Following this, an experimental study was conducted to explore how presentation formats (dynamic vs. static visualizations) support learning of functional anatomy, and affect subsequent anatomical tasks derived from the cognitive task analysis. A second aim was to investigate the interplay between spatial abilities (spatial visualization and spatial relation) and presentation formats when the functional anatomy of a 3D scapula and the associated shoulder flexion movement are learned. Findings showed no main effect of the presentation formats on performances, but revealed the predictive influence of spatial visualization and spatial relation abilities on performance. However, an interesting interaction between presentation formats and spatial relation ability for a specific anatomical task was found. This result highlighted the influence of presentation formats when spatial abilities are involved as well as the differentiated influence of spatial abilities on anatomical tasks. © 2015 American Association of Anatomists.

  10. The cognitive science of visual-spatial displays: implications for design.

    PubMed

    Hegarty, Mary

    2011-07-01

    This paper reviews cognitive science perspectives on the design of visual-spatial displays and introduces the other papers in this topic. It begins by classifying different types of visual-spatial displays, followed by a discussion of ways in which visual-spatial displays augment cognition and an overview of the perceptual and cognitive processes involved in using displays. The paper then argues for the importance of cognitive science methods to the design of visual displays and reviews some of the main principles of display design that have emerged from these approaches to date. Cognitive scientists have had good success in characterizing the performance of well-defined tasks with relatively simple visual displays, but many challenges remain in understanding the use of complex displays for ill-defined tasks. Current research exemplified by the papers in this topic extends empirical approaches to new displays and domains, informs the development of general principles of graphic design, and addresses current challenges in display design raised by the recent explosion in availability of complex data sets and new technologies for visualizing and interacting with these data. Copyright © 2011 Cognitive Science Society, Inc.

  11. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment.

    PubMed

    Rosemann, Stephanie; Thiel, Christiane M

    2018-07-15

    Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss, leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition. Our results indicate that even mild to moderate hearing loss impacts audio-visual speech processing.

  12. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    PubMed

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as the active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line with the 'auditory-visual view', according to which the recruitment of visual cortices optimizes auditory-only task performance.

  13. Visual cognition in amnesic H.M.: selective deficits on the What's-Wrong-Here and Hidden-Figure tasks.

    PubMed

    MacKay, Donald G; James, Lori E

    2009-10-01

    Two experiments compared the visual cognition performance of amnesic H.M. and memory-normal controls matched for age, background, intelligence, and education. In Experiment 1 H.M. exhibited deficits relative to the controls in detecting "erroneous objects" in complex visual scenes--for example, a bird flying inside a fishbowl. In Experiment 2 H.M. exhibited deficits relative to the controls in standard Hidden-Figure tasks when detecting unfamiliar targets but not when detecting familiar targets--for example, circles, squares, and right-angle triangles. H.M.'s visual cognition deficits were not due to his well-known problems in explicit learning and recall, inability to comprehend or remember the instructions, general slowness, motoric difficulties, low motivation, low IQ relative to the controls, or working-memory limitations. Parallels between H.M.'s selective deficits in visual cognition, language, and memory are discussed. These parallels contradict the standard "systems theory" account of H.M.'s condition but comport with the hypothesis that H.M. has difficulty representing unfamiliar but not familiar information in visual cognition, language, and memory. Implications of our results are discussed for binding theory and the ongoing debate over what counts as "memory" versus "not-memory."

  14. Neural Correlates of a Perspective-taking Task in a Realistic Three-dimensional Environment: A Pilot Functional Magnetic Resonance Imaging Study.

    PubMed

    Agarwal, Sri Mahavir; Shivakumar, Venkataram; Kalmady, Sunil V; Danivas, Vijay; Amaresha, Anekal C; Bose, Anushree; Narayanaswamy, Janardhanan C; Amorim, Michel-Ange; Venkatasubramanian, Ganesan

    2017-08-31

    Perspective-taking ability is an essential spatial faculty that is of much interest in both health and neuropsychiatric disorders. There are limited data on the neural correlates of perspective taking in the context of a realistic three-dimensional environment. We report the results of a pilot study exploring the same in eight healthy volunteers. Subjects underwent two runs of an experiment in a 3 Tesla magnetic resonance imaging (MRI) scanner, involving alternating blocks of a first-person perspective based allocentric object location memory task (OLMT), a third-person perspective based egocentric visual perspective taking task (VPRT), and a table task (TT) that served as a control. Difference in blood oxygen level dependent response during task performance was analyzed using Statistical Parametric Mapping software, version 12. Activations were considered significant if they survived family-wise error correction at the cluster level using a height threshold of p < 0.001, uncorrected at the voxel level. A significant difference in accuracy and reaction time based on task type was found. Subjects had significantly lower accuracy in VPRT compared to TT. Accuracy in the two active tasks was not significantly different. Subjects took significantly longer in the VPRT in comparison to TT. Reaction time in the two active tasks was not significantly different. Functional MRI revealed significantly higher activation in the bilateral visual cortex and left temporoparietal junction (TPJ) in VPRT compared to OLMT. The results underscore the importance of TPJ in egocentric manipulation in healthy controls in the context of reality-based spatial tasks.

  15. A magnetoencephalography study of visual processing of pain anticipation.

    PubMed

    Machado, Andre G; Gopalakrishnan, Raghavan; Plow, Ela B; Burgess, Richard C; Mosher, John C

    2014-07-15

    Anticipating pain is important for avoiding injury; however, in chronic pain patients, anticipatory behavior can become maladaptive, leading to sensitization and limiting function. Knowledge of networks involved in pain anticipation and conditioning over time could help devise novel, better-targeted therapies. Using magnetoencephalography, we evaluated the neural processing of pain anticipation in 10 healthy subjects. Anticipatory cortical activity elicited by consecutive visual cues signifying an imminent painful stimulus was compared with activity elicited by cues signifying a nonpainful stimulus or no stimulus. We found that the neural processing of visually evoked pain anticipation involves the primary visual cortex along with cingulate and frontal regions. Visual cortex could quickly and independently encode and discriminate between visual cues associated with pain anticipation and no pain during preconscious phases following object presentation. When evaluating the effect of task repetition on participating cortical areas, we found that activity of prefrontal and cingulate regions was mostly prominent early on when subjects were still naive to a cue's contextual meaning. Visual cortical activity was significant throughout later phases. Although visual cortex may precisely and time-efficiently decode cues anticipating pain or no pain, prefrontal areas establish the context associated with each cue. These findings have important implications for processes involved in pain anticipation and maladaptive pain conditioning. Copyright © 2014 the American Physiological Society.

  16. Understanding and Visualizing Multitasking and Task Switching Activities: A Time Motion Study to Capture Nursing Workflow

    PubMed Central

    Yen, Po-Yin; Kelley, Marjorie; Lopetegui, Marcelo; Rosado, Amber L.; Migliore, Elaina M.; Chipps, Esther M.; Buck, Jacalyn

    2016-01-01

    A fundamental understanding of multitasking within nursing workflow is important in today’s dynamic and complex healthcare environment. We conducted a time motion study to understand nursing workflow, specifically multitasking and task switching activities. We used TimeCaT, a comprehensive electronic time capture tool, to capture observational data. We established inter-observer reliability prior to data collection. We completed 56 hours of observation of 10 registered nurses. We found, on average, nurses had 124 communications and 208 hands-on tasks per 4-hour block of time. They multitasked (having communication and hands-on tasks simultaneously) 131 times, representing 39.48% of all times; the total multitasking duration ranged from 14.6 minutes to 109 minutes, 44.98 minutes (18.63%) on average. We also reviewed workflow visualization to uncover the multitasking events. Our study design and methods provide a practical and reliable approach to conducting and analyzing time motion studies from both quantitative and qualitative perspectives. PMID:28269924

  17. Understanding and Visualizing Multitasking and Task Switching Activities: A Time Motion Study to Capture Nursing Workflow.

    PubMed

    Yen, Po-Yin; Kelley, Marjorie; Lopetegui, Marcelo; Rosado, Amber L; Migliore, Elaina M; Chipps, Esther M; Buck, Jacalyn

    2016-01-01

    A fundamental understanding of multitasking within nursing workflow is important in today's dynamic and complex healthcare environment. We conducted a time motion study to understand nursing workflow, specifically multitasking and task switching activities. We used TimeCaT, a comprehensive electronic time capture tool, to capture observational data. We established inter-observer reliability prior to data collection. We completed 56 hours of observation of 10 registered nurses. We found, on average, nurses had 124 communications and 208 hands-on tasks per 4-hour block of time. They multitasked (having communication and hands-on tasks simultaneously) 131 times, representing 39.48% of all times; the total multitasking duration ranged from 14.6 minutes to 109 minutes, 44.98 minutes (18.63%) on average. We also reviewed workflow visualization to uncover the multitasking events. Our study design and methods provide a practical and reliable approach to conducting and analyzing time motion studies from both quantitative and qualitative perspectives.
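
    As a quick arithmetic check of the figures quoted above (a 4-hour observation block is 240 minutes), assuming the percentage is taken over the full block:

```python
# Values copied from the abstract; the reported 18.63% presumably averages per-nurse
# percentages, so this whole-block figure is only approximately equal.
block_minutes = 4 * 60
avg_multitask_minutes = 44.98
print(f"{avg_multitask_minutes / block_minutes:.2%} of a 4-hour block")  # ~18.7%
```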

  18. Written object naming, spelling to dictation, and immediate copying: Different tasks, different pathways?

    PubMed

    Bonin, Patrick; Méot, Alain; Lagarrigue, Aurélie; Roux, Sébastien

    2015-01-01

    We report an investigation of cross-task comparisons of handwritten latencies in written object naming, spelling to dictation, and immediate copying. In three separate sessions, adults had to write down a list of concrete nouns from their corresponding pictures (written naming), from their spoken (spelling to dictation) and from their visual presentation (immediate copying). Linear mixed models without random slopes were performed on the latencies in order to study and compare within-task fixed effects. By-participants random slopes were then included to investigate individual differences within and across tasks. Overall, the findings suggest that written naming, spelling to dictation, and copying all involve a lexical pathway, but that written naming relies on this pathway more than the other two tasks do. Only spelling to dictation strongly involves a nonlexical pathway. Finally, the analyses performed at the level of participants indicate that, depending on the type of task, the slower participants are more or less influenced by certain psycholinguistic variables.
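
    A hedged sketch of the modelling strategy described above: fitting a linear mixed model on handwritten latencies, first with random intercepts only and then adding by-participant random slopes to probe individual differences. The file name, column names and the single predictor ("word_frequency") are placeholders, not the authors' variables.

```python
# Sketch of the mixed-model analysis, assuming a long-format data file with columns
# latency, word_frequency, participant, and task (all names are placeholders).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("latencies.csv")  # hypothetical file

# Fixed effects with by-participant random intercepts only
m1 = smf.mixedlm("latency ~ word_frequency * task", df, groups=df["participant"]).fit()

# Adding a by-participant random slope for the predictor to study individual differences
m2 = smf.mixedlm("latency ~ word_frequency * task", df,
                 groups=df["participant"], re_formula="~word_frequency").fit()
print(m2.summary())
```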

  19. The role of visual and spatial working memory in forming mental models derived from survey and route descriptions.

    PubMed

    Meneghetti, Chiara; Labate, Enia; Pazzaglia, Francesca; Hamilton, Colin; Gyselinck, Valérie

    2017-05-01

    This study examines the involvement of spatial and visual working memory (WM) in the construction of flexible spatial models derived from survey and route descriptions. Sixty young adults listened to environment descriptions, 30 from a survey perspective and the other 30 from a route perspective, while they performed spatial (spatial tapping [ST]) and visual (dynamic visual noise [DVN]) secondary tasks - believed to overload the spatial and visual WM components, respectively - or no secondary task (control, C). Their mental representations of the environment were tested by free recall and a verification test with both route and survey statements. Results showed that, for both recall tasks, accuracy was worse in the ST than in the C or DVN conditions. In the verification test, both ST and DVN reduced accuracy for sentences testing spatial relations from the opposite perspective to the one learnt, relative to sentences from the learnt perspective; only ST produced stronger interference than the C condition for sentences from the opposite perspective. Overall, these findings indicate that both visual and spatial WM, and especially the latter, are involved in the construction of perspective-flexible spatial models. © 2016 The British Psychological Society.

  20. The sensory strength of voluntary visual imagery predicts visual working memory capacity.

    PubMed

    Keogh, Rebecca; Pearson, Joel

    2014-10-09

    How much we can actively hold in mind is severely limited and differs greatly from one person to the next. Why some individuals have greater capacities than others is largely unknown. Here, we investigated why such large variations in visual working memory (VWM) capacity might occur, by examining the relationship between visual working memory and visual mental imagery. To assess visual working memory capacity participants were required to remember the orientation of a number of Gabor patches and make subsequent judgments about relative changes in orientation. The sensory strength of voluntary imagery was measured using a previously documented binocular rivalry paradigm. Participants with greater imagery strength also had greater visual working memory capacity. However, they were no better on a verbal number working memory task. Introducing a uniform luminous background during the retention interval of the visual working memory task reduced memory capacity, but only for those with strong imagery. Likewise, for the good imagers increasing background luminance during imagery generation reduced its effect on subsequent binocular rivalry. Luminance increases did not affect any of the subgroups on the verbal number working memory task. Together, these results suggest that luminance was disrupting sensory mechanisms common to both visual working memory and imagery, and not a general working memory system. The disruptive selectivity of background luminance suggests that good imagers, unlike moderate or poor imagers, may use imagery as a mnemonic strategy to perform the visual working memory task. © 2014 ARVO.
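
    For concreteness, a small sketch of the kind of stimulus used in the working-memory task above: an oriented Gabor patch, i.e. a sinusoidal grating under a Gaussian envelope. All parameter values are arbitrary examples rather than the study's settings.

```python
# Illustrative Gabor-patch generator (parameter values are arbitrary examples).
import numpy as np

def gabor(size=128, sigma=20.0, wavelength=16.0, theta_deg=45.0, phase=0.0):
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half].astype(float)
    theta = np.deg2rad(theta_deg)
    x_t = x * np.cos(theta) + y * np.sin(theta)           # rotate coordinates
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))    # circular Gaussian window
    carrier = np.cos(2 * np.pi * x_t / wavelength + phase)
    return envelope * carrier                             # values roughly in [-1, 1]

patch = gabor(theta_deg=30.0)
print(patch.shape, patch.min(), patch.max())
```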

  1. Infant visual attention and object recognition.

    PubMed

    Reynolds, Greg D

    2015-05-15

    This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Infant Visual Attention and Object Recognition

    PubMed Central

    Reynolds, Greg D.

    2015-01-01

    This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. PMID:25596333

  3. Surround-Masking Affects Visual Estimation Ability

    PubMed Central

    Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.

    2017-01-01

    Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved in the removal of unessential visual input contribute functionally to number acuity. The visual estimation judgments of typically developing adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid-level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround-masking results may help explain the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845

  4. Dynamic visual noise reduces confidence in short-term memory for visual information.

    PubMed

    Kemps, Eva; Andrade, Jackie

    2012-05-01

    Previous research has shown effects of the visual interference technique, dynamic visual noise (DVN), on visual imagery, but not on visual short-term memory, unless retention of precise visual detail is required. This study tested the prediction that DVN does also affect retention of gross visual information, specifically by reducing confidence. Participants performed a matrix pattern memory task with three retention interval interference conditions (DVN, static visual noise and no interference control) that varied from trial to trial. At recall, participants indicated whether or not they were sure of their responses. As in previous research, DVN did not impair recall accuracy or latency on the task, but it did reduce recall confidence relative to static visual noise and no interference. We conclude that DVN does distort visual representations in short-term memory, but standard coarse-grained recall measures are insensitive to these distortions.

  5. Making Sense of Education: Sensory Ethnography and Visual Impairment

    ERIC Educational Resources Information Center

    Morris, Ceri

    2017-01-01

    Education involves the engagement of the full range of the senses in the accomplishment of tasks and the learning of knowledge and skills. However both in pedagogical practices and in the process of educational research, there has been a tendency to privilege the visual. To explore these issues, detailed sensory ethnographic fieldwork was…

  6. A Role for Mouse Primary Visual Cortex in Motion Perception.

    PubMed

    Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo

    2018-06-04

    Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
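
    A sketch of the core update rule behind a random dot kinematogram (RDK) with a coherence parameter, in the spirit of the stimuli above: on each frame a proportion `coherence` of dots steps in the signal direction and the remainder step in random directions. Field size, dot count and speed are arbitrary example values, not those of the study.

```python
# Minimal RDK update rule with a coherence parameter (all numbers are illustrative).
import numpy as np

rng = np.random.default_rng(1)

def update_dots(xy, coherence=0.16, direction_deg=0.0, speed=2.0, field=200.0):
    n = xy.shape[0]
    coherent = rng.random(n) < coherence
    angles = np.where(coherent,
                      np.deg2rad(direction_deg),          # signal direction
                      rng.uniform(0, 2 * np.pi, n))       # random directions
    xy = xy + speed * np.column_stack([np.cos(angles), np.sin(angles)])
    return np.mod(xy, field)                              # wrap dots that leave the field

dots = rng.uniform(0, 200.0, size=(100, 2))
for _ in range(60):                                       # simulate one second at 60 frames/s
    dots = update_dots(dots, coherence=0.16, direction_deg=90.0)
```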

  7. Age-related slowing of response selection and production in a visual choice reaction time task

    PubMed Central

    Woods, David L.; Wyma, John M.; Yund, E. William; Herron, Timothy J.; Reed, Bruce

    2015-01-01

    Aging is associated with delayed processing in choice reaction time (CRT) tasks, but the processing stages most impacted by aging have not been clearly identified. Here, we analyzed CRT latencies in a computerized serial visual feature-conjunction task. Participants responded to a target letter (probability 40%) by pressing one mouse button, and responded to distractor letters differing either in color, shape, or both features from the target (probabilities 20% each) by pressing the other mouse button. Stimuli were presented randomly to the left and right visual fields and stimulus onset asynchronies (SOAs) were adaptively reduced following correct responses using a staircase procedure. In Experiment 1, we tested 1466 participants who ranged in age from 18 to 65 years. CRT latencies increased significantly with age (r = 0.47, 2.80 ms/year). Central processing time (CPT), isolated by subtracting simple reaction times (SRT) (obtained in a companion experiment performed on the same day) from CRT latencies, accounted for more than 80% of age-related CRT slowing, with most of the remaining increase in latency due to slowed motor responses. Participants were faster and more accurate when the stimulus location was spatially compatible with the mouse button used for responding, and this effect increased slightly with age. Participants took longer to respond to distractors with target color or shape than to distractors with no target features. However, the additional time needed to discriminate the more target-like distractors did not increase with age. In Experiment 2, we replicated the findings of Experiment 1 in a second population of 178 participants (ages 18–82 years). CRT latencies did not differ significantly in the two experiments, and similar effects of age, distractor similarity, and stimulus-response spatial compatibility were found. The results suggest that the age-related slowing in visual CRT latencies is largely due to delays in response selection and response production.
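
    A synthetic illustration of the decomposition described above: central processing time (CPT) is each participant's choice reaction time (CRT) minus their simple reaction time (SRT), and age-related slowing is summarized by the slope of latency on age. The simulated slopes are chosen only to mimic the reported pattern (about 2.8 ms/year for CRT, with CPT carrying most of it); none of these numbers are the study's data.

```python
# Synthetic stand-in data illustrating CPT = CRT - SRT and slope-on-age summaries.
import numpy as np

rng = np.random.default_rng(2)
age = rng.uniform(18, 65, size=500)
srt = 230 + 0.5 * age + rng.normal(0, 20, size=age.size)    # ms, simulated
crt = 420 + 2.8 * age + rng.normal(0, 40, size=age.size)    # ms, simulated

cpt = crt - srt                                             # central processing time per participant
slope_crt, _ = np.polyfit(age, crt, 1)
slope_cpt, _ = np.polyfit(age, cpt, 1)
print(f"CRT slowing ~ {slope_crt:.2f} ms/year; CPT accounts for ~ {slope_cpt / slope_crt:.0%} of it")
```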

  8. The Benefit of a Visually Guided Beamformer in a Dynamic Speech Task

    PubMed Central

    Roverud, Elin; Streeter, Timothy; Mason, Christine R.; Kidd, Gerald

    2017-01-01

    The aim of this study was to evaluate the performance of a visually guided hearing aid (VGHA) under conditions designed to capture some aspects of “real-world” communication settings. The VGHA uses eye gaze to steer the acoustic look direction of a highly directional beamforming microphone array. Although the VGHA has been shown to enhance speech intelligibility for fixed-location, frontal targets, it is currently not known whether these benefits persist in the face of frequent changes in location of the target talker that are typical of conversational turn-taking. Participants were 14 young adults, 7 with normal hearing and 7 with bilateral sensorineural hearing impairment. Target stimuli were sequences of 12 question–answer pairs that were embedded in a mixture of competing conversations. The participant’s task was to respond via a key press after each answer indicating whether it was correct or not. Spatialization of the stimuli and microphone array processing were done offline using recorded impulse responses, before presentation over headphones. The look direction of the array was steered according to the eye movements of the participant as they followed a visual cue presented on a widescreen monitor. Performance was compared for a “dynamic” condition in which the target stimulus moved between three locations, and a “fixed” condition with a single target location. The benefits of the VGHA over natural binaural listening observed in the fixed condition were reduced in the dynamic condition, largely because visual fixation was less accurate. PMID:28758567
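
    The core idea of steering an array's acoustic look direction toward the point of gaze can be illustrated with a basic delay-and-sum beamformer. The VGHA itself uses a far more directional beamformer and a specific microphone geometry; the linear array, spacing, and integer-sample delays below are simplifying assumptions for the sketch.

    ```python
    # Illustrative delay-and-sum beamformer steered to the gaze angle.
    import numpy as np

    def delay_and_sum(mic_signals, fs, mic_spacing_m, steer_deg, c=343.0):
        """mic_signals: (n_mics, n_samples) array; returns the steered output."""
        n_mics, n_samples = mic_signals.shape
        positions = np.arange(n_mics) * mic_spacing_m                  # linear array
        delays_s = positions * np.sin(np.deg2rad(steer_deg)) / c       # per-mic delay
        delays = np.round(delays_s * fs).astype(int)
        delays -= delays.min()                                         # non-negative shifts
        out = np.zeros(n_samples)
        for sig, d in zip(mic_signals, delays):
            out += np.roll(sig, -d)   # integer-sample alignment (wrap-around ignored)
        return out / n_mics

    # Usage: steer a 4-microphone array (2 cm spacing, 44.1 kHz) to gaze at +20 degrees.
    fs = 44100
    mics = np.random.randn(4, fs)     # placeholder signals
    output = delay_and_sum(mics, fs, mic_spacing_m=0.02, steer_deg=20.0)
    ```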

  9. Two subdivisions of macaque LIP process visual-oculomotor information differently.

    PubMed

    Chen, Mo; Li, Bing; Guang, Jing; Wei, Linyu; Wu, Si; Liu, Yu; Zhang, Mingsha

    2016-10-11

    Although the cerebral cortex is thought to be composed of functionally distinct areas, the actual parcellation of area and assignment of function are still highly controversial. An example is the much-studied lateral intraparietal cortex (LIP). Despite the general agreement that LIP plays an important role in visual-oculomotor transformation, it remains unclear whether the area is primary sensory- or motor-related (the attention-intention debate). Although LIP has been considered as a functionally unitary area, its dorsal (LIPd) and ventral (LIPv) parts differ in local morphology and long-distance connectivity. In particular, LIPv has much stronger connections with two oculomotor centers, the frontal eye field and the deep layers of the superior colliculus, than does LIPd. Such anatomical distinctions imply that compared with LIPd, LIPv might be more involved in oculomotor processing. We tested this hypothesis physiologically with a memory saccade task and a gap saccade task. We found that LIP neurons with persistent memory activities in memory saccade are primarily provoked either by visual stimulation (vision-related) or by both visual and saccadic events (vision-saccade-related) in gap saccade. The distribution changes from predominantly vision-related to predominantly vision-saccade-related as the recording depth increases along the dorsal-ventral dimension. Consistently, the simultaneously recorded local field potential also changes from visual evoked to saccade evoked. Finally, local injection of muscimol (GABA agonist) in LIPv, but not in LIPd, dramatically decreases the proportion of express saccades. With these results, we conclude that LIPd and LIPv are more involved in visual and visual-saccadic processing, respectively.

  10. Enhancing links between visual short term memory, visual attention and cognitive control processes through practice: An electrophysiological insight.

    PubMed

    Fuggetta, Giorgio; Duke, Philip A

    2017-05-01

    The operation of attention on visible objects involves a sequence of cognitive processes. The current study firstly aimed to elucidate the effects of practice on neural mechanisms underlying attentional processes as measured with both behavioural and electrophysiological measures. Secondly, it aimed to identify any pattern in the relationship between Event-Related Potential (ERP) components which play a role in the operation of attention in vision. Twenty-seven participants took part in two recording sessions one week apart, performing an experimental paradigm which combined a match-to-sample task with a memory-guided efficient visual-search task within one trial sequence. Overall, practice decreased behavioural response times, increased accuracy, and modulated several ERP components that represent cognitive and neural processing stages. This neuromodulation through practice was also associated with an enhanced link between behavioural measures and ERP components and with an enhanced cortico-cortical interaction of functionally interconnected ERP components. Principal component analysis (PCA) of the ERP amplitude data revealed three components, having different rostro-caudal topographic representations. The first component included both the centro-parietal and parieto-occipital mismatch triggered negativity - involved in integration of visual representations of the target with current task-relevant representations stored in visual working memory - loaded with second negative posterior-bilateral (N2pb) component, involved in categorising specific pop-out target features. The second component comprised the amplitude of bilateral anterior P2 - related to detection of a specific pop-out feature - loaded with bilateral anterior N2, related to detection of conflicting features, and fronto-central mismatch triggered negativity. The third component included the parieto-occipital N1 - related to early neural responses to the stimulus array - which loaded with the second
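
    The PCA step described above can be sketched in a few lines. The data layout (participants x ERP amplitude measures), the number of measures, and the choice to retain three components are assumptions mirroring the description, not the authors' pipeline.

    ```python
    # Sketch of a PCA on an ERP amplitude matrix (participants x ERP measures).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(27, 8))            # placeholder: 27 participants, 8 ERP measures

    Xc = X - X.mean(axis=0)                 # mean-center each measure
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / np.sum(S**2)         # proportion of variance per component
    loadings = Vt[:3].T                     # loadings of each measure on the first 3 PCs
    scores = Xc @ Vt[:3].T                  # participant scores on the first 3 PCs
    print("variance explained by the first 3 PCs:", np.round(explained[:3], 3))
    ```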

  11. Task activation and functional connectivity show concordant memory laterality in temporal lobe epilepsy.

    PubMed

    Sideman, Noah; Chaitanya, Ganne; He, Xiaosong; Doucet, Gaelle; Kim, Na Young; Sperling, Michael R; Sharan, Ashwini D; Tracy, Joseph I

    2018-04-01

    In epilepsy, asymmetries in the organization of mesial temporal lobe (MTL) functions help determine the cognitive risk associated with procedures such as anterior temporal lobectomy. Past studies have investigated the change/shift in a visual episodic memory laterality index (LI) in mesial temporal lobe structures through functional magnetic resonance imaging (fMRI) task activations. Here, we examine whether underlying task-related functional connectivity (FC) is concordant with such standard fMRI laterality measures. A total of 56 patients with temporal lobe epilepsy (TLE) (Left TLE [LTLE]: 31; Right TLE [RTLE]: 25) and 34 matched healthy controls (HC) underwent fMRI scanning during performance of a scene encoding task (SET). We assessed an activation-based LI of the hippocampal gyrus (HG) and parahippocampal gyrus (PHG) during the SET and its correspondence with task-related FC measures. Analyses involving the HG and PHG showed that the patients with LTLE had a consistently higher LI (right-lateralized) than that of the HC and group with RTLE, indicating functional reorganization. The patients with RTLE did not display a reliable contralateral shift away from the pathology, with the mesial structures showing quite distinct laterality patterns (HG, no laterality bias; PHG, no evidence of LI shift). The FC data for the group with LTLE provided confirmation of reorganization effects, revealing that a rightward task LI may be based on underlying connections between several left-sided regions (middle/superior occipital and left medial frontal gyri) and the right PHG. The FCs between the right HG and left anterior cingulate/medial frontal gyri were also observed in LTLE. Importantly, the data demonstrate that the areas involved in the LTLE task activation shift to the right hemisphere showed a corresponding increase in task-related FCs between the hemispheres. Altered laterality patterns based on mesial temporal lobe epilepsy (MTLE) pathology manifest as several
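
    Activation-based laterality indices of the kind referred to above are commonly computed as (L - R) / (L + R) over suprathreshold voxels (or summed activation) in homologous left and right regions, so that positive values indicate left-lateralization under that sign convention. The abstract does not give the exact definition or threshold used here, so the sketch below is a generic, hedged illustration.

    ```python
    # Generic activation-based laterality index over homologous left/right ROIs.
    # The threshold, the use of voxel counts vs. summed activation, and the sign
    # convention are assumptions; this study describes a higher LI as
    # right-lateralized, so its convention may differ from this sketch.
    import numpy as np

    def laterality_index(left_vals, right_vals, threshold=2.3):
        L = np.sum(left_vals > threshold)    # suprathreshold voxels, left ROI
        R = np.sum(right_vals > threshold)   # suprathreshold voxels, right ROI
        return (L - R) / (L + R) if (L + R) > 0 else 0.0
    ```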

  12. The effect of visual salience on memory-based choices.

    PubMed

    Pooresmaeili, Arezoo; Bach, Dominik R; Dolan, Raymond J

    2014-02-01

    Deciding whether a stimulus is the "same" as or "different" from a previously presented one involves integrating incoming sensory information, working memory, and perceptual decision making. Visual selective attention plays a crucial role in selecting the relevant information that informs a subsequent course of action. Previous studies have mainly investigated the role of visual attention during the encoding phase of working memory tasks. In this study, we investigate whether manipulating bottom-up attention by changing stimulus visual salience impacts later stages of memory-based decisions. In two experiments, we asked subjects to identify whether a stimulus had the same feature as, or a different feature from, a memorized sample. We manipulated the visual salience of the test stimuli by varying a task-irrelevant feature contrast. Subjects chose a visually salient item more often when they looked for matching features and less often when they looked for a nonmatch. This pattern of results indicates that salient items are more likely to be identified as a match. We interpret the findings in terms of capacity limitations at a comparison stage, where a visually salient item is more likely to exhaust resources and thus be prematurely parsed as a match.

  13. Representational neglect for words as revealed by bisection tasks.

    PubMed

    Arduino, Lisa S; Marinelli, Chiara Valeria; Pasotti, Fabrizio; Ferrè, Elisa Raffaella; Bottini, Gabriella

    2012-03-01

    In the present study, we showed that a representational disorder for words can dissociate from both representational neglect for objects and neglect dyslexia. This study involved 14 brain-damaged patients with left unilateral spatial neglect and a group of normal subjects. Patients were divided into four groups based on the presence of left neglect dyslexia and of representational neglect for non-verbal material, as evaluated by the Clock Drawing test. The patients were presented with bisection tasks for words and lines. The word bisection tasks (with words of five and seven letters) comprised the following: (1) representational bisection: the experimenter pronounced a word and then asked the patient to name the letter in the middle position; (2) visual bisection: same as (1) with stimuli presented visually; and (3) motor bisection: the patient was asked to cross out the letter in the middle position. The standard line bisection task was presented using lines of different lengths. Consistent with the literature, long lines were bisected to the right, whereas short lines, rendered comparable in length to the words of the word bisection test, deviated to the left (crossover effect). Both patients and controls showed the same leftward bias on words in the visual and motor bisection conditions. A significant difference emerged between the groups only in the representational bisection task: the group exhibiting neglect dyslexia associated with representational neglect for objects showed a significant rightward bias, whereas the other three patient groups and the controls showed a leftward bisection bias. Neither the presence of neglect alone nor the presence of visual neglect dyslexia was sufficient to produce a specific disorder in mental imagery. These results demonstrate a specific representational neglect for words, independent of both representational neglect and neglect dyslexia. ©2011 The British Psychological Society.

  14. Underlying Skills of Oral and Silent Reading Fluency in Chinese: Perspective of Visual Rapid Processing

    PubMed Central

    Zhao, Jing; Kwok, Rosa K. W.; Liu, Menglian; Liu, Hanlong; Huang, Chen

    2017-01-01

    Reading fluency is a critical skill for improving the quality of daily life and working efficiency. The majority of previous studies focused on oral reading fluency rather than silent reading fluency, which is a much more dominant reading mode, used in middle and high school and for leisure reading. It is still unclear whether oral and silent reading fluency involve the same underlying skills. To address this issue, the present study examined the relationship between visual rapid processing and Chinese reading fluency in different modes. Fifty-eight undergraduate students took part in the experiment. The phantom contour paradigm and the visual 1-back task were adopted to measure visual rapid temporal and simultaneous processing, respectively. These two tasks tap the temporal and spatial dimensions of visual rapid processing separately. We recorded the temporal threshold in the phantom contour task, as well as reaction time and accuracy in the visual 1-back task. Reading fluency was measured at both the single-character and sentence levels. Fluent reading of single characters was assessed with a paper-and-pencil lexical decision task, and a sentence verification task was developed to examine reading fluency at the sentence level. The reading fluency test at each level was conducted twice (i.e., oral reading and silent reading). Reading speed and accuracy were recorded. The correlation analysis showed that the temporal threshold in the phantom contour task did not correlate with the scores of the reading fluency tests. Although the reaction time in the visual 1-back task correlated with reading speed in both oral and silent reading, the comparison of the correlation coefficients revealed a closer relationship between visual rapid simultaneous processing and silent reading. Furthermore, visual rapid simultaneous processing contributed significantly to reading fluency in the silent mode but not in the oral mode. These
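
    The comparison of correlation coefficients mentioned above involves two correlations that share a variable (1-back reaction time correlated with oral versus silent reading speed). The abstract does not state which test was used, so the sketch below shows one reasonable option, a participant-level bootstrap of the difference between the two dependent correlations; it is illustrative, not the authors' analysis.

    ```python
    # Bootstrap comparison of two dependent correlations sharing a variable.
    import numpy as np

    def bootstrap_corr_diff(rt, oral_speed, silent_speed, n_boot=10000, seed=0):
        rng = np.random.default_rng(seed)
        n = len(rt)
        diffs = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, n, n)                        # resample participants
            r_oral = np.corrcoef(rt[idx], oral_speed[idx])[0, 1]
            r_silent = np.corrcoef(rt[idx], silent_speed[idx])[0, 1]
            diffs[b] = r_silent - r_oral
        lo, hi = np.percentile(diffs, [2.5, 97.5])             # 95% bootstrap CI
        return diffs.mean(), (lo, hi)
    ```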

  15. Underlying Skills of Oral and Silent Reading Fluency in Chinese: Perspective of Visual Rapid Processing.

    PubMed

    Zhao, Jing; Kwok, Rosa K W; Liu, Menglian; Liu, Hanlong; Huang, Chen

    2016-01-01

    Reading fluency is a critical skill for improving the quality of daily life and working efficiency. The majority of previous studies focused on oral reading fluency rather than silent reading fluency, which is a much more dominant reading mode, used in middle and high school and for leisure reading. It is still unclear whether oral and silent reading fluency involve the same underlying skills. To address this issue, the present study examined the relationship between visual rapid processing and Chinese reading fluency in different modes. Fifty-eight undergraduate students took part in the experiment. The phantom contour paradigm and the visual 1-back task were adopted to measure visual rapid temporal and simultaneous processing, respectively. These two tasks tap the temporal and spatial dimensions of visual rapid processing separately. We recorded the temporal threshold in the phantom contour task, as well as reaction time and accuracy in the visual 1-back task. Reading fluency was measured at both the single-character and sentence levels. Fluent reading of single characters was assessed with a paper-and-pencil lexical decision task, and a sentence verification task was developed to examine reading fluency at the sentence level. The reading fluency test at each level was conducted twice (i.e., oral reading and silent reading). Reading speed and accuracy were recorded. The correlation analysis showed that the temporal threshold in the phantom contour task did not correlate with the scores of the reading fluency tests. Although the reaction time in the visual 1-back task correlated with reading speed in both oral and silent reading, the comparison of the correlation coefficients revealed a closer relationship between visual rapid simultaneous processing and silent reading. Furthermore, visual rapid simultaneous processing contributed significantly to reading fluency in the silent mode but not in the oral mode. These

  16. Cerebral Correlates of Emotional and Action Appraisals During Visual Processing of Emotional Scenes Depending on Spatial Frequency: A Pilot Study

    PubMed Central

    Campagne, Aurélie; Fradcourt, Benoit; Pichat, Cédric; Baciu, Monica; Kauffmann, Louise; Peyrin, Carole

    2016-01-01

    Visual processing of emotional stimuli critically depends on the type of cognitive appraisal involved. The present fMRI pilot study aimed to investigate the cerebral correlates involved in the visual processing of emotional scenes in two tasks, one emotional, based on the appraisal of personal emotional experience, and the other motivational, based on the appraisal of the tendency to action. Given that the use of spatial frequency information is relatively flexible during the visual processing of emotional stimuli depending on the task's demands, we also explored the effect of the type of spatial frequency in visual stimuli in each task by using emotional scenes filtered at low spatial frequencies (LSF) and high spatial frequencies (HSF). Activation was observed in the visual areas of the fusiform gyrus for all emotional scenes in both tasks, and in the amygdala for unpleasant scenes only. The motivational task induced additional activation in frontal motor-related areas (e.g. premotor cortex, SMA) and parietal regions (e.g. superior and inferior parietal lobules). Parietal regions were recruited particularly during the motivational appraisal of approach in response to pleasant scenes. These frontal and parietal activations, respectively, suggest that motor and navigation processes play a specific role in the identification of the tendency to action in the motivational task. Furthermore, activity observed in the motivational task, in response to both pleasant and unpleasant scenes, was significantly greater for HSF than for LSF scenes, suggesting that the tendency to action is driven mainly by the detailed information contained in scenes. Results for the emotional task suggest that spatial frequencies play only a small role in the evaluation of unpleasant and pleasant emotions. Our preliminary study revealed a partial distinction between visual processing of emotional scenes during identification of the tendency to action and during identification of personal emotional experience.
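
    Spatial-frequency filtering of the kind described above is often implemented with a Gaussian low-pass filter, with the high-pass version taken as the residual. The cutoff below is an arbitrary placeholder; published studies usually specify cutoffs in cycles per degree or cycles per image, and this study's exact filters are not given in the abstract.

    ```python
    # Sketch of splitting a grayscale scene into LSF and HSF versions.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def split_spatial_frequencies(image, sigma_px=8.0):
        img = image.astype(float)
        lsf = gaussian_filter(img, sigma=sigma_px)   # low-pass (blurred) version
        hsf = img - lsf                              # residual high-pass version
        return lsf, hsf
    ```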

  17. The Effects of Task Clarification, Visual Prompts, and Graphic Feedback on Customer Greeting and Up-Selling in a Restaurant

    ERIC Educational Resources Information Center

    Squires, James; Wilder, David A.; Fixsen, Amanda; Hess, Erica; Rost, Kristen; Curran, Ryan; Zonneveld, Kimberly

    2007-01-01

    An intervention consisting of task clarification, visual prompts, and graphic feedback was evaluated to increase customer greeting and up-selling in a restaurant. A combination multiple baseline and reversal design was used to evaluate intervention effects. Although all interventions improved performance over baseline, the delivery of graphic…

  18. A Closer Look at Visual Manuals.

    ERIC Educational Resources Information Center

    van der Meij, Hans

    1996-01-01

    Examines the visual manual genre, discussing main forms and functions of step-by-step and guided tour manuals in detail. Examines whether a visual manual helps computer users realize tasks faster and more accurately than a non-visual manual. Finds no effects on accuracy, but speedier task execution by 35% for visual manuals. Concludes there is no…

  19. Frontal and parietal theta burst TMS impairs working memory for visual-spatial conjunctions

    PubMed Central

    Morgan, Helen M.; Jackson, Margaret C.; van Koningsbruggen, Martijn G.; Shapiro, Kimron L.; Linden, David E.J.

    2013-01-01

    In tasks that selectively probe visual or spatial working memory (WM), frontal and posterior cortical areas show a segregation, with dorsal areas preferentially involved in spatial (e.g. location) WM and ventral areas in visual (e.g. object identity) WM. In a previous fMRI study [1], we showed that right parietal cortex (PC) was more active during WM for orientation, whereas left inferior frontal gyrus (IFG) was more active during colour WM. During WM for colour-orientation conjunctions, activity in these areas was intermediate between the levels of activity for the single-task preferred and non-preferred information. To examine whether these specialised areas play a critical role in coordinating visual and spatial WM to perform a conjunction task, we used theta burst transcranial magnetic stimulation (TMS) to induce a functional deficit. Compared to sham stimulation, TMS to right PC or left IFG selectively impaired WM for conjunctions but not single features. This is consistent with findings from visual search paradigms, in which frontal and parietal TMS selectively affects search for conjunctions compared to single features, and with combined TMS and functional imaging work suggesting that parietal and frontal regions are functionally coupled in tasks requiring integration of visual and spatial information. Our results thus elucidate mechanisms by which the brain coordinates spatially segregated processing streams and have implications beyond the field of working memory. PMID:22483548

  20. Specialization in the default mode: Task-induced brain deactivations dissociate between visual working memory and attention.

    PubMed

    Mayer, Jutta S; Roebroeck, Alard; Maurer, Konrad; Linden, David E J

    2010-01-01

    The idea of an organized mode of brain function that is present as a default state and suspended during goal-directed behaviors has recently gained much interest in the study of human brain function. The default mode hypothesis is based on the repeated observation that certain brain areas show task-induced deactivations across a wide range of cognitive tasks. In this event-related functional magnetic resonance imaging study we tested the default mode hypothesis by comparing common and selective patterns of BOLD deactivation in response to the demands on visual attention and working memory (WM) that were independently modulated within one task. The results revealed task-induced deactivations within regions of the default mode network (DMN), with a segregation of areas that were additively deactivated by an increase in the demands on both attention and WM, and areas that were selectively deactivated by either high attentional demand or WM load. Attention-selective deactivations appeared in the left ventrolateral and medial prefrontal cortex and the left lateral temporal cortex. Conversely, WM-selective deactivations were found predominantly in the right hemisphere, including the medial-parietal, the lateral temporo-parietal, and the medial prefrontal cortex. Moreover, during WM encoding, deactivated regions showed task-specific functional connectivity. These findings demonstrate that task-induced deactivations within parts of the DMN depend on the specific characteristics of the attention and WM components of the task. The DMN can thus be subdivided into a set of brain regions that deactivate indiscriminately in response to cognitive demand ("the core DMN") and a part whose deactivation depends on the specific task. 2009 Wiley-Liss, Inc.

  1. Your visual system provides all the information you need to make moral judgments about generic visual events.

    PubMed

    De Freitas, Julian; Alvarez, George A

    2018-05-28

    To what extent are people's moral judgments susceptible to subtle factors of which they are unaware? Here we show that we can change people's moral judgments outside of their awareness by subtly biasing perceived causality. Specifically, we used subtle visual manipulations to create visual illusions of causality in morally relevant scenarios, and this systematically changed people's moral judgments. After demonstrating the basic effect using simple displays involving an ambiguous car collision that ends up injuring a person (E1), we show that the effect is sensitive on the millisecond timescale to manipulations of task-irrelevant factors that are known to affect perceived causality, including the duration (E2a) and asynchrony (E2b) of specific task-irrelevant contextual factors in the display. We then conceptually replicate the effect using a different paradigm (E3a), and also show that we can eliminate the effect by interfering with motion processing (E3b). Finally, we show that the effect generalizes across different kinds of moral judgments (E3c). Combined, these studies show that obligatory, abstract inferences made by the visual system influence moral judgments. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Visual body perception in anorexia nervosa.

    PubMed

    Urgesi, Cosimo; Fornasari, Livia; Perini, Laura; Canalaz, Francesca; Cremaschi, Silvana; Faleschini, Laura; Balestrieri, Matteo; Fabbro, Franco; Aglioti, Salvatore Maria; Brambilla, Paolo

    2012-05-01

    Disturbance of body perception is a central aspect of anorexia nervosa (AN), and several neuroimaging studies have documented structural and functional alterations of occipito-temporal cortices involved in visual body processing. However, it is unclear whether these perceptual deficits extend to more basic aspects of others' body perception. A consecutive sample of 15 adolescent patients with AN was compared with a group of 15 age- and gender-matched controls in delayed matching-to-sample tasks requiring the visual discrimination of the form or the action of others' bodies. Patients showed better visual discrimination performance than controls in detail-based processing of body forms but not of body actions, which positively correlated with their increased tendency to convert a signal of punishment into a signal of reinforcement (higher persistence scores). The paradoxical advantage of patients with AN in detail-based body processing may be associated with their tendency to routinely explore body parts as a consequence of their obsessive worries about body appearance. Copyright © 2012 Wiley Periodicals, Inc.

  3. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind.

    PubMed

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J; Sadato, Norihiro

    2013-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience.

  4. The impact of attentional, linguistic, and visual features during object naming

    PubMed Central

    Clarke, Alasdair D. F.; Coco, Moreno I.; Keller, Frank

    2013-01-01

    Object detection and identification are fundamental to human vision, and there is mounting evidence that objects guide the allocation of visual attention. However, the role of objects in tasks involving multiple modalities is less clear. To address this question, we investigate object naming, a task in which participants have to verbally identify objects they see in photorealistic scenes. We report an eye-tracking study that investigates which features (attentional, visual, and linguistic) influence object naming. We find that the amount of visual attention directed toward an object, its position and saliency, along with linguistic factors such as word frequency, animacy, and semantic proximity, significantly influence whether the object will be named or not. We then ask how features from different modalities are combined during naming, and find significant interactions between saliency and position, saliency and linguistic features, and attention and position. We conclude that when the cognitive system performs tasks such as object naming, it uses input from one modality to constrain or enhance the processing of other modalities, rather than processing each input modality independently. PMID:24379792
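
    An analysis of the sort summarized above, predicting whether an object is named from attentional, visual, and linguistic features and their interactions, can be sketched as a logistic regression. The column names, predictors, and model form below are assumptions; the study's own analysis may have used a different (for example, mixed-effects) specification.

    ```python
    # Illustrative logistic regression: is an object named, given its features?
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_naming_model(df: pd.DataFrame):
        # Hypothetical columns: named (0/1), attention (dwell proportion),
        # saliency, position (eccentricity), word_freq, animacy (0/1).
        model = smf.logit(
            "named ~ saliency * position + saliency * word_freq"
            " + attention * position + animacy",
            data=df,
        )
        return model.fit(disp=False)

    # result = fit_naming_model(df); print(result.summary())
    ```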

  5. Attentional Predictors of 5-month-olds' Performance on a Looking A-not-B Task.

    PubMed

    Marcovitch, Stuart; Clearfield, Melissa W; Swingler, Margaret; Calkins, Susan D; Bell, Martha Ann

    2016-01-01

    In the first year of life, the ability to search for hidden objects is an indicator of object permanence and, when multiple locations are involved, executive function (i.e. inhibition, cognitive flexibility and working memory). The current study was designed to examine attentional predictors of search in 5-month-old infants (as measured by the looking A-not-B task), and whether levels of maternal education moderated the effect of the predictors. Specifically, in a separate task, the infants were shown a unique puppet, and we measured the percentage of time attending to the puppet, as well as the length of the longest look (i.e., peak fixation) directed towards the puppet. Across the entire sample (N = 390), the percentage of time attending to the puppet was positively related to performance on the visual A-not-B task. However, for infants whose mothers had not completed college, having a shorter peak looking time (after controlling for percentage of time) was also a predictor of visual A-not-B performance. The role of attention, peak fixation and maternal education in visual search is discussed.

  6. Brain Regions Involved in the Learning and Application of Reward Rules in a Two-Deck Gambling Task

    ERIC Educational Resources Information Center

    Hartstra, E.; Oldenburg, J. F. E.; Van Leijenhorst, L.; Rombouts, S. A. R. B.; Crone, E. A.

    2010-01-01

    Decision-making involves the ability to choose between competing actions that are associated with uncertain benefits and penalties. The Iowa Gambling Task (IGT), which mimics real-life decision-making, involves learning a reward-punishment rule over multiple trials. Patients with damage to ventromedial prefrontal cortex (VMPFC) show deficits…

  7. Functional double dissociation within the entorhinal cortex for visual scene-dependent choice behavior

    PubMed Central

    Yoo, Seung-Woo; Lee, Inah

    2017-01-01

    How visual scene memory is processed differentially by the upstream structures of the hippocampus is largely unknown. We sought to dissociate functionally the lateral and medial subdivisions of the entorhinal cortex (LEC and MEC, respectively) in visual scene-dependent tasks by temporarily inactivating the LEC and MEC in the same rat. When the rat made spatial choices in a T-maze using visual scenes displayed on LCD screens, the inactivation of the MEC but not the LEC produced severe deficits in performance. However, when the task required the animal to push a jar or to dig in the sand in the jar using the same scene stimuli, the LEC but not the MEC became important. Our findings suggest that the entorhinal cortex is critical for scene-dependent mnemonic behavior, and the response modality may interact with a sensory modality to determine the involvement of the LEC and MEC in scene-based memory tasks. DOI: http://dx.doi.org/10.7554/eLife.21543.001 PMID:28169828

  8. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning.

    PubMed

    Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.
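
    Steady-state responses such as the SSVEP and ASSR are typically quantified as spectral amplitude at the stimulation (flicker or modulation) frequency. The sketch below shows that generic computation; the epoch length, sampling rate, target frequency, and any noise-floor correction used in the study are not specified in the abstract and are placeholders here.

    ```python
    # Amplitude of a steady-state EEG response at the stimulation frequency.
    import numpy as np

    def steady_state_amplitude(epoch, fs, target_hz):
        """epoch: 1-D EEG time series; returns amplitude at target_hz."""
        spectrum = 2.0 * np.abs(np.fft.rfft(epoch)) / len(epoch)
        freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
        return spectrum[np.argmin(np.abs(freqs - target_hz))]

    # Usage (placeholder values): amp = steady_state_amplitude(eeg_epoch, fs=1000, target_hz=40.0)
    ```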

  9. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning

    PubMed Central

    Katyal, Sucharit; Engel, Stephen A.; Oxenham, Andrew J.

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects. PMID:28107359

  10. Song and speech: brain regions involved with perception and covert production.

    PubMed

    Callan, Daniel E; Tsytsarev, Vassiliy; Hanakawa, Takashi; Callan, Akiko M; Katsuhara, Maya; Fukuyama, Hidenao; Turner, Robert

    2006-07-01

    This 3-T fMRI study investigates brain regions similarly and differentially involved with listening to and covert production of singing relative to speech. Given the greater use of auditory-motor self-monitoring and imagery with respect to consonance in singing, brain regions involved with these processes are predicted to be differentially active for singing more than for speech. The stimuli consisted of six Japanese songs. A block design was employed in which the tasks for the subject were to listen passively to singing of the song lyrics, to listen passively to speaking of the song lyrics, to covertly sing the visually presented song lyrics, to covertly speak the visually presented song lyrics, and to rest. The conjunction of the passive listening and covert production tasks used in this study allows general neural processes underlying both perception and production to be discerned that are not exclusively a result of stimulus-induced auditory processing or of low-level articulatory motor control. Brain regions involved with both perception and production for singing as well as speech were found to include the left planum temporale/superior temporal parietal region, as well as left and right premotor cortex, the lateral aspect of the VI lobule of the posterior cerebellum, anterior superior temporal gyrus, and planum polare. Greater activity for the singing over the speech condition, for both the listening and covert production tasks, was found in the right planum temporale. Greater activity for singing over speech was also present in brain regions involved with consonance: the orbitofrontal cortex (listening task) and the subcallosal cingulate (covert production task). The results are consistent with the PT mediating representational transformation across auditory and motor domains in response to consonance for singing over that of speech. Hemispheric laterality was assessed by paired t tests between active voxels in the contrast of interest relative to the left-right flipped contrast of
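
    The left-right flip comparison mentioned at the end of the abstract can be sketched as follows. The sketch assumes contrast maps already resampled to a left-right symmetric template and collapses each region of interest to one value per subject before the paired t test; the authors' voxel-level procedure may differ.

    ```python
    # Hedged sketch of a laterality test: contrast map vs. its left-right flip.
    import numpy as np
    from scipy.stats import ttest_rel

    def flip_lr(volume):
        return volume[::-1, :, :]            # flip along the left-right (x) axis

    def laterality_ttest(subject_maps, roi_mask):
        """subject_maps: (n_subjects, x, y, z) array; roi_mask: boolean (x, y, z)."""
        orig = np.array([m[roi_mask].mean() for m in subject_maps])
        flipped = np.array([flip_lr(m)[roi_mask].mean() for m in subject_maps])
        return ttest_rel(orig, flipped)      # paired t test across subjects
    ```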

  11. Testing visual short-term memory of pigeons (Columba livia) and a rhesus monkey (Macaca mulatta) with a location change detection task.

    PubMed

    Leising, Kenneth J; Elmore, L Caitlin; Rivera, Jacquelyne J; Magnotti, John F; Katz, Jeffrey S; Wright, Anthony A

    2013-09-01

    Change detection is commonly used to assess the capacity (number of objects) of human visual short-term memory (VSTM). Comparisons with the performance of non-human animals completing similar tasks have shown similarities and differences in object-based VSTM, which is only one aspect ("what") of memory. Another important aspect of memory, which has received less attention, is spatial short-term memory for "where" an object is in space. In this article, we show for the first time that a monkey and pigeons can be trained to accurately identify location changes, much as humans do, in change detection tasks similar to those used to test the object capacity of VSTM. The subject's task was to identify (touch/peck) an item that changed location across a brief delay. Both the monkey and the pigeons showed transfer to delays longer than the training delay, to greater and smaller distance changes than in training, and to novel colors. These results are the first to demonstrate location-change detection in any non-human species and encourage comparative investigations into the nature of spatial and visual short-term memory.

  12. Callosal involvement in a lateralized stroop task in alcoholic and healthy subjects.

    PubMed

    Schulte, T; Müller-Oehring, E M; Salo, R; Pfefferbaum, A; Sullivan, E V

    2006-11-01

    To investigate the role of interhemispheric attentional processes, 25 alcoholic and 28 control subjects were tested with a Stroop match-to-sample task and callosal areas were measured with magnetic resonance imaging. Stroop color-word stimuli were presented to the left or right visual field (VF) and were preceded by a color cue that did or did not match the word's color. For matching colors, both groups showed a right VF advantage; for nonmatching colors, controls showed a left VF advantage, whereas alcoholic subjects showed no VF advantage. For nonmatch trials, VF advantage correlated with callosal splenium area in controls but not alcoholic subjects, supporting the position that information presented to the nonpreferred hemisphere is transmitted via the splenium to the hemisphere specialized for efficient processing. The authors speculate that alcoholism-associated callosal thinning disrupts this processing route.

  13. Effect of Tai Chi Training on Dual-Tasking Performance That Involves Stepping Down among Stroke Survivors: A Pilot Study.

    PubMed

    Chan, Wing-Nga; Tsang, William Wai-Nam

    2017-01-01

    Descending stairs demands attention and neuromuscular control, especially with dual-tasking. Studies have demonstrated that stroke often degrades a survivor's ability to descend stairs. Tai Chi has been shown to improve dual-tasking performance of healthy older adults, but no such study has been conducted in stroke survivors. This study investigated the effect of Tai Chi training on dual-tasking performance that involved stepping down and compared it with that of conventional exercise among stroke survivors. Subjects were randomized into Tai Chi (n = 9), conventional exercise (n = 8), and control (n = 9) groups. Those in the former two groups received 12 weeks of training. Assessments included an auditory Stroop test, a stepping-down test, and a dual-tasking test involving both simultaneously. They were evaluated before training (time-1), after training (time-2), and one month after training (time-3). The Tai Chi group showed significant improvement in the auditory Stroop test from time-1 to time-3, and its performance at time-3 was significantly better than that of the conventional exercise group. No significant effect was found in the stepping-down task or in dual-tasking, nor in the control group. These results suggest a beneficial effect of Tai Chi training on cognition among stroke survivors without compromising physical task performance in dual-tasking, an effect greater than that of conventional exercise. Nevertheless, further research with a larger sample is warranted.

  14. Opposite brain laterality in analogous auditory and visual tests.

    PubMed

    Oltedal, Leif; Hugdahl, Kenneth

    2017-11-01

    Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities in the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed these findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.

  15. Lateralization of spatial rather than temporal attention underlies the left hemifield advantage in rapid serial visual presentation.

    PubMed

    Asanowicz, Dariusz; Kruse, Lena; Śmigasiewicz, Kamila; Verleger, Rolf

    2017-11-01

    In bilateral rapid serial visual presentation (RSVP), the second of two targets, T1 and T2, is better identified in the left visual field (LVF) than in the right visual field (RVF). This LVF advantage may reflect hemispheric asymmetry in temporal attention and/or in spatial orienting of attention. Participants performed two tasks: the "standard" bilateral RSVP task (Exp. 1) and its unilateral variant (Exp. 1 & 2). In the bilateral task, spatial location was uncertain, thus target identification involved stimulus-driven spatial orienting. In the unilateral task, the targets were presented block-wise in the LVF or RVF only, such that no spatial orienting was needed for target identification. Temporal attention was manipulated in both tasks by varying the T1-T2 lag. The results showed that the LVF advantage disappeared when the involvement of stimulus-driven spatial orienting was eliminated, whereas the manipulation of temporal attention had no effect on the asymmetry. In conclusion, the results do not support the hypothesis of hemispheric asymmetry in temporal attention, and provide further evidence that the LVF advantage reflects right hemisphere predominance in stimulus-driven orienting of spatial attention. These conclusions fit evidence that temporal attention is implemented by bilateral parietal areas and spatial attention by the right-lateralized ventral frontoparietal network. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Visual scanning behavior and pilot workload

    NASA Technical Reports Server (NTRS)

    Harris, R. L., Sr.; Tole, J. R.; Stephens, A. T.; Ephrath, A. R.

    1981-01-01

    An experimental paradigm and a set of results are presented that demonstrate a relationship between the level of performance on a skilled man-machine control task, the skill of the operator, the level of mental difficulty induced by an additional task imposed on the basic control task, and visual scanning performance. During a constant, simulated piloting task, visual scanning of instruments was found to vary as a function of the level of difficulty of a verbal mental loading task. The average dwell time of each fixation on the pilot's primary instrument increased as a function of the estimated skill level of the pilots, with novices being affected by the loading task much more than the experts. The results suggest that visual scanning of instruments in a controlled task may be an indicator of both workload and skill.

  17. Visual scanning behavior and pilot workload

    NASA Technical Reports Server (NTRS)

    Harris, R. L., Sr.; Tole, J. R.; Stephens, A. T.; Ephrath, A. R.

    1982-01-01

    This paper describes an experimental paradigm and a set of results which demonstrate a relationship between the level of performance on a skilled man-machine control task, the skill of the operator, the level of mental difficulty induced by an additional task imposed on the basic control task, and visual scanning performance. During a constant, simulated piloting task, visual scanning of instruments was found to vary with the difficulty of a verbal mental loading task. The average dwell time of each fixation on the pilot's primary instrument increased with the estimated skill level of the pilots, with novices being affected by the loading task much more than experts. The results suggest that visual scanning of instruments in a controlled task may be an indicator of both workload and skill.

  18. Feature-based memory-driven attentional capture: visual working memory content affects visual attention.

    PubMed

    Olivers, Christian N L; Meijer, Frank; Theeuwes, Jan

    2006-10-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by an additional memory task. Singleton distractors interfered even more when they were identical or related to the object held in memory, but only when it was difficult to verbalize the memory content. Furthermore, this content-specific interaction occurred for features that were relevant to the memory task but not for irrelevant features of the same object or for once-remembered objects that could be forgotten. Finally, memory-related distractors attracted more eye movements but did not result in longer fixations. The results demonstrate memory-driven attentional capture on the basis of content-specific representations. Copyright 2006 APA.

  19. Insensitivity of visual short-term memory to irrelevant visual information.

    PubMed

    Andrade, Jackie; Kemps, Eva; Werniers, Yves; May, Jon; Szmalec, Arnaud

    2002-07-01

    Several authors have hypothesized that visuo-spatial working memory is functionally analogous to verbal working memory. Irrelevant background speech impairs verbal short-term memory. We investigated whether irrelevant visual information has an analogous effect on visual short-term memory, using a dynamic visual noise (DVN) technique known to disrupt visual imagery (Quinn & McConnell, 1996b). Experiment 1 replicated the effect of DVN on pegword imagery. Experiments 2 and 3 showed no effect of DVN on recall of static matrix patterns, despite a significant effect of a concurrent spatial tapping task. Experiment 4 showed no effect of DVN on encoding or maintenance of arrays of matrix patterns, despite testing memory by a recognition procedure to encourage visual rather than spatial processing. Serial position curves showed a one-item recency effect typical of visual short-term memory. Experiment 5 showed no effect of DVN on short-term recognition of Chinese characters, despite effects of visual similarity and a concurrent colour memory task that confirmed visual processing of the characters. We conclude that irrelevant visual noise does not impair visual short-term memory. Visual working memory may not be functionally analogous to verbal working memory, and different cognitive processes may underlie visual short-term memory and visual imagery.

  20. Impact of Learning Styles on Air Force Technical Training: Multiple and Linear Imagery in the Presentation of a Comparative Visual Location Task to Visual and Haptic Subjects. Interim Report for Period January 1977-January 1978.

    ERIC Educational Resources Information Center

    Ausburn, Floyd B.

    A U.S. Air Force study was designed to develop instruction based on the supplantation theory, in which tasks are performed (supplanted) for individuals who are unable to perform them due to their cognitive style. The study examined the effects of linear and multiple imagery in presenting a task requiring visual comparison and location to…

  1. Working Memory in Wayfinding--A Dual Task Experiment in a Virtual City

    ERIC Educational Resources Information Center

    Meilinger, Tobias; Knauff, Markus; Bulthoff, Heinrich H.

    2008-01-01

    This study examines the working memory systems involved in human wayfinding. In the learning phase, 24 participants learned two routes in a novel photorealistic virtual environment displayed on a 220 degrees screen while they were disrupted by a visual, a spatial, a verbal, or--in a control group--no secondary task. In the following wayfinding…

  2. [Visual perception of Japanese characters and complicated figures: developmental changes of visual P300 event-related potentials].

    PubMed

    Sata, Yoshimi; Inagaki, Masumi; Shirane, Seiko; Kaga, Makiko

    2002-07-01

    In order to evaluate developmental changes in visual perception, P300 event-related potentials (ERPs) were recorded in a visual oddball task in 34 healthy volunteers ranging from 7 to 37 years of age. The latency and amplitude of the visual P300 in response to Japanese ideogram stimuli (a pair of familiar Kanji characters or a pair of unfamiliar Kanji characters) and to a pair of meaningless complicated figures were measured. The visual P300 was dominant over the parietal area in almost all subjects. There was a significant difference in P300 latency among the three tasks. Reaction times to both kinds of Kanji tasks were significantly shorter than those to the complicated figure task. P300 latencies to the familiar Kanji, unfamiliar Kanji and figure stimuli decreased until 25.8, 26.9 and 29.4 years of age, respectively, and regression analysis revealed that a positive quadratic function could be fitted to the data. Around 9 years of age, the P300 latency/age slope was largest in the unfamiliar Kanji task. These findings suggest that visual P300 development depends on both the complexity of the tasks and the specificity of the stimuli, which might reflect variability in visual information processing.
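
    The quadratic (U-shaped) relationship between P300 latency and age reported above can be illustrated with a simple polynomial fit whose vertex gives the age of shortest latency. The data points below are placeholders, not the study's measurements.

    ```python
    # Fit latency = a*age^2 + b*age + c and locate the vertex (age of minimum latency).
    import numpy as np

    age = np.array([7, 9, 12, 15, 18, 22, 26, 30, 34, 37], dtype=float)
    latency_ms = np.array([520, 480, 440, 410, 395, 385, 380, 382, 390, 400], dtype=float)

    a, b, c = np.polyfit(age, latency_ms, deg=2)   # positive a gives a U-shaped curve
    age_at_minimum = -b / (2 * a)                  # vertex of the fitted parabola
    print(f"latency is shortest near {age_at_minimum:.1f} years")
    ```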

  3. The development of organized visual search

    PubMed Central

    Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.

    2013-01-01

    Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve they become more accurate at locating targets with conjunction of features amongst distractors, but not for targets with distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560

  4. Visual affective classification by combining visual and text features.

    PubMed

    Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming

    2017-01-01

    Affective analysis of images in social networks has drawn much attention, and the texts surrounding images have proven to provide valuable semantic meanings about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short text associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task.
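
    The Dempster-Shafer fusion step named above can be illustrated with Dempster's rule of combination for two mass functions over the same frame of discernment. The class labels and mass values below are hypothetical, and the snippet is a generic illustration of the rule rather than the paper's exact fusion scheme over its visual and textual classifier outputs.

    ```python
    # Dempster's rule of combination for two mass functions (assumes conflict < 1).
    from itertools import product

    def dempster_combine(m1, m2):
        """m1, m2: dict mapping frozenset(hypotheses) -> mass; each sums to 1."""
        combined, conflict = {}, 0.0
        for (A, a), (B, b) in product(m1.items(), m2.items()):
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                conflict += a * b                 # mass falling on the empty set
        return {A: v / (1.0 - conflict) for A, v in combined.items()}

    # Hypothetical visual and textual evidence over {positive, negative}:
    theta = frozenset({"positive", "negative"})
    m_visual = {frozenset({"positive"}): 0.6, frozenset({"negative"}): 0.2, theta: 0.2}
    m_text = {frozenset({"positive"}): 0.5, frozenset({"negative"}): 0.3, theta: 0.2}
    print(dempster_combine(m_visual, m_text))
    ```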

  5. Visual affective classification by combining visual and text features

    PubMed Central

    Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming

    2017-01-01

    Affective analysis of images in social networks has drawn much attention, and the texts surrounding images have proven to provide valuable semantic meanings about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short text associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task. PMID:28850566

  6. Developing Tests of Visual Dependency

    NASA Technical Reports Server (NTRS)

    Kindrat, Alexandra N.

    2011-01-01

    Astronauts develop neural adaptive responses to microgravity during space flight. Consequently, these adaptive responses cause maladaptive disturbances in balance and gait function when astronauts return to Earth and are re-exposed to gravity. Current research in the Neuroscience Laboratories at NASA-JSC is focused on understanding how exposure to space flight produces post-flight disturbances in balance and gait control and on developing training programs designed to facilitate the rapid recovery of functional mobility after space flight. In concert with these disturbances, astronauts also often report an increase in their visual dependency during space flight. To better understand this phenomenon, studies were conducted with specially designed training programs focusing on visual dependency, with the aim of understanding and enhancing subjects' ability to rapidly adapt to novel sensory situations. The Rod and Frame Test (RFT) was used first to assess an individual's visual dependency, using a variety of testing techniques. Once assessed, subjects were asked to perform two novel tasks under transformation (the Pegboard and Cube Construction tasks). Results indicate that head position cues and initial visual test conditions had no effect on an individual's visual dependency scores. Subjects were also able to adapt to the manual tasks after several trials. Individual visual dependency correlated with the ability to adapt manual performance to a novel visual distortion only for the cube task. Subjects with higher visual dependency showed decreased ability to adapt to this task. Ultimately, it was revealed that the RFT may serve as an effective prediction tool to produce individualized adaptability training prescriptions that target the specific sensory profile of each crewmember.

  7. Simple control-theoretic models of human steering activity in visually guided vehicle control

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1991-01-01

    A simple control theoretic model of human steering or control activity in the lateral-directional control of vehicles such as automobiles and rotorcraft is discussed. The term 'control theoretic' is used to emphasize the fact that the model is derived from a consideration of well-known control system design principles as opposed to psychological theories regarding egomotion, etc. The model is employed to emphasize the 'closed-loop' nature of tasks involving the visually guided control of vehicles upon, or in close proximity to, the earth and to hypothesize how changes in vehicle dynamics can significantly alter the nature of the visual cues which a human might use in such tasks.
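
    To make the 'closed-loop' idea concrete, the sketch below simulates a compensatory steering loop in the spirit of crossover-model analyses: a 'driver' acting on delayed lateral-path error drives simple heading and lateral-position dynamics. The loop structure, gains, and vehicle model are illustrative assumptions and are not the model developed in the report.

    ```python
    # Minimal sketch of a closed-loop compensatory steering model: the "driver"
    # is a gain acting on delayed lateral error (outer loop) and heading error
    # (inner loop); the vehicle is an integrator from steering to heading and
    # from heading to lateral position. All structure and numbers are assumed.
    import numpy as np

    dt = 0.01                # simulation step (s)
    T = 20.0                 # duration (s)
    delay_s = 0.2            # driver reaction-time delay (s)
    K_outer = 0.05           # heading commanded per metre of lateral error (rad/m)
    K_inner = 2.0            # steering gain on heading error (1/s)
    speed = 20.0             # forward speed (m/s)

    n = int(T / dt)
    delay_steps = int(delay_s / dt)

    y_ref = np.where(np.arange(n) * dt > 1.0, 2.0, 0.0)  # 2 m lane-change command
    y = np.zeros(n)          # lateral position (m)
    psi = np.zeros(n)        # heading angle (rad)
    err = np.zeros(n)        # stored errors so the delayed value can be read back

    for k in range(n - 1):
        err[k] = y_ref[k] - y[k]
        delayed_err = err[k - delay_steps] if k >= delay_steps else 0.0
        psi_cmd = K_outer * delayed_err          # outer loop: path error -> heading
        steer = K_inner * (psi_cmd - psi[k])     # inner loop: driver closes heading
        psi[k + 1] = psi[k] + dt * steer         # heading responds to steering
        y[k + 1] = y[k] + dt * speed * np.sin(psi[k])

    print(f"lateral position after {T:.0f} s: {y[-1]:.2f} m (commanded 2.00 m)")
    ```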

  8. Multi-modal information processing for visual workload relief

    NASA Technical Reports Server (NTRS)

    Burke, M. W.; Gilson, R. D.; Jagacinski, R. J.

    1980-01-01

    The simultaneous performance of two single-dimensional compensatory tracking tasks, one with the left hand and one with the right hand, is discussed. The tracking performed with the left hand was considered the primary task and was performed with a visual display or a quickened kinesthetic-tactual (KT) display. The right-handed tracking was considered the secondary task and was carried out only with a visual display. Although the two primary task displays had afforded equivalent performance in a critical tracking task performed alone, in the dual-task situation the quickened KT primary display resulted in superior secondary visual task performance. Comparisons of various combinations of primary and secondary visual displays in integrated or separated formats indicate that the superiority of the quickened KT display is not simply due to the elimination of visual scanning. Additional testing indicated that quickening per se also is not the immediate cause of the observed KT superiority.

  9. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    PubMed Central

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.

    2017-01-01

    The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. Combining the two requirements in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to satisfy both requirements together by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that can automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) combined with the CHT algorithm to find the proper colour of the tracked target; the target was attached to the end-effector of a six degree of freedom (DOF) robot performing a pick-and-place task. Two eye-to-hand cameras, each with an image averaging filter, are used in cooperation to obtain clear and steady images. This paper also examines a new technique, named Controllable Region of interest based on Circular Hough Transform (CRCHT), for generating and controlling the observation search window in order to increase the computational speed of the tracking system. Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to track the moving robot
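
    The core CHT-plus-search-window idea can be sketched with OpenCV's circle detector. The snippet below detects a circular target and optionally restricts the search to a region of interest around the last detection; it is only a rough illustration of the general approach, not the authors' CRCHT, Delta E colour selection, or camera-to-robot calibration.

    ```python
    # Minimal sketch: Circular Hough Transform detection with a simple
    # region-of-interest (ROI) restriction around the previous detection.
    # Parameters and the synthetic test image are illustrative assumptions.
    import cv2
    import numpy as np

    def detect_ball(frame_gray, last_center=None, roi_half=80):
        """Return (x, y, r) of the strongest circle, searching near last_center."""
        x0 = y0 = 0
        search = frame_gray
        if last_center is not None:
            cx, cy = last_center
            x0 = max(cx - roi_half, 0)
            y0 = max(cy - roi_half, 0)
            search = frame_gray[y0:cy + roi_half, x0:cx + roi_half]
        search = cv2.medianBlur(search, 5)
        circles = cv2.HoughCircles(search, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                                   param1=100, param2=30, minRadius=5, maxRadius=60)
        if circles is None:
            return None
        x, y, r = circles[0][0]              # strongest candidate
        return int(x) + x0, int(y) + y0, int(r)

    if __name__ == "__main__":
        # Synthetic test frame: one bright disc on a dark background.
        img = np.zeros((480, 640), dtype=np.uint8)
        cv2.circle(img, (300, 240), 25, 255, -1)
        print(detect_ball(img))                          # full-frame search
        print(detect_ball(img, last_center=(300, 240)))  # ROI-restricted search
    ```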

  10. Gender differences in brain activation on a mental rotation task.

    PubMed

    Semrud-Clikeman, Margaret; Fine, Jodene Goldenring; Bledsoe, Jesse; Zhu, David C

    2012-10-01

    Few neuroimaging studies have explored gender differences on mental rotation tasks. Most studies have utilized samples with both genders, samples mainly consisting of men, or samples with six or fewer females. Graduate students in science fields or liberal arts programs (20 males, 20 females) completed a mental rotation task during functional magnetic resonance imaging (fMRI). When a pair of cube figures was shown, the participant made a keypad response based on whether the pair was the same/similar or different. Regardless of gender, the bilateral middle frontal gyrus, bilateral intraparietal sulcus (IPS), and the left precuneus were activated when a subject tried to solve the mental rotation task. Increased activation in the right inferior frontal gyrus/middle frontal gyrus, the left precuneus/posterior cingulate cortex/cuneus region, and the left middle occipital gyrus was found for men as compared to women. Better accuracy and shorter response times were correlated with increased activation in the bilateral intraparietal sulcus. No significant brain activity differences related to mental rotation were found between academic majors. These findings suggest that networks involved in visual attention appear to be more strongly activated during mental rotation tasks in men as compared to women. They also suggest that men use a more automatic process when analyzing complex visual reasoning tasks, while women use a more top-down process.

  11. High-performance execution of psychophysical tasks with complex visual stimuli in MATLAB

    PubMed Central

    Asaad, Wael F.; Santhanam, Navaneethan; McClellan, Steven

    2013-01-01

    Behavioral, psychological, and physiological experiments often require the ability to present sensory stimuli, monitor and record subjects' responses, interface with a wide range of devices, and precisely control the timing of events within a behavioral task. Here, we describe our recent progress in developing an accessible and full-featured software system for controlling such studies using the MATLAB environment. Compared with earlier reports on this software, key new features have been implemented to allow the presentation of more complex visual stimuli, increase temporal precision, and enhance user interaction. These features greatly improve the performance of the system and broaden its applicability to a wider range of possible experiments. This report describes these new features, improvements, and current limitations, and quantifies the performance of the system in a real-world experimental setting. PMID:23034363

  12. Salient sounds activate human visual cortex automatically.

    PubMed

    McDonald, John J; Störmer, Viola S; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A

    2013-05-22

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) triggered by peripheral sounds that either preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.

  13. Differential effects of visual attention and working memory on binocular rivalry.

    PubMed

    Scocchia, Lisa; Valsecchi, Matteo; Gegenfurtner, Karl R; Triesch, Jochen

    2014-05-30

    The investigation of cognitive influences on binocular rivalry has a long history; however, the effects of visual working memory (WM) on rivalry had not previously been studied. We examined top-down modulation of rivalry perception in four experiments comparing the effects of visual WM and sustained selective attention. In the first three experiments we failed to observe any sustained effect of the WM content; only the color of the memory probe was found to prime the initially dominant percept. In Experiment 4 we found a clear effect of sustained attention on rivalry, both in terms of the first dominant percept and of overall dominance, when participants were involved in a tracking task. Our results provide an example of dissociation between visual WM and selective attention, two phenomena that otherwise functionally overlap to a large extent. Furthermore, our study highlights the importance of the task employed to engage cognitive resources: the observed perceptual epiphenomena of binocular rivalry are indicative of visual competition at an early stage, which is not affected by WM but is still susceptible to attentional influence as long as the observer's attention is constrained to one of the two rival images via a specific concomitant task. © 2014 ARVO.

  14. Signals in inferotemporal and perirhinal cortex suggest an “untangling” of visual target information

    PubMed Central

    Pagan, Marino; Urban, Luke S.; Wohl, Margot P.; Rust, Nicole C.

    2013-01-01

    Finding sought visual targets requires our brains to flexibly combine working memory information about what we are looking for with visual information about what we are looking at. To investigate the neural computations involved in finding visual targets, we recorded neural responses in inferotemporal (IT) and perirhinal (PRH) cortex as macaque monkeys performed a task that required them to find targets within sequences of distractors. We found similar amounts of total task-specific information in both areas; however, information about whether a target was in view was more accessible with a linear read-out (i.e., was more “untangled”) in PRH. Consistent with the flow of information from IT to PRH, we also found that task-relevant information arrived earlier in IT. PRH responses were well described by a functional model in which “untangling” computations in PRH reformat input from IT by combining neurons with asymmetric tuning correlations for target matches and distractors. PMID:23792943
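
    What a “linear read-out” means in practice can be illustrated by training a linear classifier on population responses and asking how well it separates target matches from distractors. The sketch below does this on simulated data; the tuning and noise model are assumptions and do not reproduce the recorded IT/PRH responses.

    ```python
    # Minimal sketch of a linear read-out of target-match information from a
    # simulated neural population. The simulated responses are illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_neurons, n_trials = 50, 200

    # Hypothetical mean population responses for the two conditions.
    mu_match = rng.normal(1.0, 0.5, n_neurons)
    mu_distractor = rng.normal(0.8, 0.5, n_neurons)

    X = np.vstack([
        rng.normal(mu_match, 1.0, (n_trials, n_neurons)),       # target-match trials
        rng.normal(mu_distractor, 1.0, (n_trials, n_neurons)),  # distractor trials
    ])
    y = np.r_[np.ones(n_trials), np.zeros(n_trials)]

    # Higher cross-validated accuracy = more linearly accessible ("untangled").
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(f"cross-validated linear decoding accuracy: {acc.mean():.2f}")
    ```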

  15. An Accumulation-of-Evidence Task Using Visual Pulses for Mice Navigating in Virtual Reality

    PubMed Central

    Pinto, Lucas; Koay, Sue A.; Engelhard, Ben; Yoon, Alice M.; Deverett, Ben; Thiberge, Stephan Y.; Witten, Ilana B.; Tank, David W.; Brody, Carlos D.

    2018-01-01

    The gradual accumulation of sensory evidence is a crucial component of perceptual decision making, but its neural mechanisms are still poorly understood. Given the wide availability of genetic and optical tools for mice, they can be useful model organisms for the study of these phenomena; however, behavioral tools are largely lacking. Here, we describe a new evidence-accumulation task for head-fixed mice navigating in a virtual reality (VR) environment. As they navigate down the stem of a virtual T-maze, they see brief pulses of visual evidence on either side, and retrieve a reward on the arm with the highest number of pulses. The pulses occur randomly with Poisson statistics, yielding a diverse yet well-controlled stimulus set, making the data conducive to a variety of computational approaches. A large number of mice of different genotypes were able to learn and consistently perform the task, at levels similar to rats in analogous tasks. They are sensitive to side differences of a single pulse, and their memory of the cues is stable over time. Moreover, using non-parametric as well as modeling approaches, we show that the mice indeed accumulate evidence: they use multiple pulses of evidence from throughout the cue region of the maze to make their decision, albeit with a small overweighting of earlier cues, and their performance is affected by the magnitude but not the duration of evidence. Additionally, analysis of the mice's running patterns revealed that trajectories are fairly stereotyped yet modulated by the amount of sensory evidence, suggesting that the navigational component of this task may provide a continuous readout correlated to the underlying cognitive variables. Our task, which can be readily integrated with state-of-the-art techniques, is thus a valuable tool to study the circuit mechanisms and dynamics underlying perceptual decision making, particularly under more complex behavioral contexts. PMID:29559900
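
    The trial structure described here, Poisson-distributed pulse counts on each side with reward on the side with more pulses, is straightforward to simulate. The sketch below generates such trials and scores a toy accumulator that sums the pulse difference plus noise; the rates and noise level are illustrative assumptions, not the published task parameters.

    ```python
    # Minimal sketch of a pulse-based evidence-accumulation trial generator and a
    # toy accumulator decision rule. All parameters are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)

    def make_trial(mean_high=7.0, mean_low=3.0):
        high_side = rng.integers(2)                  # 0 = left, 1 = right
        rates = [mean_low, mean_low]
        rates[high_side] = mean_high
        pulses = rng.poisson(rates)                  # pulse counts per side
        # Reward the side with more pulses; break ties toward the high-rate side.
        rewarded = int(np.argmax(pulses)) if pulses[0] != pulses[1] else int(high_side)
        return pulses, rewarded

    def accumulator_choice(pulses, noise_sd=1.0):
        evidence = (pulses[1] - pulses[0]) + rng.normal(0.0, noise_sd)
        return int(evidence > 0)                     # 1 = choose right

    trials = [make_trial() for _ in range(1000)]
    correct = [accumulator_choice(p) == r for p, r in trials]
    print(f"toy accumulator accuracy: {np.mean(correct):.2f}")
    ```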

  16. An evaluation of the effects of high visual taskload on the separate behaviors involved in complex monitoring performance.

    DOT National Transportation Integrated Search

    1988-01-01

    Operational monitoring situations, in contrast to typical laboratory vigilance tasks, generally involve more than just stimulus detection and recognition. They frequently involve complex multidimensional discriminations, interpretations of significan...

  17. Supporting Teachers in Structuring Mathematics Lessons Involving Challenging Tasks

    ERIC Educational Resources Information Center

    Sullivan, Peter; Askew, Mike; Cheeseman, Jill; Clarke, Doug; Mornane, Angela; Roche, Anne; Walker, Nadia

    2015-01-01

    The following is a report on an investigation into ways of supporting teachers in converting challenging mathematics tasks into classroom lessons and supporting students in engaging with those tasks. Groups of primary and secondary teachers, respectively, were provided with documentation of ten lessons built around challenging tasks. Teachers…

  18. High Performance Molecular Visualization: In-Situ and Parallel Rendering with EGL

    PubMed Central

    Stone, John E.; Messmer, Peter; Sisneros, Robert; Schulten, Klaus

    2016-01-01

    Large scale molecular dynamics simulations produce terabytes of data that is impractical to transfer to remote facilities. It is therefore necessary to perform visualization tasks in-situ as the data are generated, or by running interactive remote visualization sessions and batch analyses co-located with direct access to high performance storage systems. A significant challenge for deploying visualization software within clouds, clusters, and supercomputers involves the operating system software required to initialize and manage graphics acceleration hardware. Recently, it has become possible for applications to use the Embedded-system Graphics Library (EGL) to eliminate the requirement for windowing system software on compute nodes, thereby eliminating a significant obstacle to broader use of high performance visualization applications. We outline the potential benefits of this approach in the context of visualization applications used in the cloud, on commodity clusters, and supercomputers. We discuss the implementation of EGL support in VMD, a widely used molecular visualization application, and we outline benefits of the approach for molecular visualization tasks on petascale computers, clouds, and remote visualization servers. We then provide a brief evaluation of the use of EGL in VMD, with tests using developmental graphics drivers on conventional workstations and on Amazon EC2 G2 GPU-accelerated cloud instance types. We expect that the techniques described here will be of broad benefit to many other visualization applications. PMID:27747137

  19. High Performance Molecular Visualization: In-Situ and Parallel Rendering with EGL.

    PubMed

    Stone, John E; Messmer, Peter; Sisneros, Robert; Schulten, Klaus

    2016-05-01

    Large scale molecular dynamics simulations produce terabytes of data that is impractical to transfer to remote facilities. It is therefore necessary to perform visualization tasks in-situ as the data are generated, or by running interactive remote visualization sessions and batch analyses co-located with direct access to high performance storage systems. A significant challenge for deploying visualization software within clouds, clusters, and supercomputers involves the operating system software required to initialize and manage graphics acceleration hardware. Recently, it has become possible for applications to use the Embedded-system Graphics Library (EGL) to eliminate the requirement for windowing system software on compute nodes, thereby eliminating a significant obstacle to broader use of high performance visualization applications. We outline the potential benefits of this approach in the context of visualization applications used in the cloud, on commodity clusters, and supercomputers. We discuss the implementation of EGL support in VMD, a widely used molecular visualization application, and we outline benefits of the approach for molecular visualization tasks on petascale computers, clouds, and remote visualization servers. We then provide a brief evaluation of the use of EGL in VMD, with tests using developmental graphics drivers on conventional workstations and on Amazon EC2 G2 GPU-accelerated cloud instance types. We expect that the techniques described here will be of broad benefit to many other visualization applications.

  20. Why Do Deaf Participants Have a Lower Performance than Hearing Participants in a Visual Rhyming Task: A Phonological Hypothesis

    ERIC Educational Resources Information Center

    Aparicio, Mario; Demont, Elisabeth; Metz-Lutz, Marie-Noëlle; Leybaert, J.; Alegria, Jesús

    2014-01-01

    During a visual rhyming task, deaf participants traditionally perform more poorly than hearing participants in making rhyme judgements for written words in which the rhyme and the spelling pattern are incongruent (e.g. "hair/bear"). It has been suggested that deaf participants' low accuracy results from their tendency to rely on…