NASA Astrophysics Data System (ADS)
Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo
2009-04-01
In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspects of the functional processing of visual attention. Although the right posterior parietal cortex (PPC) is known to play a role in certain visual search tasks, little is known about the temporal dynamics of this area. Three visual search tasks of differing difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the time course of PPC involvement in visual search, we applied TMS at various stimulus onset asynchronies (SOAs) and measured visual search reaction times. Magnetic stimulation was applied to the right or the left PPC with a figure-eight coil. The results show that reaction times in the hard feature task were longer than those in the easy feature task. At SOA = 150 ms, target-present reaction times increased significantly when TMS pulses were applied, compared with the no-TMS condition. We conclude that the right PPC is involved in visual search at about 150 ms after visual stimulus presentation: magnetic stimulation of the right PPC disrupted visual search processing, whereas stimulation of the left PPC had no effect on it.
Pomplun, M; Reingold, E M; Shen, J
2001-09-01
In three experiments, participants' visual span was measured in a comparative visual search task in which they had to detect a local match or mismatch between two displays presented side by side. Experiment 1 manipulated the difficulty of the comparative visual search task by contrasting a mismatch detection task with a substantially more difficult match detection task. In Experiment 2, participants were tested in a single-task condition involving only the visual task and a dual-task condition in which they concurrently performed an auditory task. Finally, in Experiment 3, participants performed two dual-task conditions, which differed in the difficulty of the concurrent auditory task. Both the comparative search task difficulty (Experiment 1) and the divided attention manipulation (Experiments 2 and 3) produced strong effects on visual span size.
The development of organized visual search
Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.
2013-01-01
Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve, they become more accurate at locating targets defined by a conjunction of features among distractors, but not targets defined by distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560
Controlling the spotlight of attention: visual span size and flexibility in schizophrenia.
Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M
2011-10-01
The current study investigated the size and flexible control of visual span among patients with schizophrenia during visual search performance. Visual span is the region of the visual field from which one extracts information during a single eye fixation, and a larger visual span size is linked to more efficient search performance. Therefore, a reduced visual span may explain patients' impaired performance on search tasks. The gaze-contingent moving window paradigm was used to estimate the visual span size of patients and healthy participants while they performed two different search tasks. In addition, changes in visual span size were measured as a function of two manipulations of task difficulty: target-distractor similarity and stimulus familiarity. Patients with schizophrenia searched more slowly across both tasks and conditions. Patients also demonstrated smaller visual span sizes on the easier search condition in each task. Moreover, healthy controls' visual span size increased as target discriminability or distractor familiarity increased. This modulation of visual span size, however, was reduced or not observed among patients. The implications of the present findings, with regard to previously reported visual search deficits, and other functional and structural abnormalities associated with schizophrenia, are discussed.
Visual search in a forced-choice paradigm
NASA Technical Reports Server (NTRS)
Holmgren, J. E.
1974-01-01
The processing of visual information was investigated in the context of two visual search tasks. The first was a forced-choice task in which one of two alternative letters appeared in a visual display of from one to five letters. The second task included trials on which neither of the two alternatives was present in the display. Search rates were estimated from the slopes of best linear fits to response latencies plotted as a function of the number of items in the visual display. These rates were found to be much slower than those estimated in yes-no search tasks. This result was interpreted as indicating that the processes underlying visual search in yes-no and forced-choice tasks are not the same.
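The search rates described above are the slopes of linear fits to response latency as a function of display size. A minimal sketch of that estimation follows; the reaction-time values and display sizes are illustrative assumptions, not data from the study:

```python
import numpy as np

# Hypothetical mean response latencies (ms) for displays of 1-5 letters;
# the numbers are invented for illustration only.
set_sizes = np.array([1, 2, 3, 4, 5])
mean_rt = np.array([480.0, 520.0, 565.0, 610.0, 650.0])

# Search rate = slope of the best linear fit of RT vs. display size (ms/item);
# the intercept estimates base reaction time for a single item.
slope, intercept = np.polyfit(set_sizes, mean_rt, 1)
print(f"search rate: {slope:.1f} ms/item, base RT: {intercept:.1f} ms")
```

Comparing such slopes between paradigms (e.g., forced-choice vs. yes-no) is how the abstract's claim of "much slower" forced-choice rates would be quantified.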
Influence of social presence on eye movements in visual search tasks.
Liu, Na; Yu, Ruifeng
2017-12-01
This study employed an eye-tracking technique to investigate the influence of social presence on eye movements in visual search tasks. A total of 20 male subjects performed visual search tasks in a 2 (target presence: present vs. absent) × 2 (task complexity: complex vs. simple) × 2 (social presence: alone vs. a human audience) within-subject experiment. Results indicated that the presence of an audience could evoke a social facilitation effect on response time in visual search tasks. Compared with working alone, the participants made fewer and shorter fixations, larger saccades and shorter scan path in simple search tasks and more and longer fixations, smaller saccades and longer scan path in complex search tasks when working with an audience. The saccade velocity and pupil diameter in the audience-present condition were larger than those in the working-alone condition. No significant change in target fixation number was observed between two social presence conditions. Practitioner Summary: This study employed an eye-tracking technique to examine the influence of social presence on eye movements in visual search tasks. Results clarified the variation mechanism and characteristics of oculomotor scanning induced by social presence in visual search.
Dementia alters standing postural adaptation during a visual search task in older adult men.
Jor'dan, Azizah J; McCarten, J Riley; Rottunda, Susan; Stoffregen, Thomas A; Manor, Brad; Wade, Michael G
2015-04-23
This study investigated the effects of dementia on standing postural adaptation during performance of a visual search task. We recruited 16 older adults with dementia and 15 without dementia. Postural sway was assessed by recording medial-lateral (ML) and anterior-posterior (AP) center-of-pressure when standing with and without a visual search task; i.e., counting target letter frequency within a block of displayed randomized letters. ML sway variability was significantly higher in those with dementia during visual search as compared to those without dementia and compared to both groups during the control condition. AP sway variability was significantly greater in those with dementia as compared to those without dementia, irrespective of task condition. In the ML direction, the absolute and percent change in sway variability between the control condition and visual search (i.e., postural adaptation) was greater in those with dementia as compared to those without. In contrast, postural adaptation to visual search was similar between groups in the AP direction. As compared to those without dementia, those with dementia identified fewer letters on the visual task. In the non-dementia group only, greater increases in postural adaptation in both the ML and AP directions correlated with lower performance on the visual task. The observed relationship between postural adaptation during the visual search task and visual search task performance (in the non-dementia group only) suggests a critical link between perception and action. Dementia reduces the capacity to perform a visual-based task while standing and thus appears to disrupt this perception-action synergy.
Reimer, Christina B; Strobach, Tilo; Schubert, Torsten
2017-12-01
Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features, resulting in a serial search process around the items in the search display (i.e., set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP) that reflects lateralized visual attention processes. If the response selection processes in Task 1 influenced the visual attention processes in Task 2, the N2pc would be delayed in latency and attenuated in amplitude at the short SOA compared with the long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was deployed concurrently with response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.
ERIC Educational Resources Information Center
Liu, Duo; Chen, Xi; Chung, Kevin K. H.
2015-01-01
This study examined the relation between the performance in a visual search task and reading ability in 92 third-grade Hong Kong Chinese children. The visual search task, which is considered a measure of visual-spatial attention, accounted for unique variance in Chinese character reading after controlling for age, nonverbal intelligence,…
Reimer, Christina B; Schubert, Torsten
2017-09-15
Both response selection and visual attention are limited in capacity. According to the central bottleneck model, the response selection processes of two tasks in a dual-task situation are performed sequentially. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color and form), which results in a serial search process. Search time increases as items are added to the search display (i.e., set size effect). When the search display is masked, visual attention deployment is restricted to a brief period of time and target detection decreases as a function of set size. Here, we investigated whether response selection and visual attention (i.e., feature binding) rely on a common or on distinct capacity limitations. In four dual-task experiments, participants completed an auditory Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (Stimulus Onset Asynchrony, SOA). In Experiment 1, Task 1 was a two-choice discrimination task and the conjunction search display was not masked. In Experiment 2, the response selection difficulty in Task 1 was increased to a four-choice discrimination and the search task was the same as in Experiment 1. We applied the locus-of-slack method in both experiments to analyze conjunction search time, that is, we compared the set size effects across SOAs. Similar set size effects across SOAs (i.e., additive effects of SOA and set size) would indicate sequential processing of response selection and visual attention. However, a significantly smaller set size effect at short SOA compared to long SOA (i.e., underadditive interaction of SOA and set size) would indicate parallel processing of response selection and visual attention. In both experiments, we found underadditive interactions of SOA and set size. In Experiments 3 and 4, the conjunction search display in Task 2 was masked. 
Task 1 was the same as in Experiments 1 and 2, respectively. In both experiments, the d' analysis revealed that response selection did not affect target detection. Overall, Experiments 1-4 indicated that neither the response selection difficulty in the auditory Task 1 (i.e., two-choice vs. four-choice) nor the type of presentation of the search display in Task 2 (i.e., not masked vs. masked) impaired parallel processing of response selection and conjunction search. We concluded that in general, response selection and visual attention (i.e., feature binding) rely on distinct capacity limitations.
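The locus-of-slack logic used in the two abstracts above compares set size effects (RT slopes) at short versus long SOA: equal slopes (additive effects) imply serial processing, while a smaller slope at the short SOA (underadditive interaction) implies the search proceeded in parallel with Task 1 response selection. A sketch of that comparison, using invented RT2 values and set sizes rather than the studies' data:

```python
import numpy as np

# Illustrative mean conjunction-search RTs (ms); rows are hypothetical SOAs,
# columns are set sizes of 4, 8, and 16 items. Not the studies' data.
set_sizes = np.array([4, 8, 16])
rt2 = {
    "short_soa": np.array([900.0, 940.0, 1020.0]),
    "long_soa":  np.array([600.0, 680.0, 840.0]),
}

# Set size effect = slope of RT over set size (ms/item) at each SOA.
slopes = {soa: np.polyfit(set_sizes, rts, 1)[0] for soa, rts in rt2.items()}

# Underadditive interaction: the set size effect shrinks at short SOA,
# because part of the search is absorbed into the Task 1 slack period.
underadditive = slopes["short_soa"] < slopes["long_soa"]
print(slopes, underadditive)
```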
Advanced Video Activity Analytics (AVAA): Human Factors Evaluation
2015-05-01
video, and 3) creating and saving annotations (Fig. 11). (The logging program was updated after the pilot to also capture search clicks.) Playing and... visual search task and the auditory task together and thus automatically focused on the visual task. Alternatively, the operator may have intentionally...affect performance on the primary task; however, in the current test there was no apparent effect on the operator’s performance in the visual search task
ERIC Educational Resources Information Center
Alvarez, George A.; Horowitz, Todd S.; Arsenio, Helga C.; DiMase, Jennifer S.; Wolfe, Jeremy M.
2005-01-01
Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers…
Dong, Guangheng; Yang, Lizhu; Shen, Yue
2009-08-21
The present study investigated the course of visual search for a target in a fixed location, using an emotional flanker task. Event-related potentials (ERPs) were recorded while participants performed the task. Emotional facial expressions were used as emotion-eliciting triggers. The course of visual search was analyzed through the emotional effects arising from these emotion-eliciting stimuli. Flanker stimuli showed effects at about 150-250 ms after stimulus onset, while target stimuli showed effects at about 300-400 ms. The visual search sequence in an emotional flanker task moved from a whole overview to a specific target, even though the target always appeared at a known location. The processing sequence was "parallel" in this task. The results supported the feature integration theory of visual search.
Visual Search Elicits the Electrophysiological Marker of Visual Working Memory
Emrich, Stephen M.; Al-Aidroos, Naseem; Pratt, Jay; Ferber, Susanne
2009-01-01
Background Although limited in capacity, visual working memory (VWM) plays an important role in many aspects of visually-guided behavior. Recent experiments have demonstrated an electrophysiological marker of VWM encoding and maintenance, the contralateral delay activity (CDA), which has been shown in multiple tasks that have both explicit and implicit memory demands. Here, we investigate whether the CDA is evident during visual search, a thoroughly-researched task that is a hallmark of visual attention but has no explicit memory requirements. Methodology/Principal Findings The results demonstrate that the CDA is present during a lateralized search task, and that it is similar in amplitude to the CDA observed in a change-detection task, but peaks slightly later. The changes in CDA amplitude during search were strongly correlated with VWM capacity, as well as with search efficiency. These results were paralleled by behavioral findings showing a strong correlation between VWM capacity and search efficiency. Conclusions/Significance We conclude that the activity observed during visual search was generated by the same neural resources that subserve VWM, and that this activity reflects the maintenance of previously searched distractors. PMID:19956663
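The reported link between CDA modulation and memory capacity is a per-participant correlation. A minimal sketch of that analysis follows; the capacity estimates and amplitude changes are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical per-participant values: VWM capacity estimates (items) and
# CDA amplitude change during search (microvolts; CDA is a negativity, so
# larger memory load yields more negative values). Illustrative only.
vwm_capacity = np.array([1.8, 2.4, 2.9, 3.3, 3.8, 4.1])
cda_change   = np.array([-0.6, -0.9, -1.1, -1.4, -1.6, -1.9])

# Pearson correlation between capacity and CDA modulation across participants.
r = np.corrcoef(vwm_capacity, cda_change)[0, 1]
print(f"r = {r:.3f}")
```

A strongly negative r under this sign convention corresponds to the "strongly correlated" relationship the abstract describes between CDA amplitude changes and VWM capacity.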
Perceptual learning in visual search: fast, enduring, but non-specific.
Sireteanu, R; Rettenbach, R
1995-07-01
Visual search has been suggested as a tool for isolating visual primitives. Elementary "features" were proposed to involve parallel search, while serial search is necessary for items without a "feature" status, or, in some cases, for conjunctions of "features". In this study, we investigated the role of practice in visual search tasks. We found that, under some circumstances, initially serial tasks can become parallel after a few hundred trials. Learning in visual search is far less specific than learning of visual discriminations and hyperacuity, suggesting that it takes place at another level in the central visual pathway, involving different neural circuits.
Braun, J
1994-02-01
In more than one respect, visual search for the most salient item in a display and visual search for the least salient item are different kinds of visual tasks. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in search for the least salient item than in search for the most salient item. As a result, the two types of visual search presented comparable perceptual difficulty, as judged by psychophysical measures of performance, effective stimulus contrast, and stability of decision criterion. To investigate the role of attention in the two types of search, observers attempted to carry out a letter discrimination and a search task concurrently. To discriminate the letters, observers had to direct visual attention at the center of the display and, thus, leave unattended the periphery, which contained the target and distractors of the search task. In this situation, visual search for the least salient item was severely impaired while visual search for the most salient item was only moderately affected, demonstrating a fundamental difference with respect to visual attention. A qualitatively identical pattern of results was encountered by Schiller and Lee (1991), who used similar visual search tasks to assess the effect of a lesion in extrastriate area V4 of the macaque.
Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias
2018-01-01
Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Specifically, facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., a unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries: search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. 
The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637
Kunar, Melina A; Ariyabandu, Surani; Jami, Zaffran
2016-04-01
The efficiency of how people search for an item in visual search has, traditionally, been thought to depend on bottom-up or top-down guidance cues. However, recent research has shown that the rate at which people visually search through a display is also affected by cognitive strategies. In this study, we investigated the role of choice in visual search, by asking whether giving people a choice alters both preference for a cognitively neutral task and search behavior. Two visual search conditions were examined: one in which participants were given a choice of visual search task (the choice condition), and one in which participants did not have a choice (the no-choice condition). The results showed that the participants in the choice condition rated the task as both more enjoyable and likeable than did the participants in the no-choice condition. However, despite their preferences, actual search performance was slower and less efficient in the choice condition than in the no-choice condition (Exp. 1). Experiment 2 showed that the difference in search performance between the choice and no-choice conditions disappeared when central executive processes became occupied with a task-switching task. These data concur with a choice-impaired hypothesis of search, in which having a choice leads to more motivated, active search involving executive processes.
Visual Search in ASD: Instructed versus Spontaneous Local and Global Processing
ERIC Educational Resources Information Center
Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2016-01-01
Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual…
Visual selective attention and reading efficiency are related in children.
Casco, C; Tressoldi, P E; Dellantonio, A
1998-09-01
We investigated the relationship between visual selective attention and linguistic performance. Subjects were classified into four categories according to their accuracy in a letter cancellation task involving selective attention. The task consisted of searching for a target letter in a set of background letters, and accuracy was measured as a function of set size. We found that children with the lowest performance in the cancellation task showed a significantly slower reading rate and a higher number of visual reading errors than children with the highest performance. Results also show that these groups of searchers differed significantly in a lexical search task, whereas their performance did not differ in lexical decision and syllable control tasks. The relationship between letter search and reading, as well as the finding that poor readers-searchers also perform poorly on lexical search tasks involving selective attention, suggests that the relationship between letter search and reading difficulty may reflect a deficit in a visual selective attention mechanism that is involved in all these tasks. A deficit in visual attention can be linked to the problems that disabled readers present in the function of the magnocellular stream, which culminates in the posterior parietal cortex, an area that plays an important role in guiding visual attention.
The effects of task difficulty on visual search strategy in virtual 3D displays.
Pomplun, Marc; Garaas, Tyler W; Carrasco, Marisa
2013-08-28
Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend, in the easy task with the smallest displays, for initial saccades to go to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios.
Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)
ERIC Educational Resources Information Center
Hollingworth, Andrew
2012-01-01
Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…
Horizontal visual search in a large field by patients with unilateral spatial neglect.
Nakatani, Ken; Notoya, Masako; Sunahara, Nobuyuki; Takahashi, Shusuke; Inoue, Katsumi
2013-06-01
In this study, we investigated the horizontal visual search ability, and the pattern of horizontal visual search in a large space, of patients with unilateral spatial neglect (USN). Subjects included nine patients with right hemisphere damage caused by cerebrovascular disease showing left USN, nine patients with right hemisphere damage but no USN, and six healthy individuals with no history of brain damage who were age-matched to the groups with right hemisphere damage. The number of visual search tasks accomplished was recorded in the first experiment. Neck rotation angle was continuously measured during the task, and quantitative data from these measurements were collected. There was a strong correlation between the number of visual search tasks accomplished and the total Behavioral Inattention Test Conventional Subtest (BITC) score in subjects with right hemisphere damage. In both the USN and control groups, the head position during the visual search task showed a balanced bell-shaped distribution from the central point of the field to the left and right sides. Our results indicate that compensatory strategies, including cervical rotation, may improve visual search capability and achieve balance on the neglected side.
A novel computational model to probe visual search deficits during motor performance
Singh, Tarkeshwar; Fridriksson, Julius; Perry, Christopher M.; Tryon, Sarah C.; Ross, Angela; Fritz, Stacy
2016-01-01
Successful execution of many motor skills relies on well-organized visual search (voluntary eye movements that actively scan the environment for task-relevant information). Although impairments of visual search that result from brain injuries are linked to diminished motor performance, the neural processes that guide visual search within this context remain largely unknown. The first objective of this study was to examine how visual search in healthy adults and stroke survivors is used to guide hand movements during the Trail Making Test (TMT), a neuropsychological task that is a strong predictor of visuomotor and cognitive deficits. Our second objective was to develop a novel computational model to investigate combinatorial interactions between three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing). We predicted that stroke survivors would exhibit deficits in integrating the three underlying processes, resulting in deteriorated overall task performance. We found that normal TMT performance is associated with patterns of visual search that primarily rely on spatial planning and/or working memory (but not peripheral visual processing). Our computational model suggested that abnormal TMT performance following stroke is associated with impairments of visual search that are characterized by deficits integrating spatial planning and working memory. This innovative methodology provides a novel framework for studying how the neural processes underlying visual search interact combinatorially to guide motor performance. NEW & NOTEWORTHY Visual search has traditionally been studied in cognitive and perceptual paradigms, but little is known about how it contributes to visuomotor performance. We have developed a novel computational model to examine how three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing) contribute to visual search during a visuomotor task. 
We show that deficits integrating spatial planning and working memory underlie abnormal performance in stroke survivors with frontoparietal damage. PMID:27733596
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2013-02-01
A common search paradigm requires observers to search for a target among undivided spatial arrays of many items. Yet our visual environment is populated with items that are typically arranged within smaller (subdivided) spatial areas outlined by dividers (e.g., frames). It remains unclear how dividers impact visual search performance. In this study, we manipulated the presence and absence of frames and the number of frames subdividing search displays. Observers searched for a target O among Cs, a typically inefficient search task, and for a target C among Os, a typically efficient search. The results indicated that the presence of divider frames in a search display initially interferes with visual search tasks when targets are quickly detected (i.e., efficient search), leading to early interference; conversely, frames later facilitate visual search in tasks in which targets take longer to detect (i.e., inefficient search), leading to late facilitation. Such interference and facilitation appear only for conditions with a specific number of frames. Relative to previous studies of grouping (due to item proximity or similarity), these findings suggest that frame enclosures of multiple items may induce a grouping effect that influences search performance.
Aging and feature search: the effect of search area.
Burton-Danner, K; Owsley, C; Jackson, G R
2001-01-01
The preattentive system involves the rapid parallel processing of visual information in the visual scene so that attention can be directed to meaningful objects and locations in the environment. This study used the feature search methodology to examine whether there are aging-related deficits in parallel-processing capabilities when older adults are required to visually search a large area of the visual field. Like young subjects, older subjects displayed flat, near-zero slopes for the Reaction Time x Set Size function when searching over a broad area (30 degrees radius) of the visual field, implying parallel processing of the visual display. These same older subjects exhibited impairment in another task, also dependent on parallel processing, performed over the same broad field area; this task, called the useful field of view test, has more complex task demands. Results imply that aging-related breakdowns of parallel processing over a large visual field area are not likely to emerge when required responses are simple, there is only one task to perform, and there is no limitation on visual inspection time.
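The flat RT × Set Size slope described above is conventionally estimated with a least-squares fit of reaction time against display size. A minimal sketch (the function name and all data values are illustrative, not from the study):

```python
def search_slope(set_sizes, reaction_times):
    """Least-squares slope (ms per item) and intercept of the
    RT x Set Size function. A slope near zero is conventionally
    read as parallel ("pop-out") search; slopes of roughly
    20-40 ms/item as serial search."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(reaction_times) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(set_sizes, reaction_times))
    sxx = sum((x - mx) ** 2 for x in set_sizes)
    slope = sxy / sxx
    return slope, my - slope * mx

# Made-up mean RTs (ms) at set sizes 4, 8, 16
flat, _ = search_slope([4, 8, 16], [520, 523, 525])   # near-zero slope
steep, _ = search_slope([4, 8, 16], [540, 660, 900])  # ~30 ms/item
print(round(flat, 2), round(steep, 1))
```

A near-zero slope, as the older subjects showed here, indicates that adding items does not add search time, the behavioral signature of parallel processing.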
Bueichekú, Elisenda; Ventura-Campos, Noelia; Palomar-García, María-Ángeles; Miró-Padilla, Anna; Parcet, María-Antonia; Ávila, César
2015-10-01
Spatiotemporal activity that emerges spontaneously "at rest" has been proposed to reflect individual a priori biases in cognitive processing. This research focused on testing neurocognitive models of visual attention by studying the functional connectivity (FC) of the superior parietal lobule (SPL), given its central role in establishing priority maps during visual search tasks. Twenty-three human participants completed a functional magnetic resonance imaging session that featured a resting-state scan, followed by a visual search task based on the alphanumeric category effect. As expected, the behavioral results showed longer reaction times and more errors for the within-category (i.e., searching a target letter among letters) than the between-category search (i.e., searching a target letter among numbers). The within-category condition was related to greater activation of the superior and inferior parietal lobules, occipital cortex, inferior frontal cortex, dorsal anterior cingulate cortex, and the superior colliculus than the between-category search. The resting-state FC analysis of the SPL revealed a broad network that included connections with the inferotemporal cortex, dorsolateral prefrontal cortex, and dorsal frontal areas like the supplementary motor area and frontal eye field. Notably, the regression analysis revealed that the more efficient participants in the visual search showed stronger FC between the SPL and areas of primary visual cortex (V1) related to the search task. We shed some light on how the SPL establishes a priority map of the environment during visual attention tasks and how FC is a valuable tool for assessing individual differences while performing cognitive tasks.
Visual Search in ASD: Instructed Versus Spontaneous Local and Global Processing.
Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2016-09-01
Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual processing in ASD versus TD children. Experiment 1 revealed both groups to be equally sensitive to the absence or presence of a distractor, regardless of the type of target or type of distractor. Experiment 2 revealed a differential effect of task instruction for ASD compared to TD, regardless of the type of target. Taken together, these results stress the importance of task factors in the study of local-global visual processing in ASD.
Lundqvist, Daniel; Bruce, Neil; Öhman, Arne
2015-01-01
In this article, we examine how emotional and perceptual stimulus factors influence visual search efficiency. In an initial task, we run a visual search task using a large number of target/distractor emotion combinations. In two subsequent tasks, we assess measures of perceptual (rated and computational distances) and emotional (rated valence, arousal, and potency) stimulus properties. In a series of regression analyses, we then explore the degree to which target salience (the size of target/distractor dissimilarities) on these emotional and perceptual measures predicts search efficiency (response times and accuracy) in the visual search task. The results show that both emotional and perceptual stimulus salience contribute to visual search efficiency, and that among the emotional measures, salience on arousal was more influential than salience on valence. The importance of the arousal factor may help explain the contradictory history of results within this field.
Eye Movements Reveal How Task Difficulty Moulds Visual Search
ERIC Educational Resources Information Center
Young, Angela H.; Hulleman, Johan
2013-01-01
In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…
Global Statistical Learning in a Visual Search Task
ERIC Educational Resources Information Center
Jones, John L.; Kaschak, Michael P.
2012-01-01
Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…
Choosing colors for map display icons using models of visual search.
Shive, Joshua; Francis, Gregory
2013-04-01
We show how to choose colors for icons on maps to minimize search time using predictions of a model of visual search. The model analyzes digital images of a search target (an icon on a map) and a search display (the map containing the icon) and predicts search time as a function of target-distractor color distinctiveness and target eccentricity. We parameterized the model using data from a visual search task and performed a series of optimization tasks to test the model's ability to choose colors for icons to minimize search time across icons. Map display designs made by this procedure were tested experimentally. In a follow-up experiment, we examined the model's flexibility to assign colors in novel search situations. The model fits human performance, performs well on the optimization tasks, and can choose colors for icons on maps with novel stimuli to minimize search time without requiring additional model parameter fitting. Models of visual search can suggest color choices that produce search time reductions for display icons. Designers should consider constructing visual search models as a low-cost method of evaluating color assignments.
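The color-choice procedure described above can be sketched as a simple optimization over candidate icon colors. This is an illustrative stand-in, not the authors' fitted model: the functional form, coefficient values, and all colors below are assumptions, with predicted search time falling as target/distractor color distance grows and rising with eccentricity.

```python
import math

def predicted_search_time(target_lab, distractor_lab, eccentricity_deg,
                          base=400.0, k_color=2000.0, k_ecc=15.0):
    """Hypothetical search-time predictor (ms): shorter times for
    larger target/distractor color distance (Euclidean distance in a
    CIELAB-like space), longer times for more eccentric targets.
    Coefficients are illustrative, not fitted values."""
    delta_e = math.dist(target_lab, distractor_lab)
    return base + k_color / (1.0 + delta_e) + k_ecc * eccentricity_deg

def best_icon_color(candidate_labs, distractor_lab, eccentricity_deg):
    """Pick the candidate color that minimizes predicted search time."""
    return min(candidate_labs,
               key=lambda lab: predicted_search_time(lab, distractor_lab,
                                                     eccentricity_deg))

# Usage: choose among three candidate icon colors against a greyish map
candidates = [(50, 10, 10), (60, 60, -20), (55, -40, 45)]
map_color = (55, 5, 5)
print(best_icon_color(candidates, map_color, eccentricity_deg=6.0))
```

The design point carries over even with different model internals: once any model maps (color, eccentricity) to a predicted search time, icon-color assignment reduces to a minimization that needs no further human testing per candidate palette.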
The effects of task difficulty on visual search strategy in virtual 3D displays
Pomplun, Marc; Garaas, Tyler W.; Carrasco, Marisa
2013-01-01
Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an “easy” conjunction search task and a “difficult” shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x−y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the “easy” task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the “difficult” task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539
Visual selective attention in amnestic mild cognitive impairment.
McLaughlin, Paula M; Anderson, Nicole D; Rich, Jill B; Chertkow, Howard; Murtha, Susan J E
2014-11-01
Subtle deficits in visual selective attention have been found in amnestic mild cognitive impairment (aMCI). However, few studies have explored performance on visual search paradigms or the Simon task, which are known to be sensitive to disease severity in Alzheimer's patients. Furthermore, there is limited research investigating how deficiencies can be ameliorated with exogenous support (auditory cues). Sixteen individuals with aMCI and 14 control participants completed 3 experimental tasks that varied in demand and cue availability: visual search-alerting, visual search-orienting, and Simon task. Visual selective attention was influenced by aMCI, auditory cues, and task characteristics. Visual search abilities were relatively consistent across groups. The aMCI participants were impaired on the Simon task when working memory was required, but conflict resolution was similar to controls. Spatially informative orienting cues improved response times, whereas spatially neutral alerting cues did not influence performance. Finally, spatially informative auditory cues benefited the aMCI group more than controls in the visual search task, specifically at the largest array size where orienting demands were greatest. These findings suggest that individuals with aMCI have working memory deficits and subtle deficiencies in orienting attention and rely on exogenous information to guide attention.
Visual Search Performance in Patients with Vision Impairment: A Systematic Review.
Senger, Cassia; Margarido, Maria Rita Rodrigues Alves; De Moraes, Carlos Gustavo; De Fendi, Ligia Issa; Messias, André; Paula, Jayter Silva
2017-11-01
Patients with visual impairment are constantly facing challenges to achieve an independent and productive life, which depends upon both good visual discrimination and search capacities. Given that visual search is a critical skill for several daily tasks and could be used as an index of overall visual function, we investigated the relationship between vision impairment and visual search performance. A comprehensive search was undertaken using electronic PubMed, EMBASE, LILACS, and Cochrane databases from January 1980 to December 2016, applying the following terms: "visual search", "visual search performance", "visual impairment", "visual exploration", "visual field", "hemianopia", "search time", "vision lost", "visual loss", and "low vision". Two hundred seventy-six studies from 12,059 electronic database files were selected, and 40 of them were included in this review. Studies included participants of all ages, both sexes, and the sample sizes ranged from 5 to 199 participants. Visual impairment was associated with worse visual search performance in several ophthalmologic conditions, which were either artificially induced, or related to specific eye and neurological diseases. This systematic review details all the described circumstances interfering with visual search tasks, highlights the need for developing technical standards, and outlines patterns for diagnosis and therapy using visual search capabilities.
The involvement of central attention in visual search is determined by task demands.
Han, Suk Won
2017-04-01
Attention, the mechanism by which a subset of sensory inputs is prioritized over others, operates at multiple processing stages. Specifically, attention enhances weak sensory signals at the perceptual stage, while at the central stage it serves to select appropriate responses or consolidate sensory representations into short-term memory. This study investigated the independence and interaction between perceptual and central attention. To do so, I used a dual-task paradigm, pairing a four-alternative choice task with a visual search task. The results showed that central attention for response selection was engaged in perceptual processing for visual search when the number of search items increased, thereby increasing the demand for serial allocation of focal attention. By contrast, central attention and perceptual attention remained independent as long as the demand for serial shifting of focal attention remained constant; decreasing stimulus contrast or increasing the set size of a parallel search did not evoke the involvement of central attention in visual search. These results suggest that the nature of the concurrent visual search process plays a crucial role in the functional interaction between two different types of attention.
ERIC Educational Resources Information Center
Kemner, Chantal; van Ewijk, Lizet; van Engeland, Herman; Hooge, Ignace
2008-01-01
Subjects with PDD excel on certain visuo-spatial tasks, amongst which visual search tasks, and this has been attributed to enhanced perceptual discrimination. However, an alternative explanation is that subjects with PDD show a different, more effective search strategy. The present study aimed to test both hypotheses, by measuring eye movements…
Alvarez, George A.; Nakayama, Ken; Konkle, Talia
2016-01-01
Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system. NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. 
These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing. PMID:27832600
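The brain/behavior correlation reported above follows the representational-similarity logic of relating pairwise neural (dis)similarity between categories to search times for the same category pairs. A minimal sketch of that correlation step (all numbers are made up; the authors' actual pipeline uses many more category pairs and rank-based statistics):

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One value per target/distractor category pair (illustrative numbers):
neural_dissimilarity = [0.9, 0.7, 0.4, 0.2]  # distinct -> similar responses
search_time_ms = [450, 520, 640, 780]        # fast -> slow search

# A strongly negative r: the more dissimilar two categories' neural
# responses, the faster the search among them, the pattern the study reports.
print(round(pearson_r(neural_dissimilarity, search_time_ms), 2))
```

Because the neural responses were measured with items presented in isolation, a strong correlation of this kind supports the paper's claim that search speed is predictable from the stable, task-independent representational architecture.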
Experimental system for measurement of radiologists' performance by visual search task.
Maeda, Eriko; Yoshikawa, Takeharu; Nakashima, Ryoichi; Kobayashi, Kazufumi; Yokosawa, Kazuhiko; Hayashi, Naoto; Masutani, Yoshitaka; Yoshioka, Naoki; Akahane, Masaaki; Ohtomo, Kuni
2013-01-01
Detection performance of radiologists for "obvious" targets should be evaluated with a visual search task instead of ROC analysis, but visual search tasks have not been applied to radiology studies. The aim of this study was to set up an environment that allows visual search tasks in radiology, to evaluate its feasibility, and to preliminarily investigate the effect of career stage on performance. In a darkroom, ten radiologists were asked to indicate the type of lesion by pressing buttons when images with no lesion, a bulla, a ground-glass nodule, or a solid nodule were randomly presented on a display. Differences in accuracy and reaction times depending on board certification were investigated. The visual search task was performed successfully and feasibly. Radiologists showed high sensitivity, specificity, and positive and negative predictive values in both non-board and board groups. Reaction time was under 1 second for all target types in both groups. Board-certified radiologists were significantly faster in answering for bulla, but there were no significant differences for the other targets and measures. We developed an experimental system that allows visual search experiments in radiology. Reaction time for detection of bulla was shortened with experience.
Parallel Processing in Visual Search Asymmetry
ERIC Educational Resources Information Center
Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin
2004-01-01
The difficulty of visual search may depend on assignment of the same visual elements as targets and distractors-search asymmetry. Easy C-in-O searches and difficult O-in-C searches are often associated with parallel and serial search, respectively. Here, the time course of visual search was measured for both tasks with speed-accuracy methods. The…
Changes in search rate but not in the dynamics of exogenous attention in action videogame players.
Hubert-Wallander, Bjorn; Green, C Shawn; Sugarman, Michael; Bavelier, Daphne
2011-11-01
Many previous studies have shown that the speed of processing in attentionally demanding tasks seems enhanced following habitual action videogame play. However, using one of the diagnostic tasks for efficiency of attentional processing, a visual search task, Castel and collaborators (Castel, Pratt, & Drummond, Acta Psychologica 119:217-230, 2005) reported no difference in visual search rates, instead proposing that action gaming may change response execution time rather than the efficiency of visual selective attention per se. Here we used two hard visual search tasks, one measuring reaction time and the other accuracy, to test whether visual search rate may be changed by action videogame play. We found greater search rates in the gamer group than in the nongamer controls, consistent with increased efficiency in visual selective attention. We then asked how general the change in attentional throughput noted so far in gamers might be by testing whether exogenous attentional cues would lead to a disproportional enhancement in throughput in gamers as compared to nongamers. Interestingly, exogenous cues were found to enhance throughput equivalently between gamers and nongamers, suggesting that not all mechanisms known to enhance throughput are similarly enhanced in action videogamers.
Frontal–Occipital Connectivity During Visual Search
Pantazatos, Spiro P.; Yanagihara, Ted K.; Zhang, Xian; Meitzler, Thomas
2012-01-01
Although expectation- and attention-related interactions between ventral and medial prefrontal cortex and stimulus category-selective visual regions have been identified during visual detection and discrimination, it is not known if similar neural mechanisms apply to other tasks such as visual search. The current work tested the hypothesis that high-level frontal regions, previously implicated in expectation and visual imagery of object categories, interact with visual regions associated with object recognition during visual search. Using functional magnetic resonance imaging, subjects searched for a specific object that varied in size and location within a complex natural scene. A model-free, spatial-independent component analysis isolated multiple task-related components, one of which included visual cortex, as well as a cluster within ventromedial prefrontal cortex (vmPFC), consistent with the engagement of both top-down and bottom-up processes. Analyses of psychophysiological interactions showed increased functional connectivity between vmPFC and object-sensitive lateral occipital cortex (LOC), and results from dynamic causal modeling and Bayesian Model Selection suggested bidirectional connections between vmPFC and LOC that were positively modulated by the task. Using image-guided diffusion-tensor imaging, functionally seeded, probabilistic white-matter tracts between vmPFC and LOC, which presumably underlie this effective interconnectivity, were also observed. These connectivity findings extend previous models of visual search processes to include specific frontal–occipital neuronal interactions during a natural and complex search task. PMID:22708993
ERIC Educational Resources Information Center
Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.
2010-01-01
The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial…
Visual Search Deficits Are Independent of Magnocellular Deficits in Dyslexia
ERIC Educational Resources Information Center
Wright, Craig M.; Conlon, Elizabeth G.; Dyck, Murray
2012-01-01
The aim of this study was to investigate the theory that visual magnocellular deficits seen in groups with dyslexia are linked to reading via the mechanisms of visual attention. Visual attention was measured with a serial search task and magnocellular function with a coherent motion task. A large group of children with dyslexia (n = 70) had slower…
Measuring Search Efficiency in Complex Visual Search Tasks: Global and Local Clutter
ERIC Educational Resources Information Center
Beck, Melissa R.; Lohrenz, Maura C.; Trafton, J. Gregory
2010-01-01
Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e.,…
McCrea, Simon M.; Robinson, Thomas P.
2011-01-01
In this study, five consecutive patients with focal strokes and/or cortical excisions were examined with the Wechsler Adult Intelligence Scale and Wechsler Memory Scale—Fourth Editions along with a comprehensive battery of other neuropsychological tasks. All five of the lesions were large and typically involved frontal, temporal, and/or parietal lobes and were lateralized to one hemisphere. The clinical case method was used to determine the cognitive neuropsychological correlates of mental rotation (Visual Puzzles), Piagetian balance beam (Figure Weights), and visual search (Cancellation) tasks. The pattern of results on Visual Puzzles and Figure Weights suggested that both subtests involve predominately right frontoparietal networks involved in visual working memory. It appeared that Visual Puzzles could also critically rely on the integrity of the left temporoparietal junction. The left temporoparietal junction could be involved in temporal ordering and integration of local elements into a nonverbal gestalt. In contrast, the Figure Weights task appears to critically involve the right temporoparietal junction involved in numerical magnitude estimation. Cancellation was sensitive to left frontotemporal lesions and not right posterior parietal lesions typical of other visual search tasks. In addition, the Cancellation subtest was sensitive to verbal search strategies and perhaps object-based attention demands, thereby constituting a unique task in comparison with previous visual search tasks. PMID:22389807
Does constraining memory maintenance reduce visual search efficiency?
Buttaccio, Daniel R; Lange, Nicholas D; Thomas, Rick P; Dougherty, Michael R
2018-03-01
We examine whether constraining memory retrieval processes affects performance in a cued recall visual search task. In the visual search task, participants are first presented with a memory prompt followed by a search array. The memory prompt provides diagnostic information regarding a critical aspect of the target (its colour). We assume that upon the presentation of the memory prompt, participants retrieve and maintain hypotheses (i.e., potential target characteristics) in working memory in order to improve their search efficiency. By constraining retrieval through the manipulation of time pressure (Experiments 1A and 1B) or a concurrent working memory task (Experiments 2A, 2B, and 2C), we directly test the involvement of working memory in visual search. We find some evidence that visual search is less efficient under conditions in which participants were likely to be maintaining fewer hypotheses in working memory (Experiments 1A, 2A, and 2C), suggesting that the retrieval of representations from long-term memory into working memory can improve visual search. However, these results should be interpreted with caution, as the data from two experiments (Experiments 1B and 2B) did not lend support for this conclusion.
Visual search deficits in amblyopia.
Tsirlin, Inna; Colpa, Linda; Goltz, Herbert C; Wong, Agnes M F
2018-04-01
Amblyopia is a neurodevelopmental disorder defined as a reduction in visual acuity that cannot be corrected by optical means. It has been associated with low-level deficits. However, research has demonstrated a link between amblyopia and visual attention deficits in counting, tracking, and identifying objects. Visual search is a useful tool for assessing visual attention but has not been well studied in amblyopia. Here, we assessed the extent of visual search deficits in amblyopia using feature and conjunction search tasks. We compared the performance of participants with amblyopia (n = 10) to those of controls (n = 12) on both feature and conjunction search tasks using Gabor patch stimuli, varying spatial bandwidth and orientation. To account for the low-level deficits inherent in amblyopia, we measured individual contrast and crowding thresholds and monitored eye movements. The display elements were then presented at suprathreshold levels to ensure that visibility was equalized across groups. There was no performance difference between groups on feature search, indicating that our experimental design controlled successfully for low-level amblyopia deficits. In contrast, during conjunction search, median reaction times and reaction time slopes were significantly larger in participants with amblyopia compared with controls. Amblyopia differentially affects performance on conjunction visual search, a more difficult task that requires feature binding and possibly the involvement of higher-level attention processes. Deficits in visual search may affect day-to-day functioning in people with amblyopia.
van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J
2017-08-01
Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. Databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, and did not find a significant effect on perceptual performance. 
Eye tracking literature in radiology indicates several search patterns are related to high levels of expertise, but teaching novices to search as an expert may not be effective. Experimental research is needed to find out which search strategies can improve image perception in learners.
Lindor, Ebony; Rinehart, Nicole; Fielding, Joanne
2018-05-22
Individuals with Autism Spectrum Disorder (ASD) often excel on visual search and crowding tasks; however, inconsistent findings suggest that this 'islet of ability' may not be characteristic of the entire spectrum. We examined whether performance on these tasks changed as a function of motor proficiency in children with varying levels of ASD symptomology. Children with high ASD symptomology outperformed all others on complex visual search tasks, but only if their motor skills were rated at, or above, age expectations. For the visual crowding task, children with high ASD symptomology and superior motor skills exhibited enhanced target discrimination, whereas those with high ASD symptomology but poor motor skills experienced deficits. These findings may resolve some of the discrepancies in the literature.
MacLean, Mary H; Giesbrecht, Barry
2015-07-01
Task-relevant and physically salient features influence visual selective attention. In the present study, we investigated the influence of task-irrelevant and physically nonsalient reward-associated features on visual selective attention. Two hypotheses were tested: One predicts that the effects of target-defining task-relevant and task-irrelevant features interact to modulate visual selection; the other predicts that visual selection is determined by the independent combination of relevant and irrelevant feature effects. These alternatives were tested using a visual search task that contained multiple targets, placing a high demand on the need for selectivity, and that was data-limited and required unspeeded responses, emphasizing early perceptual selection processes. One week prior to the visual search task, participants completed a training task in which they learned to associate particular colors with a specific reward value. In the search task, the reward-associated colors were presented surrounding targets and distractors, but were neither physically salient nor task-relevant. In two experiments, the irrelevant reward-associated features influenced performance, but only when they were presented in a task-relevant location. The costs induced by the irrelevant reward-associated features were greater when they oriented attention to a target than to a distractor. In a third experiment, we examined the effects of selection history in the absence of reward history and found that the interaction between task relevance and selection history differed, relative to when the features had previously been associated with reward. The results indicate that under conditions that demand highly efficient perceptual selection, physically nonsalient task-irrelevant and task-relevant factors interact to influence visual selective attention.
Accurate expectancies diminish perceptual distraction during visual search
Sy, Jocelyn L.; Guerin, Scott A.; Stegman, Anna; Giesbrecht, Barry
2014-01-01
The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively “spills-over” to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations using behavior, functional magnetic resonance imaging, and electrophysiology. Expectations were generated using a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean blood oxygenation level dependent responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information. PMID:24904374
The role of extra-foveal processing in 3D imaging
NASA Astrophysics Data System (ADS)
Eckstein, Miguel P.; Lago, Miguel A.; Abbey, Craig K.
2017-03-01
The field of medical image quality has relied on the assumption that metrics of image quality for simple visual detection tasks are a reliable proxy for the more clinically realistic visual search tasks. Rank order of signal detectability across conditions often generalizes from detection to search tasks. Here, we argue that search in 3D images represents a paradigm shift in medical imaging: radiologists typically cannot exhaustively scrutinize all regions of interest with the high-acuity fovea, requiring detection of signals with extra-foveal areas (visual periphery) of the human retina. We hypothesize that extra-foveal processing can alter the detectability of certain types of signals in medical images, with important implications for search in 3D medical images. We compare visual search of two different types of signals in 2D vs. 3D images. We show that a small microcalcification-like signal is more detectable than a larger mass-like signal in 2D search, but its detectability decreases markedly (relative to the larger signal) in the 3D search task. Utilizing measurements of observer detectability as a function of retinal eccentricity, together with observer eye fixations, we can predict the pattern of results in the 2D and 3D search studies. Our findings: 1) suggest that observer performance findings with 2D search might not always generalize to 3D search; 2) motivate the development of a new family of model observers that take into account the inhomogeneous visual processing across the retina (foveated model observers).
Casual Video Games as Training Tools for Attentional Processes in Everyday Life.
Stroud, Michael J; Whitbourne, Susan Krauss
2015-11-01
Three experiments examined the attentional components of the popular match-3 casual video game, Bejeweled Blitz (BJB). Attentionally demanding, BJB is highly popular among adults, particularly those in middle and later adulthood. In experiment 1, 54 older adults (Mage = 70.57) and 33 younger adults (Mage = 19.82) played 20 rounds of BJB, and completed online tasks measuring reaction time, simple visual search, and conjunction visual search. Prior experience significantly predicted BJB scores for younger adults, but for older adults, both prior experience and simple visual search task scores predicted BJB performance. Experiment 2 tested whether BJB practice alone would result in a carryover benefit to a visual search task in a sample of 58 young adults (Mage = 19.57) who completed 0, 10, or 30 rounds of BJB followed by a BJB-like visual search task with targets present or absent. Reaction times were significantly faster for participants who completed 30 but not 10 rounds of BJB compared with the search task only. This benefit was evident when targets were both present and absent, suggesting that playing BJB improves not only target detection, but also the ability to quit search effectively. Experiment 3 tested whether the attentional benefit in experiment 2 would apply to non-BJB stimuli. The results revealed a similar numerical but not significant trend. Taken together, the findings suggest there are benefits of casual video game playing to attention and relevant everyday skills, and that these games may have potential value as training tools.
Comparing visual search and eye movements in bilinguals and monolinguals
Hout, Michael C.; Walenchok, Stephen C.; Azuma, Tamiko; Goldinger, Stephen D.
2017-01-01
Recent research has suggested that bilinguals show advantages over monolinguals in visual search tasks, although these findings have been derived from global behavioral measures of accuracy and response times. In the present study we sought to explore the bilingual advantage by using more sensitive eyetracking techniques across three visual search experiments. These spatially and temporally fine-grained measures allowed us to carefully investigate any nuanced attentional differences between bilinguals and monolinguals. Bilingual and monolingual participants completed visual search tasks that varied in difficulty. The experiments required participants to make careful discriminations in order to detect target Landolt Cs among similar distractors. In Experiment 1, participants performed both feature and conjunction search. In Experiments 2 and 3, participants performed visual search while making different types of speeded discriminations, after either locating the target or mentally updating a constantly changing target. The results across all experiments revealed that bilinguals and monolinguals were equally efficient at guiding attention and generating responses. These findings suggest that the bilingual advantage does not reflect a general benefit in attentional guidance, but could reflect more efficient guidance only under specific task demands. PMID:28508116
Huang, Liqiang
2015-05-01
Basic visual features (e.g., color, orientation) are assumed to be processed in the same general way across different visual tasks. Here, a significant deviation from this assumption was predicted on the basis of an analysis of stimulus spatial structure, as characterized by the Boolean-map notion. If a task requires memorizing the orientations of a set of bars, then the map consisting of those bars can be readily used to hold the overall structure in memory and will thus be especially useful. If the task requires visual search for a target, then the map, which contains only an overall structure, will be of little use. Supporting these predictions, the present study demonstrated that in comparison to stimulus colors, bar orientations were processed more efficiently in change-detection tasks but less efficiently in visual search tasks (Cohen's d = 4.24). In addition to offering support for the role of the Boolean map in conscious access, the present work also casts doubt on the generality of processing visual features.
Task specificity of attention training: the case of probability cuing
Jiang, Yuhong V.; Swallow, Khena M.; Won, Bo-Yeong; Cistera, Julia D.; Rosenbaum, Gail M.
2014-01-01
Statistical regularities in our environment enhance perception and modulate the allocation of spatial attention. Surprisingly little is known about how learning-induced changes in spatial attention transfer across tasks. In this study, we investigated whether a spatial attentional bias learned in one task transfers to another. Most of the experiments began with a training phase in which a search target was more likely to be located in one quadrant of the screen than in the other quadrants. An attentional bias toward the high-probability quadrant developed during training (probability cuing). In a subsequent testing phase, the target's location distribution became random. In addition, the training and testing phases were based on different tasks. Probability cuing did not transfer between visual search and a foraging-like task. However, it did transfer between various types of visual search tasks that differed in stimuli and difficulty. These data suggest that different visual search tasks share a common and transferable learned attentional bias. However, this bias is not shared by high-level, decision-making tasks such as foraging. PMID:25113853
The effect of spectral filters on visual search in stroke patients.
Beasley, Ian G; Davies, Leon N
2013-01-01
Visual search impairment can occur following stroke. The utility of optimal spectral filters on visual search in stroke patients has not been considered to date. The present study measured the effect of optimal spectral filters on visual search response time and accuracy, using a task requiring serial processing. A stroke and control cohort undertook the task three times: (i) using an optimally selected spectral filter; (ii) the subjects were randomly assigned to two groups with group 1 using an optimal filter for two weeks, whereas group 2 used a grey filter for two weeks; (iii) the groups were crossed over with group 1 using a grey filter for a further two weeks and group 2 given an optimal filter, before undertaking the task for the final time. Initial use of an optimal spectral filter improved visual search response time but not error scores in the stroke cohort. Prolonged use of neither an optimal nor a grey filter improved response time or reduced error scores. In fact, response times increased with the filter, regardless of its type, for stroke and control subjects; this outcome may be due to contrast reduction or a reflection of task design, given that significant practice effects were noted.
Chen, Fu-Chen; Chen, Hsin-Lin; Tu, Jui-Hung; Tsai, Chia-Liang
2015-09-01
People often multi-task in their daily life. However, the mechanisms for the interaction between simultaneous postural and non-postural tasks have been controversial over the years. The present study investigated the effects of light digital touch on both postural sway and visual search accuracy in order to assess two hypotheses (functional integration and resource competition) that may explain the interaction between postural sway and the performance of a non-postural task. Participants (n=42, 20 male and 22 female) were asked to inspect a blank sheet of paper or visually search for target letters in a text block while a fingertip was in light contact with a stable surface (light touch, LT), or with both arms hanging at the sides of the body (no touch, NT). The results showed significant main effects of LT on reducing the magnitude of postural sway as well as enhancing visual search accuracy compared with the NT condition. The findings support the functional integration hypothesis, demonstrating that postural sway can be modulated to improve the performance of a visual search task.
Bucci, Maria Pia; Nassibi, Naziha; Gerard, Christophe-Loic; Bui-Quoc, Emmanuel; Seassau, Magali
2012-01-01
Studies comparing binocular eye movements during reading and visual search in dyslexic children are, to our knowledge, nonexistent. In the present study we examined ocular motor characteristics in dyslexic children versus two groups of non-dyslexic children, matched on chronological age or reading age. Binocular eye movements were recorded by an infrared system (mobileEBT®, e(ye)BRAIN) in twelve dyslexic children (mean age 11 years old) and a group of chronological age-matched (N = 9) and reading age-matched (N = 10) non-dyslexic children. Two visual tasks were used: text reading and visual search. Independently of the task, the ocular motor behavior in dyslexic children is similar to that reported in reading age-matched non-dyslexic children: more numerous and longer fixations, as well as poor binocular coordination during and after the saccades. In contrast, chronological age-matched non-dyslexic children showed fewer and shorter fixations in the reading task than in the visual search task; furthermore, their saccades were well yoked in both tasks. The atypical eye-movement patterns observed in dyslexic children suggest a deficiency in visual attentional processing as well as an immaturity of the interaction between the ocular motor saccade and vergence systems. PMID:22438934
Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel
2016-01-01
When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
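The model's core computation (target-specific gain modulation of feature maps, followed by divisive normalization into a priority map whose maximum is the attended location) can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the box-shaped normalization pool, the 5x5 display, and all feature and gain values are assumptions.

```python
import numpy as np

def box_pool(img, radius):
    """Mean activity in a (2*radius+1)^2 neighborhood, edge-padded."""
    padded = np.pad(img, radius, mode="edge")
    H, W = img.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + 2 * radius + 1,
                               j:j + 2 * radius + 1].mean()
    return out

def priority_map(feature_maps, target_gains, radius=1, eps=1e-6):
    # Top-down modulation: weight each feature channel by its similarity
    # to the target, then sum into a single drive map.
    drive = (target_gains[:, None, None] * feature_maps).sum(axis=0)
    # Divisive normalization: dividing by locally pooled activity keeps a
    # very salient location from monopolizing the map.
    norm = box_pool(feature_maps.sum(axis=0), radius)
    return drive / (norm + eps)

# Toy 5x5 display with two feature channels ("red", "vertical").
fm = np.zeros((2, 5, 5))
fm[0, 1, 1] = 5.0   # highly salient distractor: strongly red
fm[0, 3, 3] = 1.0   # target: modestly red...
fm[1, 3, 3] = 1.0   # ...and vertical
gains = np.array([0.2, 1.0])   # target template favors "vertical"

pmap = priority_map(fm, gains)
locus = np.unravel_index(np.argmax(pmap), pmap.shape)  # locus of attention
print(locus)
```

Without the normalization step, the strongly red distractor would dominate the drive map; with it, the weakly salient but target-matching location wins, which is the behavior the abstract describes.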
Neural correlates of context-dependent feature conjunction learning in visual search tasks.
Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U
2016-06-01
Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.
Hand Movement Deviations in a Visual Search Task with Cross Modal Cuing
ERIC Educational Resources Information Center
Aslan, Asli; Aslan, Hurol
2007-01-01
The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants'…
Fractal fluctuations in gaze speed visual search.
Stephen, Damian G; Anastas, Jason
2011-04-01
Visual search involves a subtle coordination of visual memory and lower-order perceptual mechanisms. Specifically, the fluctuations in gaze may provide support for visual search above and beyond what may be attributed to memory. Prior research indicates that gaze during search exhibits fractal fluctuations, which allow for a wide sampling of the field of view. Fractal fluctuations constitute a case of fast diffusion that may provide an advantage in exploration. We present reanalyses of eye-tracking data collected by Stephen and Mirman (Cognition, 115, 154-165, 2010) for single-feature and conjunction search tasks. Fluctuations in gaze during these search tasks were indeed fractal. Furthermore, the degree of fractality predicted decreases in reaction time on a trial-by-trial basis. We propose that fractality may play a key role in explaining the efficacy of perceptual exploration.
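The fractal scaling described above is commonly quantified with detrended fluctuation analysis (DFA), whose scaling exponent distinguishes uncorrelated fluctuations (exponent near 0.5) from strongly persistent ones (near 1.5). A minimal first-order DFA sketch follows; the window sizes and the synthetic test signals are illustrative, not the study's gaze data.

```python
import numpy as np

def dfa_alpha(x, scales):
    """Detrended fluctuation analysis: slope of log F(n) versus log n."""
    y = np.cumsum(x - np.mean(x))          # integrated profile of the series
    F = []
    for n in scales:
        t = np.arange(n)
        rms = []
        for w in range(len(y) // n):       # non-overlapping windows of size n
            seg = y[w * n:(w + 1) * n]
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear fit
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        F.append(np.mean(rms))             # mean detrended fluctuation
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)   # uncorrelated noise: alpha near 0.5
brown = np.cumsum(white)            # integrated noise: alpha near 1.5
scales = [16, 32, 64, 128, 256]
a_white = dfa_alpha(white, scales)
a_brown = dfa_alpha(brown, scales)
print(round(a_white, 2), round(a_brown, 2))
```

Applied to gaze-speed series, exponents between these extremes would indicate the kind of persistent, fast-diffusing fluctuations the abstract links to efficient exploration.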
Classification of visual and linguistic tasks using eye-movement features.
Coco, Moreno I; Keller, Frank
2014-03-07
The role of the task has received special attention in visual-cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al. as well as additional features from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrated that eye-movement responses make it possible to characterize the goals of these tasks. Then, we trained three different types of classifiers and predicted the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, i.e., initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigated. Overall, the best task classification performance was obtained with a set of seven features that included both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description).
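The classification pipeline described (eye-movement features in, task label out) can be illustrated with a deliberately simple stand-in: leave-one-out nearest-centroid classification on synthetic features. The per-task feature means and spreads below are invented for the sketch; only the feature names (initiation time, fixation duration, entropy of attention allocation) come from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# One synthetic feature vector per trial:
# [initiation time (ms), mean fixation duration (ms), entropy].
def make_trials(mean, n=50):
    return rng.normal(mean, [20.0, 15.0, 0.3], size=(n, 3))

tasks = {
    "search":      [180, 210, 2.8],   # hypothetical task signatures
    "naming":      [260, 250, 2.2],
    "description": [320, 300, 1.6],
}
X = np.vstack([make_trials(m) for m in tasks.values()])
y = np.repeat(list(tasks.keys()), 50)
Xz = (X - X.mean(axis=0)) / X.std(axis=0)   # z-score each feature

# Leave-one-out nearest-centroid classification.
correct = 0
for i in range(len(Xz)):
    mask = np.arange(len(Xz)) != i
    centroids = {t: Xz[mask & (y == t)].mean(axis=0) for t in tasks}
    pred = min(centroids, key=lambda t: np.linalg.norm(Xz[i] - centroids[t]))
    correct += pred == y[i]
accuracy = correct / len(Xz)
print(accuracy)
```

With well-separated task signatures this toy classifier performs far above the 1/3 chance level, mirroring the above-chance accuracies the study reports with richer feature sets and classifiers.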
Hybrid foraging search: Searching for multiple instances of multiple types of target.
Wolfe, Jeremy M; Aizenman, Avigael M; Boettcher, Sage E P; Cain, Matthew S
2016-02-01
This paper introduces the "hybrid foraging" paradigm. In typical visual search tasks, observers search for one instance of one target among distractors. In hybrid search, observers search through visual displays for one instance of any of several types of target held in memory. In foraging search, observers collect multiple instances of a single target type from visual displays. Combining these paradigms, in hybrid foraging tasks observers search visual displays for multiple instances of any of several types of target (as might be the case in searching the kitchen for dinner ingredients or an X-ray for different pathologies). In the present experiment, observers held 8-64 target objects in memory. They viewed displays of 60-105 randomly moving photographs of objects and used the computer mouse to collect multiple targets before choosing to move to the next display. Rather than selecting at random among available targets, observers tended to collect items in runs of one target type. Reaction time (RT) data indicate that searching again for the same item is more efficient than searching for any of the other targets held in memory. Observers were trying to maximize collection rate. As a result, and consistent with optimal foraging theory, they tended to leave 25-33% of targets uncollected when moving to the next screen/patch. The pattern of RTs shows that while observers were collecting a target item, they had already begun searching memory and the visual display for additional targets, making the hybrid foraging task a useful way to investigate the interaction of visual and memory search.
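The patch-leaving behavior reported here (leaving targets uncollected to maximize collection rate) follows the logic of the marginal value theorem from optimal foraging theory. A minimal sketch under assumed parameters: a toy depletion model in which each successive target takes longer to find, plus a fixed travel time between displays. Neither number comes from the experiment.

```python
# Illustrative sketch of the rate-maximizing patch-leaving intuition.

def time_to_next(remaining, total, base=1.0):
    # Serial-search intuition: as the display depletes, each remaining
    # target takes proportionally longer to find.
    return base * total / remaining

def best_leaving_point(total, travel_time):
    """Collected count that maximizes targets collected per unit time."""
    best_k, best_rate = 0, 0.0
    elapsed = travel_time            # cost of moving to a fresh display
    for k in range(1, total + 1):
        elapsed += time_to_next(total - k + 1, total)
        rate = k / elapsed           # overall targets per unit time so far
        if rate > best_rate:
            best_k, best_rate = k, rate
    return best_k

total = 12
k = best_leaving_point(total, travel_time=10.0)
print(k, round(1 - k / total, 2))   # targets collected, fraction left behind
```

Under these toy parameters the rate-maximizing searcher abandons the display with a third of the targets uncollected, the same qualitative pattern (25-33% left behind) the observers showed.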
Selective maintenance in visual working memory does not require sustained visual attention.
Hollingworth, Andrew; Maxcey-Richard, Ashleigh M
2013-08-01
In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change-detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention.
Feldmann-Wüstefeld, Tobias; Uengoer, Metin; Schubö, Anna
2015-11-01
Besides visual salience and observers' current intention, prior learning experience may influence deployment of visual attention. Associative learning models postulate that observers pay more attention to stimuli previously experienced as reliable predictors of specific outcomes. To investigate the impact of learning experience on deployment of attention, we combined an associative learning task with a visual search task and measured event-related potentials of the EEG as neural markers of attention deployment. In the learning task, participants categorized stimuli varying in color/shape with only one dimension being predictive of category membership. In the search task, participants searched for a shape target while disregarding irrelevant color distractors. Behavioral results showed that color distractors impaired performance to a greater degree when color rather than shape was predictive in the learning task. Neurophysiological results show that the amplified distraction was due to differential attention deployment (N2pc). Experiment 2 showed that when color was predictive for learning, color distractors captured more attention in the search task (ND component) and more suppression of the color distractor was required (PD component). The present results thus demonstrate that priority in visual attention is biased toward predictive stimuli, which allows learning experience to shape selection. We also show that learning experience can overrule strong top-down control (blocked tasks, Experiment 3) and that learning experience has a longer-term effect on attention deployment (tasks on two successive days, Experiment 4).
The wisdom of crowds for visual search
Juni, Mordechai Z.; Eckstein, Miguel P.
2017-01-01
Decision-making accuracy typically increases through collective integration of people’s judgments into group decisions, a phenomenon known as the wisdom of crowds. For simple perceptual laboratory tasks, classic signal detection theory specifies the upper limit for collective integration benefits obtained by weighted averaging of people’s confidences, and simple majority voting can often approximate that limit. Life-critical perceptual decisions often involve searching large image data (e.g., medical, security, and aerial imagery), but the expected benefits and merits of using different pooling algorithms are unknown for such tasks. Here, we show that expected pooling benefits are significantly greater for visual search than for single-location perceptual tasks and the prediction given by classic signal detection theory. In addition, we show that simple majority voting obtains inferior accuracy benefits for visual search relative to averaging and weighted averaging of observers’ confidences. Analysis of gaze behavior across observers suggests that the greater collective integration benefits for visual search arise from an interaction between the foveated properties of the human visual system (high foveal acuity and low peripheral acuity) and observers’ nonexhaustive search patterns, and can be predicted by an extended signal detection theory framework with trial-to-trial sampling from a varying mixture of high and low target detectabilities across observers (SDT-MIX). These findings advance our theoretical understanding of how to predict and enhance the wisdom of crowds for real world search tasks and could apply more generally to any decision-making task for which the minority of group members with high expertise varies from decision to decision. PMID:28490500
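The advantage of confidence averaging over majority voting under mixed detectabilities can be illustrated with a small signal detection simulation. The d' mixture, the common criterion, and the trial counts below are illustrative assumptions in the spirit of the SDT-MIX account, not the study's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_obs = 4000, 7

present = rng.random(n_trials) < 0.5
# On each trial, each observer's detectability is drawn from a mixture of
# low and high d' (e.g., only observers who fixated near the target see
# it well), as in the SDT-MIX idea.
dprime = rng.choice([0.2, 2.5], size=(n_trials, n_obs), p=[0.6, 0.4])
# SDT decision variable: mean 0 when absent, mean d' when present.
conf = rng.standard_normal((n_trials, n_obs)) + present[:, None] * dprime

c = 0.56  # common decision criterion (roughly half the mean effective d')

votes = conf > c
maj = votes.sum(axis=1) > n_obs / 2   # pooling rule 1: simple majority vote
avg = conf.mean(axis=1) > c           # pooling rule 2: average confidences

acc_vote = np.mean(maj == present)
acc_avg = np.mean(avg == present)
print(round(acc_vote, 3), round(acc_avg, 3))
```

Binarizing into votes discards the strong evidence carried by the few high-detectability observers on each trial; averaging the graded confidences preserves it, so averaging outperforms majority voting here, matching the paper's qualitative result.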
Do People Take Stimulus Correlations into Account in Visual Search (Open Source)
2016-03-10
Bhardwaj, Manisha; van den Berg, Ronald; Ma, Wei Ji
… visual search experiments, distractors are often statistically independent of each other. However, stimuli in more naturalistic settings are often … contribute to bridging the gap between artificial and natural visual search tasks. …
Context matters: the structure of task goals affects accuracy in multiple-target visual search.
Clark, Kait; Cain, Matthew S; Adcock, R Alison; Mitroff, Stephen R
2014-05-01
Career visual searchers such as radiologists and airport security screeners strive to conduct accurate visual searches, but despite extensive training, errors still occur. A key difference between searches in radiology and airport security is the structure of the search task: Radiologists typically scan a certain number of medical images (fixed objective), and airport security screeners typically search X-rays for a specified time period (fixed duration). Might these structural differences affect accuracy? We compared performance on a search task administered either under constraints that approximated radiology or airport security. Some displays contained more than one target because the presence of multiple targets is an established source of errors for career searchers, and accuracy for additional targets tends to be especially sensitive to contextual conditions. Results indicate that participants searching within the fixed objective framework produced more multiple-target search errors; thus, adopting a fixed duration framework could improve accuracy for career searchers.
Drew, Trafton; Boettcher, Sage E P; Wolfe, Jeremy M
2016-02-01
In "hybrid search" tasks, such as finding items on a grocery list, one must search the scene for targets while also searching the list in memory. How is the representation of a visual item compared with the representations of items in the memory set? Predominant theories would propose a role for visual working memory (VWM) either as the site of the comparison or as a conduit between visual and memory systems. In seven experiments, we loaded VWM in different ways and found little or no effect on hybrid search performance. However, the presence of a hybrid search task did reduce the measured capacity of VWM by a constant amount regardless of the size of the memory or visual sets. These data are broadly consistent with an account in which VWM must dedicate a fixed amount of its capacity to passing visual representations to long-term memory for comparison to the items in the memory set. The data cast doubt on models in which the search template resides in VWM or where memory set item representations are moved from LTM through VWM to earlier areas for comparison to visual items.
"Hot" Facilitation of "Cool" Processing: Emotional Distraction Can Enhance Priming of Visual Search
ERIC Educational Resources Information Center
Kristjansson, Arni; Oladottir, Berglind; Most, Steven B.
2013-01-01
Emotional stimuli often capture attention and disrupt effortful cognitive processing. However, cognitive processes vary in the degree to which they require effort. We investigated the impact of emotional pictures on visual search and on automatic priming of search. Observers performed visual search after task-irrelevant neutral or emotionally…
Interrupted Visual Searches Reveal Volatile Search Memory
ERIC Educational Resources Information Center
Shen, Y. Jeremy; Jiang, Yuhong V.
2006-01-01
This study investigated memory from interrupted visual searches. Participants conducted a change detection search task on polygons overlaid on scenes. Search was interrupted by various disruptions, including unfilled delay, passive viewing of other scenes, and additional search on new displays. Results showed that performance was unaffected by…
Mental workload while driving: effects on visual search, discrimination, and decision making.
Recarte, Miguel A; Nunes, Luis M
2003-06-01
The effects of mental workload on visual search and decision making were studied in real traffic conditions with 12 participants who drove an instrumented car. Mental workload was manipulated by having participants perform several mental tasks while driving. A simultaneous visual-detection and discrimination test was used as the performance criterion. Mental tasks produced spatial gaze concentration and visual-detection impairment, although no tunnel vision occurred. According to ocular behavior analysis, this impairment was due to late detection and poor identification more than to response selection. Verbal acquisition tasks were innocuous compared with production tasks, and complex conversations, whether by phone or with a passenger, are dangerous for road safety.
Acute exercise and aerobic fitness influence selective attention during visual search.
Bullock, Tom; Giesbrecht, Barry
2014-01-01
Successful goal-directed behavior relies on a human attention system that is flexible and able to adapt to different conditions of physiological stress. However, the effects of physical activity on multiple aspects of selective attention, and whether these effects are mediated by aerobic capacity, remain unclear. The aim of the present study was to investigate the effects of a prolonged bout of physical activity on visual search performance and perceptual distraction. Two groups of participants completed a hybrid visual search flanker/response competition task in an initial baseline session and then at 17-min intervals over a 2 h 16 min test period. Participants assigned to the exercise group engaged in steady-state aerobic exercise between completing blocks of the visual task, whereas participants assigned to the control group rested in between blocks. The key result was a correlation between individual differences in aerobic capacity and visual search performance, such that those individuals who were more fit performed the search task more quickly. Critically, this relationship only emerged in the exercise group after the physical activity had begun. The relationship was not present in either group at baseline and never emerged in the control group during the test period, suggesting that under these task demands, aerobic capacity may be an important determinant of visual search performance under physical stress. The results enhance current understanding about the relationship between exercise and cognition, and also inform current models of selective attention.
Task relevance modulates the cortical representation of feature conjunctions in the target template.
Reeder, Reshanne R; Hanke, Michael; Pollmann, Stefan
2017-07-03
Little is known about the cortical regions involved in representing task-related content in preparation for visual task performance. Here we used representational similarity analysis (RSA) to investigate the BOLD response pattern similarity between task-relevant and task-irrelevant feature dimensions during conjunction viewing and target template maintenance prior to visual search. Subjects were cued to search for a spatial frequency (SF) or orientation of a Gabor grating, and we measured the BOLD signal during cue and delay periods before the onset of a search display. RSA of delay-period activity revealed that widespread regions in frontal, posterior parietal, and occipitotemporal cortices showed general representational differences between task-relevant and task-irrelevant dimensions (e.g., orientation vs. SF). In contrast, RSA of cue-period activity revealed sensory-related representational differences between cue images (regardless of task) at the occipital pole and additionally in the frontal pole. Our data show that task and sensory information are represented differently during viewing and during target template maintenance, and that task relevance modulates the representation of visual information across the cortex.
Selective Maintenance in Visual Working Memory Does Not Require Sustained Visual Attention
Hollingworth, Andrew; Maxcey-Richard, Ashleigh M.
2012-01-01
In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention. PMID:23067118
Saito, Atsuko; Hamada, Hiroki; Kikusui, Takefumi; Mogi, Kazutaka; Nagasawa, Miho; Mitsui, Shohei; Higuchi, Takashi; Hasegawa, Toshikazu; Hiraki, Kazuo
2014-01-01
The neuropeptide oxytocin plays a central role in prosocial and parental behavior in non-human mammals as well as humans. It has been suggested that oxytocin may affect visual processing of infant faces and emotional reaction to infants. Healthy male volunteers (N = 13) were tested for their ability to detect infant or adult faces among adult or infant faces (facial visual search task). Urine samples were collected from all participants before the study to measure the concentration of oxytocin. Urinary oxytocin positively correlated with performance in the facial visual search task. However, task performance and its correlation with oxytocin concentration did not differ between infant faces and adult faces. Our data suggest that endogenous oxytocin is related to facial visual cognition but does not promote infant-specific responses in unmarried men who are not fathers.
Task-relevant information is prioritized in spatiotemporal contextual cueing.
Higuchi, Yoko; Ueda, Yoshiyuki; Ogawa, Hirokazu; Saiki, Jun
2016-11-01
Implicit learning of visual contexts facilitates search performance, a phenomenon known as contextual cueing; however, little is known about contextual cueing in situations in which multidimensional regularities exist simultaneously. In everyday vision, different kinds of information, such as object identity and location, appear simultaneously and interact with each other. We tested the hypothesis that, in contextual cueing, when multiple regularities are present, the regularities that are most relevant to our behavioral goals would be prioritized. Previous studies of contextual cueing have commonly used the visual search paradigm. However, this paradigm is not suitable for directing participants' attention to a particular regularity. Therefore, we developed a new paradigm, the "spatiotemporal contextual cueing paradigm," and manipulated task-relevant and task-irrelevant regularities. In four experiments, we demonstrated that task-relevant regularities were more responsible for search facilitation than task-irrelevant regularities. This finding suggests our visual behavior is focused on regularities that are relevant to our current goal.
ERIC Educational Resources Information Center
Eimer, Martin; Kiss, Monika; Nicholas, Susan
2011-01-01
When target-defining features are specified in advance, attentional target selection in visual search is controlled by preparatory top-down task sets. We used ERP measures to study voluntary target selection in the absence of such feature-specific task sets, and to compare it to selection that is guided by advance knowledge about target features.…
Memory under pressure: secondary-task effects on contextual cueing of visual search.
Annac, Efsun; Manginelli, Angela A; Pollmann, Stefan; Shi, Zhuanghua; Müller, Hermann J; Geyer, Thomas
2013-11-04
Repeated display configurations improve visual search. Recently, the question has arisen whether this contextual cueing effect (Chun & Jiang, 1998) is itself mediated by attention, both in terms of selectivity and processing resources deployed. While it is accepted that selective attention modulates contextual cueing (Jiang & Leung, 2005), there is an ongoing debate whether the cueing effect is affected by a secondary working memory (WM) task, specifically at which stage WM influences the cueing effect: the acquisition of configural associations (e.g., Travis, Mattingley, & Dux, 2013) versus the expression of learned associations (e.g., Manginelli, Langer, Klose, & Pollmann, 2013). The present study re-investigated this issue. Observers performed a visual search in combination with a spatial WM task. The latter was applied on either early or late search trials--so as to examine whether WM load hampers the acquisition of or retrieval from contextual memory. Additionally, the WM and search tasks were performed either temporally in parallel or in succession--so as to permit the effects of spatial WM load to be dissociated from those of executive load. The secondary WM task was found to affect cueing in late, but not early, experimental trials--though only when the search and WM tasks were performed in parallel. This pattern suggests that contextual cueing involves a spatial WM resource, with spatial WM providing a workspace linking the current search array with configural long-term memory; as a result, occupying this workspace by a secondary WM task hampers the expression of learned configural associations.
Searching for unity: Real-world versus item-based visual search in age-related eye disease.
Crabb, David P; Taylor, Deanna J
2017-01-01
When studying visual search, item-based approaches using synthetic targets and distractors limit the real-world applicability of results. Everyday visual search can be impaired in patients with common eye diseases like glaucoma and age-related macular degeneration. We highlight some results in the literature that suggest assessment of real-world search tasks in these patients could be clinically useful.
Idiosyncratic characteristics of saccadic eye movements when viewing different visual environments.
Andrews, T J; Coppola, D M
1999-08-01
Eye position was recorded in different viewing conditions to assess whether the temporal and spatial characteristics of saccadic eye movements in different individuals are idiosyncratic. Our aim was to determine the degree to which oculomotor control is based on endogenous factors. A total of 15 naive subjects viewed five visual environments: (1) The absence of visual stimulation (i.e. a dark room); (2) a repetitive visual environment (i.e. simple textured patterns); (3) a complex natural scene; (4) a visual search task; and (5) reading text. Although differences in visual environment had significant effects on eye movements, idiosyncrasies were also apparent. For example, the mean fixation duration and size of an individual's saccadic eye movements when passively viewing a complex natural scene covaried significantly with those same parameters in the absence of visual stimulation and in a repetitive visual environment. In contrast, an individual's spatio-temporal characteristics of eye movements during active tasks such as reading text or visual search covaried together, but did not correlate with the pattern of eye movements detected when viewing a natural scene, simple patterns or in the dark. These idiosyncratic patterns of eye movements in normal viewing reveal an endogenous influence on oculomotor control. The independent covariance of eye movements during different visual tasks shows that saccadic eye movements during active tasks like reading or visual search differ from those engaged during the passive inspection of visual scenes.
Reading and visual search: a developmental study in normal children.
Seassau, Magali; Bucci, Maria-Pia
2013-01-01
Studies dealing with developmental aspects of binocular eye movement behaviour during reading are scarce. In this study we have explored binocular strategies during reading and during visual search tasks in a large population of normal young readers. Binocular eye movements were recorded using an infrared video-oculography system in sixty-nine children (aged 6 to 15) and in a group of 10 adults (aged 24 to 39). The main findings are (i) in both tasks the number of progressive saccades (to the right) and regressive saccades (to the left) decreases with age; (ii) the amplitude of progressive saccades increases with age in the reading task only; (iii) in both tasks, the duration of fixations as well as the total duration of the task decreases with age; (iv) in both tasks, the amplitude of disconjugacy recorded during and after the saccades decreases with age; (v) children are significantly more accurate in reading than in visual search after 10 years of age. The data reported here confirm and expand previous studies on children's reading. The new finding is that younger children show poorer coordination than adults, both while reading and while performing a visual search task. Both reading skills and binocular saccade coordination improve with age, and children reach an adult-like level after the age of 10. This finding is most likely related to the fact that learning mechanisms responsible for saccade yoking develop during childhood until adolescence.
Body sway at sea for two visual tasks and three stance widths.
Stoffregen, Thomas A; Villard, Sebastien; Yu, Yawen
2009-12-01
On land, body sway is influenced by stance width (the distance between the feet) and by visual tasks engaged in during stance. While wider stance can be used to stabilize the body against ship motion and crewmembers are obliged to carry out many visual tasks while standing, the influence of these factors on the kinematics of body sway has not been studied at sea. Crewmembers of the RN Atlantis stood on a force plate from which we obtained data on the positional variability of the center of pressure (COP). The sea state was 2 on the Beaufort scale. We varied stance width (5 cm, 17 cm, and 30 cm) and the nature of the visual tasks. In the Inspection task, participants viewed a plain piece of white paper, while in the Search task they counted the number of target letters that appeared in a block of text. Search task performance was similar to reports from terrestrial studies. Variability of the COP position was reduced during the Search task relative to the Inspection task. Variability was also reduced during wide stance relative to narrow stance. The influence of stance width was greater than has been observed in terrestrial studies. These results suggest that two factors that influence postural sway on land (variations in stance width and in the nature of visual tasks) also influence sway at sea. We conclude that--in mild sea states--the influence of these factors is not suppressed by ship motion.
Searching for emotion or race: task-irrelevant facial cues have asymmetrical effects.
Lipp, Ottmar V; Craig, Belinda M; Frost, Mareka J; Terry, Deborah J; Smith, Joanne R
2014-01-01
Facial cues of threat such as anger and other race membership are detected preferentially in visual search tasks. However, it remains unclear whether these facial cues interact in visual search. If both cues equally facilitate search, a symmetrical interaction would be predicted; anger cues should facilitate detection of other race faces and cues of other race membership should facilitate detection of anger. Past research investigating this race by emotional expression interaction in categorisation tasks revealed an asymmetrical interaction. This suggests that cues of other race membership may facilitate the detection of angry faces but not vice versa. Utilising the same stimuli and procedures across two search tasks, participants were asked to search for targets defined by either race or emotional expression. Contrary to the results revealed in the categorisation paradigm, cues of anger facilitated detection of other race faces whereas differences in race did not differentially influence detection of emotion targets.
Keehn, Brandon; Joseph, Robert M
2016-03-01
In multiple conjunction search, the target is not known in advance but is defined only with respect to the distractors in a given search array, thus reducing the contributions of bottom-up and top-down attentional and perceptual processes during search. This study investigated whether the superior visual search skills typically demonstrated by individuals with autism spectrum disorder (ASD) would be evident in multiple conjunction search. Thirty-two children with ASD and 32 age- and nonverbal IQ-matched typically developing (TD) children were administered a multiple conjunction search task. Contrary to findings from the large majority of studies on visual search in ASD, response times of individuals with ASD were significantly slower than those of their TD peers. Evidence of slowed performance in ASD suggests that the mechanisms responsible for superior ASD performance in other visual search paradigms are not available in multiple conjunction search. Although the ASD group failed to exhibit superior performance, they showed efficient search and intertrial priming levels similar to the TD group. Efficient search indicates that ASD participants were able to group distractors into distinct subsets. In summary, while demonstrating grouping and priming effects comparable to those exhibited by their TD peers, children with ASD were slowed in their performance on a multiple conjunction search task, suggesting that their usual superior performance in visual search tasks is specifically dependent on top-down and/or bottom-up attentional and perceptual processes. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
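Several abstracts in this collection describe search as "efficient," a term conventionally quantified as the slope of mean reaction time (RT) over display set size: shallow slopes mean adding distractors costs little time per item. A minimal sketch of that computation follows; the function name and all numbers are invented for illustration and are not taken from any of the studies summarized here.

```python
# Hypothetical illustration of "search efficiency": the least-squares slope
# of mean RT (ms) against display set size (items). Shallow slopes
# (roughly < 10 ms/item) are typically described as efficient search.

def search_slope(set_sizes, mean_rts):
    """Least-squares slope of mean RT (ms) over set size (items)."""
    n = len(set_sizes)
    mean_x = sum(set_sizes) / n
    mean_y = sum(mean_rts) / n
    # Covariance of set size with RT, divided by variance of set size.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(set_sizes, mean_rts))
    var = sum((x - mean_x) ** 2 for x in set_sizes)
    return cov / var

# Invented example data: RT grows slowly with set size in efficient search
# and steeply in inefficient search.
efficient = search_slope([4, 8, 16], [520, 540, 580])    # 5.0 ms/item
inefficient = search_slope([4, 8, 16], [520, 620, 820])  # 25.0 ms/item
```

In these terms, the ASD group's "efficient search" above means their RT × set-size slopes were shallow, even though their overall RTs were slower than those of the TD group.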
What Kind of Memory Supports Visual Marking?
ERIC Educational Resources Information Center
Jiang, Yuhong; Wang, Stephanie W.
2004-01-01
In visual search tasks, if a set of items is presented for 1 s before another set of new items (containing the target) is added, search can be restricted to the new set. The process that eliminates old items from search is visual marking. This study investigates the kind of memory that distinguishes the old items from the new items during search.…
Pretraining Cortical Thickness Predicts Subsequent Perceptual Learning Rate in a Visual Search Task.
Frank, Sebastian M; Reavis, Eric A; Greenlee, Mark W; Tse, Peter U
2016-03-01
We report that preexisting individual differences in the cortical thickness of brain areas involved in a perceptual learning task predict the subsequent perceptual learning rate. Participants trained in a motion-discrimination task involving visual search for a "V"-shaped target motion trajectory among inverted "V"-shaped distractor trajectories. Motion-sensitive area MT+ (V5) was functionally identified as critical to the task: after 3 weeks of training, activity increased in MT+ during task performance, as measured by functional magnetic resonance imaging. We computed the cortical thickness of MT+ from anatomical magnetic resonance imaging volumes collected before training started, and found that it significantly predicted subsequent perceptual learning rates in the visual search task. Participants with thicker neocortex in MT+ before training learned faster than those with thinner neocortex in that area. A similar association between cortical thickness and training success was also found in posterior parietal cortex (PPC). © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Richard, Christian M; Wright, Richard D; Ee, Cheryl; Prime, Steven L; Shimizu, Yujiro; Vavrik, John
2002-01-01
The effect of a concurrent auditory task on visual search was investigated using an image-flicker technique. Participants were undergraduate university students with normal or corrected-to-normal vision who searched for changes in images of driving scenes that involved either driving-related (e.g., traffic light) or driving-unrelated (e.g., mailbox) scene elements. The results indicated that response times were significantly slower if the search was accompanied by a concurrent auditory task. In addition, slower overall responses to scenes involving driving-unrelated changes suggest that the underlying process affected by the concurrent auditory task is strategic in nature. These results were interpreted in terms of their implications for using a cellular telephone while driving. Actual or potential applications of this research include the development of safer in-vehicle communication devices.
Alvarez, George A; Gill, Jonathan; Cavanagh, Patrick
2012-01-01
Previous studies have shown independent attentional selection of targets in the left and right visual hemifields during attentional tracking (Alvarez & Cavanagh, 2005) but not during a visual search (Luck, Hillyard, Mangun, & Gazzaniga, 1989). Here we tested whether multifocal spatial attention is the critical process that operates independently in the two hemifields. It is explicitly required in tracking (attend to a subset of object locations, suppress the others) but not in the standard visual search task (where all items are potential targets). We used a modified visual search task in which observers searched for a target within a subset of display items, where the subset was selected based on location (Experiments 1 and 3A) or based on a salient feature difference (Experiments 2 and 3B). The results show hemifield independence in this subset visual search task with location-based selection but not with feature-based selection; this effect cannot be explained by general difficulty (Experiment 4). Combined, these findings suggest that hemifield independence is a signature of multifocal spatial attention and highlight the need for cognitive and neural theories of attention to account for anatomical constraints on selection mechanisms. PMID:22637710
Botly, Leigh C P; De Rosa, Eve
2012-10-01
The visual search task established the feature integration theory of attention in humans and measures visuospatial attentional contributions to feature binding. We recently demonstrated that the neuromodulator acetylcholine (ACh), from the nucleus basalis magnocellularis (NBM), supports the attentional processes required for feature binding using a rat digging-based task. Additional research has demonstrated cholinergic contributions from the NBM to visuospatial attention in rats. Here, we combined these lines of evidence and employed visual search in rats to examine whether cortical cholinergic input supports visuospatial attention specifically for feature binding. We trained 18 male Long-Evans rats to perform visual search using touch screen-equipped operant chambers. Sessions comprised Feature Search (no feature binding required) and Conjunctive Search (feature binding required) trials using multiple stimulus set sizes. Following acquisition of visual search, 8 rats received bilateral NBM lesions using 192 IgG-saporin to selectively reduce cholinergic afferentation of the neocortex, which we hypothesized would selectively disrupt the visuospatial attentional processes needed for efficient conjunctive visual search. As expected, relative to sham-lesioned rats, ACh-NBM-lesioned rats took significantly longer to locate the target stimulus on Conjunctive Search, but not Feature Search trials, thus demonstrating that cholinergic contributions to visuospatial attention are important for feature binding in rats.
Eye movements, visual search and scene memory, in an immersive virtual environment.
Kit, Dmitry; Katz, Leor; Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary
2014-01-01
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, by contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learned the locations of objects in the environment over time and used spatial memory to guide search. Incidental fixations did not provide an obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of experience in the environment, previous search items changed in color. These items were fixated with increased probability relative to control objects, suggesting that memory-guided prioritization (or surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.
Low Target Prevalence Is a Stubborn Source of Errors in Visual Search Tasks
ERIC Educational Resources Information Center
Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour
2007-01-01
In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1%-2%) than at high prevalence (50%). Unfortunately, low prevalence is…
Development of a computerized visual search test.
Reid, Denise; Babani, Harsha; Jon, Eugenia
2009-09-01
Visual attention and visual search are features of visual perception essential for attending to and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information, including the format of the test, will be described. The test was designed to provide an alternative to existing cancellation tests. Data from two pilot studies that examined some aspects of the test's validity will be reported. To date, our assessment of the test shows that it discriminates between healthy and head-injured persons. More research and development work is required to examine task-performance changes in relation to task complexity. It is suggested that the conceptual design of the test is worthy of further investigation.
Exploring conflict- and target-related movement of visual attention.
Wendt, Mike; Garling, Marco; Luna-Rodriguez, Aquiles; Jacobsen, Thomas
2014-01-01
Intermixing trials of a visual search task with trials of a modified flanker task, the authors investigated whether the presentation of conflicting distractors at only one side (left or right) of a target stimulus triggers shifts of visual attention towards the contralateral side. Search time patterns provided evidence for lateral attention shifts only when participants performed the flanker task under an instruction assumed to widen the focus of attention, demonstrating that instruction-based control settings of an otherwise identical task can impact performance in an unrelated task. Contrasting conditions with response-related and response-unrelated distractors showed that shifting attention does not depend on response conflict and may be explained as stimulus-conflict-related withdrawal or target-related deployment of attention.
Enhancing visual search abilities of people with intellectual disabilities.
Li-Tsang, Cecilia W P; Wong, Jackson K K
2009-01-01
This study aimed to evaluate the effects of cueing in a visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using a convenience sampling method. A series of experiments was conducted to compare guided cue strategies using either motion contrast or an additional cue added to the basic search task. Repeated-measures ANOVA and post hoc multiple comparison tests were used to compare each cue strategy. Results showed that the use of guided strategies was able to capture focal attention in an automatic manner in the ID group (Pillai's Trace=5.99, p<0.0001). Both guided cue and guided motion search tasks demonstrated functionally similar effects that confirmed the non-specific character of salience. These findings suggested that the visual search efficiency of people with ID was greatly improved if the target was made salient using a cueing effect when the complexity of the display increased (i.e. as set size increased). This study could have important implications for the design of the visual search format of any computerized programs developed for people with ID learning new tasks.
A Drastic Change in Background Luminance or Motion Degrades the Preview Benefit.
Osugi, Takayuki; Murakami, Ikuya
2017-01-01
When some distractors (old items) precede others (new items) in an inefficient visual search task, the search is restricted to the new items, yielding a phenomenon termed the preview benefit. It has recently been demonstrated that, in this preview search task, the onset of repetitive changes in the background disrupts the preview benefit, whereas a single transient change in the background does not. In the present study, we explored this effect with dynamic background changes occurring in the context of realistic scenes, to examine the robustness and usefulness of visual marking. We examined whether the preview benefit in a preview search task survived task-irrelevant changes in the scene, namely a luminance change and the initiation of coherent motion, both occurring in the background. A luminance change of the background disrupted the preview benefit if it was synchronized with the onset of the search display. Furthermore, although the presence of coherent background motion per se did not affect the preview benefit, its initiation synchronized with the onset of the search display did disrupt the preview benefit if the motion speed was sufficiently high. These results suggest that visual marking can be destroyed by a transient event in the scene if that event is sufficiently drastic.
Eramudugolla, Ranmalee; Mattingley, Jason B
2008-01-01
Patients with unilateral spatial neglect following right hemisphere damage are impaired in detecting contralesional targets in both visual and haptic search tasks, and often show a graded improvement in detection performance for more ipsilesional spatial locations. In audition, multiple simultaneous sounds are most effectively perceived if they are distributed along the frequency dimension. Thus, attention to spectro-temporal features alone can allow detection of a target sound amongst multiple simultaneous distractor sounds, regardless of whether these sounds are spatially separated. Spatial bias in attention associated with neglect should not affect auditory search based on spectro-temporal features of a sound target. We report that a right brain damaged patient with neglect demonstrated a significant gradient favouring the ipsilesional side on a visual search task as well as an auditory search task in which the target was a frequency modulated tone amongst steady distractor tones. No such asymmetry was apparent in the auditory search performance of a control patient with a right hemisphere lesion but no neglect. The results suggest that the spatial bias in attention exhibited by neglect patients affects stimulus processing even when spatial information is irrelevant to the task.
Sleep-Effects on Implicit and Explicit Memory in Repeated Visual Search
Assumpcao, Leonardo; Gais, Steffen
2013-01-01
In repeated visual search tasks, facilitation of reaction times (RTs) due to repetition of the spatial arrangement of items occurs independently of RT facilitation due to improvements in general task performance. Whereas the latter represents typical procedural learning, the former is a kind of implicit memory that depends on the medial temporal lobe (MTL) memory system and is impaired in patients with amnesia. A third type of memory that develops during visual search is the observers’ explicit knowledge of repeated displays. Here, we used a visual search task to investigate whether procedural memory, implicit contextual cueing, and explicit knowledge of repeated configurations, which all arise independently from the same set of stimuli, are influenced by sleep. Observers participated in two experimental sessions, separated by either a nap or a controlled rest period. In each of the two sessions, they performed a visual search task in combination with an explicit recognition task. We found that (1) across sessions, MTL-independent procedural learning was more pronounced for the nap than rest group. This confirms earlier findings, albeit from different motor and perceptual tasks, showing that procedural memory can benefit from sleep. (2) Likewise, the sleep group compared with the rest group showed enhanced context-dependent configural learning in the second session. This is a novel finding, indicating that the MTL-dependent, implicit memory underlying contextual cueing is also sleep-dependent. (3) By contrast, sleep and wake groups displayed equivalent improvements in explicit recognition memory in the second session. Overall, the current study shows that sleep affects MTL-dependent as well as MTL-independent memory, but it affects different, albeit simultaneously acquired, forms of MTL-dependent memory differentially. PMID:23936363
Guidance of visual search by memory and knowledge.
Hollingworth, Andrew
2012-01-01
To behave intelligently in the world, humans must be able to find objects efficiently within the complex environments they inhabit. A growing proportion of the literature on visual search is devoted to understanding this type of natural search. In the present chapter, I review the literature on visual search through natural scenes, focusing on the role of memory and knowledge in guiding attention to task-relevant objects.
Words, shape, visual search and visual working memory in 3-year-old children.
Vales, Catarina; Smith, Linda B
2015-01-01
Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.
Azizi, Elham; Abel, Larry A; Stainer, Matthew J
2017-02-01
Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e., eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in fixation duration or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might reflect learning of the likely distribution of targets. In other words, game training only taught participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in the overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes.
Effect of display size on visual attention.
Chen, I-Ping; Liao, Chia-Ning; Yeh, Shih-Hao
2011-06-01
Attention plays an important role in the design of human-machine interfaces. However, current knowledge about attention is largely based on data obtained when using devices of moderate display size. With advancement in display technology comes the need for understanding attention behavior over a wider range of viewing sizes. The effect of display size on test participants' visual search performance was studied. The participants (N = 12) performed two types of visual search tasks, that is, parallel and serial search, under three display-size conditions (16 degrees, 32 degrees, and 60 degrees). Serial, but not parallel, search was affected by display size. In the serial task, mean reaction time for detecting a target increased with the display size.
VanMeerten, Nicolaas J; Dubke, Rachel E; Stanwyck, John J; Kang, Seung Suk; Sponheim, Scott R
2016-01-01
People with schizophrenia show deficits in processing visual stimuli, but the neural abnormalities underlying these deficits are unclear, and it is unknown whether such functional brain abnormalities are present in other severe mental disorders or in individuals who carry genetic liability for schizophrenia. To better characterize the brain responses underlying visual search deficits and to test their specificity to schizophrenia, we gathered behavioral and electrophysiological responses during visual search (i.e., the Span of Apprehension [SOA] task) from 38 people with schizophrenia, 31 people with bipolar disorder, 58 biological relatives of people with schizophrenia, 37 biological relatives of people with bipolar disorder, and 65 non-psychiatric control participants. By subtracting neural responses associated with purely sensory aspects of the stimuli, we found that people with schizophrenia exhibited reduced early posterior task-related neural responses (i.e., the Span Endogenous Negativity [SEN]), while other groups showed normative responses. People with schizophrenia exhibited longer reaction times than controls during visual search but nearly identical accuracy. Those individuals with schizophrenia who had larger SENs performed more efficiently (i.e., had shorter reaction times) on the SOA task, suggesting that modulation of early visual cortical responses facilitated their visual search. People with schizophrenia also exhibited a diminished P300 response compared to other groups. Unaffected first-degree relatives of people with bipolar disorder and schizophrenia showed an amplified N1 response over posterior brain regions in comparison to other groups. Diminished early posterior brain responses are associated with impaired visual search in schizophrenia and appear to be specifically associated with the neuropathology of schizophrenia. Published by Elsevier B.V.
Preattentive visual search and perceptual grouping in schizophrenia.
Carr, V J; Dewis, S A; Lewin, T J
1998-06-15
To help determine whether patients with schizophrenia show deficits in the stimulus-based aspects of preattentive processing, we undertook a series of experiments within the framework of feature integration theory. Thirty subjects with a DSM-III-R diagnosis of schizophrenia and 30 age-, gender-, and education-matched normal control subjects completed two computerized experimental tasks, a visual search task assessing parallel and serial information processing (Experiment 1) and a task which examined the effects of perceptual grouping on visual search strategies (Experiment 2). We also assessed current symptomatology and its relationship to task performance. While the schizophrenia subjects had longer reaction times in Experiment 1, their overall pattern of performance across both experimental tasks was similar to that of the control subjects, and generally unrelated to current symptomatology. Predictions from feature integration theory about the impact of varying display size (Experiment 1) and number of perceptual groups (Experiment 2) on the detection of feature and conjunction targets were strongly supported. This study revealed no firm evidence that schizophrenia is associated with a preattentive abnormality in visual search using stimuli that differ on the basis of physical characteristics. While subject and task characteristics may partially account for differences between this and previous studies, it is more likely that preattentive processing abnormalities in schizophrenia may occur only under conditions involving selected 'top-down' factors such as context and meaning.
Distractor devaluation requires visual working memory.
Goolsby, Brian A; Shapiro, Kimron L; Raymond, Jane E
2009-02-01
Visual stimuli seen previously as distractors in a visual search task are subsequently evaluated more negatively than those seen as targets. An attentional inhibition account for this distractor-devaluation effect posits that associative links between attentional inhibition and to-be-ignored stimuli are established during search, stored, and then later reinstantiated, implying that distractor devaluation may require visual working memory (WM) resources. To assess this, we measured distractor devaluation with and without a concurrent visual WM load. Participants viewed a memory array, performed a simple search task, evaluated one of the search items (or a novel item), and then viewed a memory test array. Although distractor devaluation was observed with low (and no) WM load, it was absent when WM load was increased. This result supports the notions that active association of current attentional states with stimuli requires WM and that memory for these associations plays a role in affective response.
Eye movements during visual search in patients with glaucoma
2012-01-01
Background Glaucoma has been shown to lead to disability in many daily tasks including visual search. This study aims to determine whether the saccadic eye movements of people with glaucoma differ from those of people with normal vision, and to investigate the association between eye movements and impaired visual search. Methods Forty patients (mean age: 67 [SD: 9] years) with a range of glaucomatous visual field (VF) defects in both eyes (mean best eye mean deviation [MD]: −5.9 [SD: 5.4] dB) and 40 age-matched people with normal vision (mean age: 66 [SD: 10] years) were timed as they searched for a series of target objects in computer displayed photographs of real world scenes. Eye movements were simultaneously recorded using an eye tracker. Average number of saccades per second, average saccade amplitude and average search duration across trials were recorded. These response variables were compared with measurements of VF and contrast sensitivity. Results The average rate of saccades made by the patient group was significantly lower than that of controls during the visual search task (P = 0.02; mean reduction of 5.6%; 95% CI: 0.1 to 10.4%). There was no difference in average saccade amplitude between the patients and the controls (P = 0.09). Average number of saccades was weakly correlated with aspects of visual function, with patients with worse contrast sensitivity (PR logCS; Spearman's rho: 0.42; P = 0.006) and more severe VF defects (best eye MD; Spearman's rho: 0.34; P = 0.037) tending to make fewer eye movements during the task. Average detection time in the search task was associated with the average rate of saccades in the patient group (Spearman's rho = −0.65; P < 0.001) but this was not apparent in the controls. Conclusions The average rate of saccades made during visual search by this group of patients was lower than that of people with normal vision of a similar average age. There was wide variability in saccade rate among the patients, but higher saccade rates were associated with better performance in the search task. Assessment of eye movements in individuals with glaucoma might provide insight into the functional deficits of the disease. PMID:22937814
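The associations reported above are Spearman rank correlations between saccade rate and visual-function measures. A minimal sketch of that statistic for untied data (the input values below are made up for illustration, not taken from the study):

```python
import numpy as np

def spearman_rho(a, b):
    # Spearman rank correlation for untied data:
    # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), where d is the
    # difference between the ranks of paired observations.
    ra = np.argsort(np.argsort(a))   # ranks of a (0-based)
    rb = np.argsort(np.argsort(b))   # ranks of b (0-based)
    d = (ra - rb).astype(float)
    n = len(ra)
    return 1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1))

print(spearman_rho([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))  # → 1.0
```

Because only ranks enter the formula, any monotonically increasing relationship yields rho = 1 regardless of the raw magnitudes, which is why it suits ordinal clinical measures.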
Is There a Limit to the Superiority of Individuals with ASD in Visual Search?
ERIC Educational Resources Information Center
Hessels, Roy S.; Hooge, Ignace T. C.; Snijders, Tineke M.; Kemner, Chantal
2014-01-01
Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an…
To search or to like: Mapping fixations to differentiate two forms of incidental scene memory.
Choe, Kyoung Whan; Kardan, Omid; Kotabe, Hiroki P; Henderson, John M; Berman, Marc G
2017-10-01
We employed eye-tracking to investigate how performing different tasks on scenes (e.g., intentionally memorizing them, searching for an object, evaluating aesthetic preference) can affect eye movements during encoding and subsequent scene memory. We found that scene memorability decreased after visual search (one incidental encoding task) compared to intentional memorization, and that preference evaluation (another incidental encoding task) produced better memory, similar to the incidental memory boost previously observed for words and faces. By analyzing fixation maps, we found that although fixation map similarity could explain how eye movements during visual search impairs incidental scene memory, it could not explain the incidental memory boost from aesthetic preference evaluation, implying that implicit mechanisms were at play. We conclude that not all incidental encoding tasks should be taken to be similar, as different mechanisms (e.g., explicit or implicit) lead to memory enhancements or decrements for different incidental encoding tasks.
ERIC Educational Resources Information Center
Olivers, Christian N. L.
2009-01-01
An important question is whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. Some past research has indicated that they do: Singleton distractors interfered more strongly with a visual search task when they…
Space-based visual attention: a marker of immature selective attention in toddlers?
Rivière, James; Brisson, Julie
2014-11-01
Various studies suggested that attentional difficulties cause toddlers' failure in some spatial search tasks. However, attention is not a unitary construct and this study investigated two attentional mechanisms: location selection (space-based attention) and object selection (object-based attention). We investigated how toddlers' attention is distributed in the visual field during a manual search task for objects moving out of sight, namely the moving boxes task. Results show that 2.5-year-olds who failed this task allocated more attention to the location of the relevant object than to the object itself. These findings suggest that in some manual search tasks the primacy of space-based attention over object-based attention could be a marker of immature selective attention in toddlers. © 2014 Wiley Periodicals, Inc.
Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.
Nummenmaa, Lauri; Calvo, Manuel G
2015-04-01
Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. Robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic stimuli) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
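Random-effects pooling of correlation-based effect sizes, as described above, is typically done on Fisher-z-transformed r values. A minimal sketch assuming the standard DerSimonian-Laird estimator (the abstract does not name the exact estimator used) with illustrative inputs:

```python
import numpy as np

def random_effects_meta(r_values, n_values):
    # Pool correlation effect sizes with a DerSimonian-Laird
    # random-effects model on Fisher-z-transformed values.
    r = np.asarray(r_values, dtype=float)
    n = np.asarray(n_values, dtype=float)
    z = np.arctanh(r)                    # Fisher z transform
    v = 1.0 / (n - 3.0)                  # within-study variance of z
    w = 1.0 / v                          # inverse-variance weights
    z_fixed = np.sum(w * z) / np.sum(w)  # fixed-effect pooled z
    q = np.sum(w * (z - z_fixed) ** 2)   # Cochran's Q (heterogeneity)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(r) - 1)) / c)   # between-study variance
    w_star = 1.0 / (v + tau2)            # random-effects weights
    z_pooled = np.sum(w_star * z) / np.sum(w_star)
    return float(np.tanh(z_pooled))      # back-transform to r

# Illustrative pooling of three hypothetical studies (r, sample size).
pooled_r = random_effects_meta([0.25, 0.40, 0.33], [120, 80, 150])
```

The pooled estimate always falls between the smallest and largest study-level r; the between-study variance tau² merely flattens the weighting when studies disagree.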
Visual search in Alzheimer's disease: a deficiency in processing conjunctions of features.
Tales, A; Butler, S R; Fossey, J; Gilchrist, I D; Jones, R W; Troscianko, T
2002-01-01
Human vision often needs to encode multiple characteristics of many elements of the visual field, for example their lightness and orientation. The paradigm of visual search allows a quantitative assessment of the function of the underlying mechanisms. It measures the ability to detect a target element among a set of distractor elements. We asked whether Alzheimer's disease (AD) patients are particularly affected in one type of search, where the target is defined by a conjunction of features (orientation and lightness) and where performance depends on some shifting of attention. Two non-conjunction control conditions were employed. The first was a pre-attentive, single-feature, "pop-out" task, detecting a vertical target among horizontal distractors. The second was a single-feature, partly attentive task in which the target element was slightly larger than the distractors (a "size" task). This was chosen to have a similar level of attentional load as the conjunction task (for the control group), but lacked the conjunction of two features. In an experiment, 15 AD patients were compared to age-matched controls. The results suggested that AD patients have a particular impairment in the conjunction task but not in the single-feature size or pre-attentive tasks. This may imply that AD particularly affects those mechanisms which compare across more than one feature type while sparing the other systems; the impairment is therefore not simply 'attention-related'. Additionally, these findings show a double dissociation with previous data on visual search in Parkinson's disease (PD), suggesting a different effect of these diseases on the visual pathway.
Visual search for feature and conjunction targets with an attention deficit.
Arguin, M; Joanette, Y; Cavanagh, P
1993-01-01
Brain-damaged subjects who had previously been identified as suffering from a visual attention deficit for contralesional stimulation were tested on a series of visual search tasks. The experiments examined the hypothesis that the processing of single features is preattentive but that feature integration, necessary for the correct perception of conjunctions of features, requires attention (Treisman & Gelade, 1980; Treisman & Sato, 1990). Subjects searched for a feature target (orientation or color) or for a conjunction target (orientation and color) in unilateral displays in which the number of items presented was variable. Ocular fixation was controlled so that trials on which eye movements occurred were cancelled. While brain-damaged subjects with a visual attention disorder (VAD subjects) performed similarly to normal controls in feature search tasks, they showed a marked deficit in conjunction search. Specifically, VAD subjects exhibited an important reduction of their serial search rates for a conjunction target with contralesional displays. In support of Treisman's feature integration theory, a visual attention deficit leads to a marked impairment in feature integration whereas it does not appear to affect feature encoding.
Visual search in Dementia with Lewy Bodies and Alzheimer's disease.
Landy, Kelly M; Salmon, David P; Filoteo, J Vincent; Heindel, William C; Galasko, Douglas; Hamilton, Joanne M
2015-12-01
Visual search is an aspect of visual cognition that may be more impaired in Dementia with Lewy Bodies (DLB) than Alzheimer's disease (AD). To assess this possibility, the present study compared patients with DLB (n = 17), AD (n = 30), or Parkinson's disease with dementia (PDD; n = 10) to non-demented patients with PD (n = 18) and normal control (NC) participants (n = 13) on single-feature and feature-conjunction visual search tasks. In the single-feature task participants had to determine if a target stimulus (i.e., a black dot) was present among 3, 6, or 12 distractor stimuli (i.e., white dots) that differed in one salient feature. In the feature-conjunction task participants had to determine if a target stimulus (i.e., a black circle) was present among 3, 6, or 12 distractor stimuli (i.e., white dots and black squares) that shared either of the target's salient features. Results showed that target detection time in the single-feature task was not influenced by the number of distractors (i.e., "pop-out" effect) for any of the groups. In contrast, target detection time increased as the number of distractors increased in the feature-conjunction task for all groups, but more so for patients with AD or DLB than for any of the other groups. These results suggest that the single-feature search "pop-out" effect is preserved in DLB and AD patients, whereas ability to perform the feature-conjunction search is impaired. This pattern of preserved single-feature search with impaired feature-conjunction search is consistent with a deficit in feature binding that may be mediated by abnormalities in networks involving the dorsal occipito-parietal cortex. Copyright © 2015 Elsevier Ltd. All rights reserved.
Eye Movements, Visual Search and Scene Memory, in an Immersive Virtual Environment
Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary
2014-01-01
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, by contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency. PMID:24759905
Attar, Nada; Schneps, Matthew H; Pomplun, Marc
2016-10-01
An observer's pupil dilates and constricts in response to variables such as ambient and focal luminance, cognitive effort, the emotional stimulus content, and working memory load. The pupil's memory load response is of particular interest, as it might be used for estimating observers' memory load while they are performing a complex task, without adding an interruptive and confounding memory test to the protocol. One important task in which working memory's involvement is still being debated is visual search, and indeed a previous experiment by Porter, Troscianko, and Gilchrist (Quarterly Journal of Experimental Psychology, 60, 211-229, 2007) analyzed observers' pupil sizes during search to study this issue. These authors found that pupil size increased over the course of the search, and they attributed this finding to accumulating working memory load. However, since the pupil response is slow and does not depend on memory load alone, this conclusion is rather speculative. In the present study, we estimated working memory load in visual search during the presentation of intermittent fixation screens, thought to induce a low, stable level of arousal and cognitive effort. Using standard visual search and control tasks, we showed that this paradigm reduces the influence of non-memory-related factors on pupil size. Furthermore, we found an early increase in working memory load to be associated with more efficient search, indicating a significant role of working memory in the search process.
Zhao, Dandan; Liang, Shengnan; Jin, Zhenlan; Li, Ling
2014-07-09
Previous studies have confirmed that attention can be modulated by the current task set while being involuntarily captured by salient items. However, little is known about which factors the modulation of attentional capture depends on when the same stimuli are presented under different task sets. In the present study, participants performed two visual search tasks with the same search arrays but varying target and distractor settings (color singleton as target and onset singleton as distractor, named the color task, and vice versa). Ipsilateral and contralateral color distractors resulted in two different relative saliences in the two tasks, respectively. Both reaction times (RTs) and N2-posterior-contralateral (N2pc) results showed that there was no difference between ipsilateral and contralateral color distractors in the onset task. However, both RTs and the latency of the N2pc showed a delay for the ipsilateral onset distractor compared with the contralateral onset distractor. Moreover, the N2pc observed under the contralateral distractor condition in the color task was reversed, and its amplitude was attenuated. On the basis of these results, we propose a parameter called distractor cost (DC), computed by subtracting RTs under the contralateral distractor condition from those under the ipsilateral condition. The results suggest that an enhanced DC might be related to the modification of the N2pc in searching for the color target. Taken together, these findings provide evidence that the effect of task set in modulating attentional capture in visual search is related to the DC.
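The distractor cost (DC) parameter proposed above is a simple difference of mean reaction times. A minimal sketch with hypothetical per-trial data (the values are illustrative, not from the study):

```python
import numpy as np

# Hypothetical per-trial reaction times in ms (illustrative values,
# not from the study's data).
rt_ipsilateral = np.array([520.0, 535.0, 528.0, 541.0])
rt_contralateral = np.array([505.0, 512.0, 498.0, 509.0])

# Distractor cost: mean RT with an ipsilateral distractor minus
# mean RT with a contralateral distractor. A positive DC means the
# ipsilateral distractor slowed search more.
dc = rt_ipsilateral.mean() - rt_contralateral.mean()
print(dc)  # → 25.0
```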
Koslucher, Frank; Wade, Michael G; Nelson, Brent; Lim, Kelvin; Chen, Fu-Chen; Stoffregen, Thomas A
2012-07-01
Research has shown that the Nintendo Wii Balance Board (WBB) can reliably detect the quantitative kinematics of the center of pressure (COP) in stance. Previous studies used relatively coarse manipulations (1- vs. 2-leg stance, and eyes open vs. closed). We sought to determine whether the WBB could reliably detect postural changes associated with subtle variations in visual tasks. Healthy elderly adults stood on a WBB while performing one of two visual tasks. In the Inspection task, they maintained their gaze within the boundaries of a featureless target. In the Search task, they counted the occurrences of designated target letters within a block of text. Consistent with previous studies using traditional force plates, the positional variability of the COP was reduced during performance of the Search task relative to the Inspection task. Using detrended fluctuation analysis, a measure of movement dynamics, we found that COP trajectories were more predictable during the Search task than during the Inspection task. The results indicate that the WBB is sensitive to subtle variations in both the magnitude and the dynamics of body sway related to the visual tasks engaged in during stance. The WBB is an inexpensive, reliable technology that can be used to evaluate subtle characteristics of body sway in large or widely dispersed samples. Copyright © 2012 Elsevier B.V. All rights reserved.
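The detrended fluctuation analysis (DFA) mentioned above quantifies how predictable a center-of-pressure trajectory is via a scaling exponent alpha. A minimal sketch of the standard DFA procedure (not the authors' code; the window sizes are illustrative):

```python
import numpy as np

def dfa_alpha(signal, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: returns the scaling exponent alpha
    (~0.5 for uncorrelated noise; higher values indicate more persistent,
    predictable trajectories)."""
    x = np.cumsum(signal - np.mean(signal))    # integrated profile
    fluctuations = []
    for n in scales:
        rms = []
        for i in range(len(x) // n):           # non-overlapping windows
            window = x[i * n:(i + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, window, 1)  # local linear trend
            detrended = window - np.polyval(coeffs, t)
            rms.append(np.sqrt(np.mean(detrended ** 2)))
        fluctuations.append(np.mean(rms))
    # alpha is the slope of log F(n) versus log n
    alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return alpha
```

In the study's terms, COP trajectories with a higher alpha during the Search task would be described as more predictable than those recorded during the Inspection task.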
Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children
ERIC Educational Resources Information Center
Vales, Catarina; Smith, Linda B.
2015-01-01
Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…
Visual scan-path analysis with feature space transient fixation moments
NASA Astrophysics Data System (ADS)
Dempere-Marco, Laura; Hu, Xiao-Peng; Yang, Guang-Zhong
2003-05-01
The study of eye movements provides useful insight into the cognitive processes underlying visual search tasks. The analysis of the dynamics of eye movements has often been approached from a purely spatial perspective. In many cases, however, it may not be possible to define meaningful or consistent dynamics without considering the features underlying the scan paths. In this paper, the definition of the feature space has been attempted through the concept of visual similarity and non-linear low-dimensional embedding, which defines a mapping from the image space into a low-dimensional feature manifold that preserves the intrinsic similarity of image patterns. This has enabled the definition of perceptually meaningful features without the use of domain-specific knowledge. On this basis, the paper introduces a new concept called feature space Transient Fixation Moments (TFMs). The approach presented tackles the problem of feature space representation of visual search through the use of TFMs. We demonstrate the practical value of this concept for characterizing the dynamics of eye movements in goal-directed visual search tasks. We also illustrate how this model can be used to elucidate the fundamental steps involved in skilled search tasks through the evolution of transient fixation moments.
Orthographic versus semantic matching in visual search for words within lists.
Léger, Laure; Rouet, Jean-François; Ros, Christine; Vibert, Nicolas
2012-03-01
An eye-tracking experiment was performed to assess the influence of orthographic and semantic distractor words on visual search for words within lists. The target word (e.g., "raven") was either shown to participants before the search (literal search) or defined by its semantic category (e.g., "bird"; categorical search). In both cases, the type of words included in the list affected visual search times and eye movement patterns. In the literal condition, the presence of orthographic distractors sharing initial and final letters with the target word strongly increased search times. Indeed, the orthographic distractors attracted participants' gaze and were fixated for longer times than other words in the list. The presence of semantic distractors related to the target word also increased search times, which suggests that significant automatic semantic processing of nontarget words took place. In the categorical condition, semantic distractors were expected to have a greater impact on the search task. As expected, the presence in the list of semantic associates of the target word led to target selection errors. However, semantic distractors no longer significantly increased search times, whereas orthographic distractors still did. Hence, the visual characteristics of nontarget words can be strong predictors of the efficiency of visual search even when the exact target word is unknown. The respective impacts of orthographic and semantic distractors depended more on the characteristics of the lists than on the nature of the search task.
When canary primes yellow: effects of semantic memory on overt attention.
Léger, Laure; Chauvet, Elodie
2015-02-01
This study explored how overt attention is influenced by the colour that is primed when a target word is read during a lexical visual search task. Prior studies have shown that attention can be influenced by conceptual or perceptual overlap between a target word and distractor pictures: attention is attracted to pictures that have the same form (rope--snake) or colour (green--frog) as the spoken target word or is drawn to an object from the same category as the spoken target word (trumpet--piano). The hypothesis for this study was that attention should be attracted to words displayed in the colour that is primed by reading a target word (for example, yellow for canary). An experiment was conducted in which participants' eye movements were recorded whilst they completed a lexical visual search task. The primary finding was that participants' eye movements were mainly directed towards words displayed in the colour primed by reading the target word, even though this colour was not relevant to completing the visual search task. This result is discussed in terms of top-down guidance of overt attention in visual search for words.
Kawashima, Tomoya; Matsumoto, Eriko
2016-03-23
Items in working memory guide visual attention toward memory-matching objects. Recent studies have shown that, when searching for an object, this attentional guidance can be modulated by knowing the probability that the target will match an item in working memory. Here, we recorded the P3 and contralateral delay activity to investigate how top-down knowledge controls the processing of working memory items. Participants performed a memory task (recognition only) and a memory-or-search task (recognition or visual search) in which they were asked to maintain two colored oriented bars in working memory. For visual search, we manipulated the probability that the target had the same color as the memorized items (0, 50, or 100%). Participants knew the probabilities before the task. Target detection in the 100% match condition was faster than in the 50% match condition, indicating that participants used their knowledge of the probabilities. We found that the P3 amplitude in the 100% condition was larger than in the other conditions and that contralateral delay activity amplitude did not vary across conditions. These results suggest that more attention was allocated to the memory items when observers knew in advance that their color would likely match a target. This led to better search performance despite using qualitatively equal working memory representations.
Observers' cognitive states modulate how visual inputs relate to gaze control.
Kardan, Omid; Henderson, John M; Yourganov, Grigori; Berman, Marc G
2016-09-01
Previous research has shown that eye movements change depending on both the visual features of our environment and the viewer's top-down knowledge. One important open question is the degree to which the visual goals of the viewer modulate how visual features of scenes guide eye movements. Here, we propose a systematic framework to investigate this question. In our study, participants performed 3 different visual tasks on 135 scenes (search, memorization, and aesthetic judgment) while their eye movements were tracked. Canonical correlation analyses showed that eye movements were reliably more related to low-level visual features at fixations during the visual search task than during the aesthetic judgment and scene memorization tasks. Different visual features also had different relevance to eye movements between tasks. This task modulation of the relationship between visual features and eye movements was also demonstrated with classification analyses, in which classifiers were trained to predict the viewing task from eye movements and visual features at fixations. Feature loadings showed that the visual features at fixations could signal task differences independent of temporal and spatial properties of eye movements. When classifying across participants, edge density and saliency at fixations were as important as eye movements in the successful prediction of task, with entropy and hue also being significant, but with smaller effect sizes. When classifying within participants, brightness and saturation were also significant contributors. Canonical correlation and classification results, together with a test of moderation versus mediation, suggest that the cognitive state of the observer moderates the relationship between stimulus-driven visual features and eye movements. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Effects of contour enhancement on low-vision preference and visual search.
Satgunam, Premnandhini; Woods, Russell L; Luo, Gang; Bronstad, P Matthew; Reynolds, Zachary; Ramachandra, Chaithanya; Mel, Bartlett W; Peli, Eli
2012-09-01
To determine whether image enhancement improves visual search performance and whether enhanced images are also preferred by subjects with vision impairment. Subjects (n = 24) with vision impairment (vision: 20/52 to 20/240) completed visual search and preference tasks for 150 static images that were enhanced to increase the visual saliency of object contours. Subjects were divided into two groups and were shown three enhancement levels. Original and medium enhancements were shown to both groups. High enhancement was shown to group 1, and low enhancement was shown to group 2. For search, subjects pointed to an object that matched a search target displayed at the top left of the screen. An "integrated search performance" measure (area under the curve of cumulative correct response rate over search time) quantified performance. For preference, subjects indicated the preferred side when viewing the same image with different enhancement levels on side-by-side high-definition televisions. Contour enhancement did not improve performance in the visual search task. Group 1 subjects significantly (p < 0.001) rejected the high enhancement and showed no preference for medium enhancement over the original images. Group 2 subjects significantly preferred (p < 0.001) both the medium and the low enhancement levels over the original. Contrast sensitivity was correlated with both preference and performance; subjects with worse contrast sensitivity performed worse in the search task (ρ = 0.77, p < 0.001) and preferred more enhancement (ρ = -0.47, p = 0.02). No correlation between visual search performance and enhancement preference was found. However, a small group of subjects (n = 6) in a narrow range of mid-contrast sensitivity performed better with the enhancement, and most (n = 5) also preferred the enhancement. Preferences for image enhancement can be dissociated from search performance in people with vision impairment.
Further investigations are needed to study the relationships between preference and performance for a narrow range of mid-contrast sensitivity where a beneficial effect of enhancement may exist.
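The "integrated search performance" measure above is an area under the curve of cumulative correct-response rate over search time. A hedged sketch of how such a score could be computed with the trapezoidal rule (function and variable names are illustrative, not from the study):

```python
def integrated_search_performance(times, cumulative_correct):
    """Area under the cumulative correct-response-rate curve over search
    time (trapezoidal rule), normalized by the time span so that an
    observer who is immediately and always correct scores 1.0.
    All names are illustrative, not taken from the study."""
    auc = 0.0
    for i in range(1, len(times)):
        auc += 0.5 * (cumulative_correct[i] + cumulative_correct[i - 1]) * (
            times[i] - times[i - 1])
    return auc / (times[-1] - times[0])

# Example: accuracy climbs from 0 to 100% over a 2 s search window
score = integrated_search_performance([0.0, 1.0, 2.0], [0.0, 0.5, 1.0])
print(score)  # 0.5
```

Because the score rewards both speed and accuracy, faster correct responses raise the curve earlier and yield a larger area.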
Mental fatigue impairs soccer-specific decision-making skill.
Smith, Mitchell R; Zeuwts, Linus; Lenoir, Matthieu; Hens, Nathalie; De Jong, Laura M S; Coutts, Aaron J
2016-07-01
This study aimed to investigate the impact of mental fatigue on soccer-specific decision-making. Twelve well-trained male soccer players performed a soccer-specific decision-making task on two occasions, separated by at least 72 h. The decision-making task was preceded in a randomised order by 30 min of the Stroop task (mental fatigue) or 30 min of reading from magazines (control). Subjective ratings of mental fatigue were measured before and after treatment, and mental effort (referring to treatment) and motivation (referring to the decision-making task) were measured after treatment. Performance on the soccer-specific decision-making task was assessed using response accuracy and time. Visual search behaviour was also assessed throughout the decision-making task. Subjective ratings of mental fatigue and effort were almost certainly higher following the Stroop task compared to the magazines. Motivation for the upcoming decision-making task was possibly higher following the Stroop task. Decision-making accuracy was very likely lower and response time likely higher in the mental fatigue condition. Mental fatigue had unclear effects on most visual search behaviour variables. The results suggest that mental fatigue impairs accuracy and speed of soccer-specific decision-making. These impairments are not likely related to changes in visual search behaviour.
Maekawa, Toru; Anderson, Stephen J; de Brecht, Matthew; Yamagishi, Noriko
2018-01-01
The study of visual perception has largely been completed without regard to the influence that an individual’s emotional status may have on their performance in visual tasks. However, there is a growing body of evidence to suggest that mood may affect not only creative abilities and interpersonal skills but also the capacity to perform low-level cognitive tasks. Here, we sought to determine whether rudimentary visual search processes are similarly affected by emotion. Specifically, we examined whether an individual’s perceived happiness level affects their ability to detect a target in noise. To do so, we employed pop-out and serial visual search paradigms, implemented using a novel smartphone application that allowed search times and self-rated levels of happiness to be recorded throughout each twenty-four-hour period for two weeks. This experience sampling protocol circumvented the need to alter mood artificially with laboratory-based induction methods. Using our smartphone application, we were able to replicate the classic visual search findings, whereby pop-out search times remained largely unaffected by the number of distractors whereas serial search times increased with increasing number of distractors. While pop-out search times were unaffected by happiness level, serial search times with the maximum numbers of distractors (n = 30) were significantly faster for high happiness levels than low happiness levels (p = 0.02). Our results demonstrate the utility of smartphone applications in assessing ecologically valid measures of human visual performance. We discuss the significance of our findings for the assessment of basic visual functions using search time measures, and for our ability to search effectively for targets in real world settings. PMID:29664952
Biggs, Adam T; Mitroff, Stephen R
2014-01-01
Visual search, locating target items among distractors, underlies daily activities ranging from critical tasks (e.g., looking for dangerous objects during security screening) to commonplace ones (e.g., finding your friends in a crowded bar). Both professional and nonprofessional individuals conduct visual searches, and the present investigation is aimed at understanding how they perform similarly and differently. We administered a multiple-target visual search task to both professional (airport security officers) and nonprofessional participants (members of the Duke University community) to determine how search abilities differ between these populations and what factors might predict accuracy. There were minimal overall accuracy differences, although the professionals were generally slower to respond. However, the factors that predicted accuracy varied drastically between groups; variability in search consistency (how similarly an individual searched from trial to trial in terms of speed) best explained accuracy for professional searchers (more consistent professionals were more accurate), whereas search speed (how long an individual took to complete a search when no targets were present) best explained accuracy for nonprofessional searchers (slower nonprofessionals were more accurate). These findings suggest that professional searchers may utilize different search strategies from those of nonprofessionals, and that search consistency, in particular, may provide a valuable tool for enhancing professional search accuracy.
The role of object categories in hybrid visual and memory search
Cunningham, Corbin A.; Wolfe, Jeremy M.
2014-01-01
In hybrid search, observers (Os) search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RTs) in such tasks increased linearly with increases in the number of items in the display. However, RTs increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g., this apple in this pose). Typical real-world tasks involve more broadly defined sets of stimuli (e.g., any “apple” or, perhaps, “fruit”). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, Os searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli is drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PMID:24661054
Feature reliability determines specificity and transfer of perceptual learning in orientation search
Yashar, Amit; Denison, Rachel N
2017-12-01
Training can modify the visual system to produce a substantial improvement on perceptual tasks and therefore has applications for treating visual deficits. Visual perceptual learning (VPL) is often specific to the trained feature, which gives insight into processes underlying brain plasticity, but limits VPL's effectiveness in rehabilitation. Under what circumstances VPL transfers to untrained stimuli is poorly understood. Here we report a qualitatively new phenomenon: intrinsic variation in the representation of features determines the transfer of VPL. Orientations around cardinal are represented more reliably than orientations around oblique in V1, which has been linked to behavioral consequences such as visual search asymmetries. We studied VPL for visual search of near-cardinal or oblique targets among distractors of the other orientation while controlling for other display and task attributes, including task precision, task difficulty, and stimulus exposure. Learning was the same in all training conditions; however, transfer depended on the orientation of the target, with full transfer of learning from near-cardinal to oblique targets but not the reverse. To evaluate the idea that representational reliability was the key difference between the orientations in determining VPL transfer, we created a model that combined orientation-dependent reliability, improvement of reliability with learning, and an optimal search strategy. Modeling suggested that not only search asymmetries but also the asymmetric transfer of VPL depended on preexisting differences between the reliability of near-cardinal and oblique representations. Transfer asymmetries in model behavior also depended on having different learning rates for targets and distractors, such that greater learning for low-reliability distractors facilitated transfer. 
These findings suggest that training on sensory features with intrinsically low reliability may maximize the generalizability of learning in complex visual environments.
Playing shooter and driving videogames improves top-down guidance in visual search.
Wu, Sijing; Spence, Ian
2013-05-01
Playing action videogames is known to improve visual spatial attention and related skills. Here, we showed that playing action videogames also improves classic visual search, as well as the ability to locate targets in a dual search that mimics certain aspects of an action videogame. In Experiment 1A, first-person shooter (FPS) videogame players were faster than nonplayers in both feature search and conjunction search, and in Experiment 1B, they were faster and more accurate in a peripheral search and identification task while simultaneously performing a central search. In Experiment 2, we showed that 10 h of play could improve the performance of nonplayers on each of these tasks. Three different genres of videogames were used for training: two action games and a 3-D puzzle game. Participants who played an action game (either an FPS or a driving game) achieved greater gains on all search tasks than did those who trained using the puzzle game. Feature searches were faster after playing an action videogame, suggesting that players developed a better target template to guide search in a top-down manner. The results of the dual search suggest that, in addition to enhancing the ability to divide attention, playing an action game improves the top-down guidance of attention to possible target locations. The results have practical implications for the development of training tools to improve perceptual and cognitive skills.
Active visual search in non-stationary scenes: coping with temporal variability and uncertainty
NASA Astrophysics Data System (ADS)
Ušćumlić, Marija; Blankertz, Benjamin
2016-02-01
Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. 
Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamic of ocular behavior (i.e., dwell time and fixation duration) in an active search task. In addition, our method to improve single-trial detection performance in this adverse scenario is an important step in making brain-computer interfacing technology available for human-computer interaction applications.
Cultural differences in attention: Eye movement evidence from a comparative visual search task.
Alotaibi, Albandri; Underwood, Geoffrey; Smith, Alastair D
2017-10-01
Individual differences in visual attention have been linked to thinking style: analytic thinking (common in individualistic cultures) is thought to promote attention to detail and focus on the most important part of a scene, whereas holistic thinking (common in collectivist cultures) promotes attention to the global structure of a scene and the relationships between its parts. However, this theory is primarily based on relatively simple judgement tasks. We compared groups from Great Britain (an individualist culture) and Saudi Arabia (a collectivist culture) on a more complex comparative visual search task using simple natural scenes. A higher overall number of fixations for Saudi participants, along with longer search times, indicated less efficient search behaviour than that of British participants. Furthermore, intra-group comparisons of scan-paths revealed less similarity within the Saudi group than within the British group. Together, these findings suggest that there is a positive relationship between an analytic cognitive style and controlled attention. Copyright © 2017 Elsevier Inc. All rights reserved.
Task relevance predicts gaze in videos of real moving scenes.
Howard, Christina J; Gilchrist, Iain D; Troscianko, Tom; Behera, Ardhendu; Hogg, David C
2011-09-01
Low-level stimulus salience and task relevance together determine the human fixation priority assigned to scene locations (Fecteau and Munoz in Trends Cogn Sci 10(8):382-390, 2006). However, surprisingly little is known about the contribution of task relevance to eye movements during real-world visual search where stimuli are in constant motion and where the 'target' for the visual search is abstract and semantic in nature. Here, we investigate this issue when participants continuously search an array of four closed-circuit television (CCTV) screens for suspicious events. We recorded eye movements whilst participants watched real CCTV footage and moved a joystick to continuously indicate perceived suspiciousness. We find that when multiple areas of a display compete for attention, gaze is allocated according to relative levels of reported suspiciousness. Furthermore, this measure of task relevance accounted for twice the amount of variance in gaze likelihood as the amount of low-level visual changes over time in the video stimuli.
Hatta, Takeshi; Kato, Kimiko; Hotta, Chie; Higashikawa, Mari; Iwahara, Akihiko; Hatta, Taketoshi; Hatta, Junko; Fujiwara, Kazumi; Nagahara, Naoko; Ito, Emi; Hamajima, Nobuyuki
2017-01-01
The validity of Bucur and Madden's (2010) proposal that an age-related decline is particularly pronounced in executive function measures rather than in elementary perceptual speed measures was examined via the Yakumo Study longitudinal database. Their proposal suggests that cognitive load differentially affects cognitive abilities in older adults. To address their proposal, linear regression coefficients of 104 participants were calculated individually for the digit cancellation task 1 (D-CAT1), where participants search for a given single digit, and the D-CAT3, where they search for 3 digits simultaneously. It can therefore be conjectured that the D-CAT1 primarily represents elementary perceptual speed and a low visual search load, whereas the D-CAT3 primarily represents executive function and a high visual search load. Regression coefficients from age 65 to 75 for the D-CAT3 showed a significantly steeper decline than those for the D-CAT1, and a large number of participants showed this tendency. These results support the proposal by Bucur and Madden (2010) and suggest that the degree of cognitive load affects age-related cognitive decline.
Prado, Chloé; Dubois, Matthieu; Valdois, Sylviane
2007-09-01
The eye movements of 14 French dyslexic children with a visual attention (VA) span reduction and 14 normal readers were compared in two tasks: visual search and text reading. The dyslexic participants made a higher number of rightward fixations in reading only. They simultaneously processed the same low number of letters in both tasks, whereas normal readers processed far more letters in reading. Importantly, the children's VA span abilities related to the number of letters simultaneously processed in reading. The atypical eye movements of some dyslexic readers in reading thus appear to reflect difficulties in increasing their VA span according to the task demands.
Sung, Kyongje; Gordon, Barry
2018-01-01
Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a diffuse, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages.
ERIC Educational Resources Information Center
Wang, Tsui-Ying; Huang, Ho-Chuan; Huang, Hsiu-Shuang
2006-01-01
We propose a computer-assisted cancellation test system (CACTS) to understand the visual attention performance and visual search strategies in school children. The main aim of this paper is to present our design and development of the CACTS and demonstrate some ways in which computer techniques can allow the educator not only to obtain more…
Eimer, Martin; Kiss, Monika; Nicholas, Susan
2011-12-01
When target-defining features are specified in advance, attentional target selection in visual search is controlled by preparatory top-down task sets. We used ERP measures to study voluntary target selection in the absence of such feature-specific task sets, and to compare it to selection that is guided by advance knowledge about target features. Visual search arrays contained two different color singleton digits, and participants had to select one of these as target and report its parity. Target color was either known in advance (fixed color task) or had to be selected anew on each trial (free color-choice task). ERP correlates of spatially selective attentional target selection (N2pc) and working memory processing (SPCN) demonstrated rapid target selection and efficient exclusion of color singleton distractors from focal attention and working memory in the fixed color task. In the free color-choice task, spatially selective processing also emerged rapidly, but selection efficiency was reduced, with nontarget singleton digits capturing attention and gaining access to working memory. Results demonstrate the benefits of top-down task sets: Feature-specific advance preparation accelerates target selection, rapidly resolves attentional competition, and prevents irrelevant events from attracting attention and entering working memory.
Biometric recognition via texture features of eye movement trajectories in a visual searching task.
Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang
2018-01-01
Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction methods, and feature recognition methods have been proposed to improve the performance of eye movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers' temporal and spatial resolution, are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the benefit of using this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.
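The equal error rate (EER) reported in this record is the operating point of a verification system where the false accept rate (FAR) and false reject rate (FRR) coincide. As a minimal sketch of how such a metric is computed from match scores, the threshold sweep below uses hypothetical similarity scores (not data from the paper):

```python
def equal_error_rate(genuine, impostor):
    """Approximate the equal error rate (EER): the threshold at which
    false accept rate (FAR) and false reject rate (FRR) are closest,
    reporting their average at that point."""
    best_gap, eer = float("inf"), 1.0
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # impostors wrongly accepted
        frr = sum(s < t for s in genuine) / len(genuine)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Illustrative similarity scores (hypothetical, for demonstration only):
genuine = [0.9, 0.8, 0.85, 0.7, 0.95]   # same-user comparisons
impostor = [0.3, 0.4, 0.2, 0.5, 0.35]   # cross-user comparisons
print(equal_error_rate(genuine, impostor))  # fully separable scores give EER 0.0
```

A lower EER indicates better separation between genuine and impostor score distributions; production systems typically interpolate between thresholds rather than averaging at the closest gap, so this is a coarse approximation.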
Searching for Signs, Symbols, and Icons: Effects of Time of Day, Visual Complexity, and Grouping
ERIC Educational Resources Information Center
McDougall, Sine; Tyrer, Victoria; Folkard, Simon
2006-01-01
Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual…
Rapid Resumption of Interrupted Search Is Independent of Age-Related Improvements in Visual Search
ERIC Educational Resources Information Center
Lleras, Alejandro; Porporino, Mafalda; Burack, Jacob A.; Enns, James T.
2011-01-01
In this study, 7-19-year-olds performed an interrupted visual search task in two experiments. Our question was whether the tendency to respond within 500 ms after a second glimpse of a display (the "rapid resumption" effect ["Psychological Science", 16 (2005) 684-688]) would increase with age in the same way as overall search efficiency. The…
Visual Tasks and Postural Sway in Children with and without Autism Spectrum Disorders
ERIC Educational Resources Information Center
Chang, Chih-Hui; Wade, Michael G.; Stoffregen, Thomas A.; Hsu, Chin-Yu; Pan, Chien-Yu
2010-01-01
We investigated the influences of two different suprapostural visual tasks, visual searching and visual inspection, on the postural sway of children with and without autism spectrum disorder (ASD). Sixteen ASD children (age=8.75 ± 1.34 years; height=130.34 ± 11.03 cm) were recruited from a local support group.…
Visual Experience Enhances Infants' Use of Task-Relevant Information in an Action Task
ERIC Educational Resources Information Center
Wang, Su-hua; Kohne, Lisa
2007-01-01
Four experiments examined whether infants' use of task-relevant information in an action task could be facilitated by visual experience in the laboratory. Twelve- but not 9-month-old infants spontaneously used height information and chose an appropriate (taller) cover in search of a hidden tall toy. After watching examples of covering events in a…
Visual Search by Children with and without ADHD
ERIC Educational Resources Information Center
Mullane, Jennifer C.; Klein, Raymond M.
2008-01-01
Objective: To summarize the literature that has employed visual search tasks to assess automatic and effortful selective visual attention in children with and without ADHD. Method: Seven studies with a combined sample of 180 children with ADHD (M age = 10.9) and 193 normally developing children (M age = 10.8) are located. Results: Using a…
Insights into the Control of Attentional Set in ADHD Using the Attentional Blink Paradigm
ERIC Educational Resources Information Center
Mason, Deanna J.; Humphreys, Glyn W.; Kent, Lindsey
2005-01-01
Background: Previous work on visual selective attention in Attention Deficit Hyperactivity Disorder (ADHD) has utilised spatial search paradigms. This study compared ADHD to control children on a temporal search task using Rapid Serial Visual Presentation (RSVP). In addition, the effects of irrelevant singleton distractors on search performance…
Driver landmark and traffic sign identification in early Alzheimer's disease.
Uc, E Y; Rizzo, M; Anderson, S W; Shi, Q; Dawson, J D
2005-06-01
To assess visual search and recognition of roadside targets and safety errors during a landmark and traffic sign identification task in drivers with Alzheimer's disease. 33 drivers with probable Alzheimer's disease of mild severity and 137 neurologically normal older adults underwent a battery of visual and cognitive tests and were asked to report detection of specific landmarks and traffic signs along a segment of an experimental drive. The drivers with mild Alzheimer's disease identified significantly fewer landmarks and traffic signs and made more at-fault safety errors during the task than control subjects. Roadside target identification performance and safety errors were predicted by scores on standardised tests of visual and cognitive function. Drivers with Alzheimer's disease are impaired in a task of visual search and recognition of roadside targets; the demands of these targets on visual perception, attention, executive functions, and memory probably increase the cognitive load, worsening driving safety.
ERIC Educational Resources Information Center
Becker, D. Vaughn; Anderson, Uriah S.; Mortensen, Chad R.; Neufeld, Samantha L.; Neel, Rebecca
2011-01-01
Is it easier to detect angry or happy facial expressions in crowds of faces? The present studies used several variations of the visual search task to assess whether people selectively attend to expressive faces. Contrary to widely cited studies (e.g., Ohman, Lundqvist, & Esteves, 2001) that suggest angry faces "pop out" of crowds, our review of…
Evidence for unlimited capacity processing of simple features in visual cortex
White, Alex L.; Runeson, Erik; Palmer, John; Ernst, Zachary R.; Boynton, Geoffrey M.
2017-01-01
Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level–dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity. PMID:28654964
Memory-Based Attention Capture when Multiple Items Are Maintained in Visual Working Memory
Hollingworth, Andrew; Beck, Valerie M.
2016-01-01
Efficient visual search requires that attention is guided strategically to relevant objects, and most theories of visual search implement this function by means of a target template maintained in visual working memory (VWM). However, there is currently debate over the architecture of VWM-based attentional guidance. We contrasted a single-item-template hypothesis with a multiple-item-template hypothesis, which differ in their claims about structural limits on the interaction between VWM representations and perceptual selection. Recent evidence from van Moorselaar, Theeuwes, and Olivers (2014) indicated that memory-based capture during search—an index of VWM guidance—is not observed when memory set size is increased beyond a single item, suggesting that multiple items in VWM do not guide attention. In the present study, we maximized the overlap between multiple colors held in VWM and the colors of distractors in a search array. Reliable capture was observed when two colors were held in VWM and both colors were present as distractors, using both the original van Moorselaar et al. singleton-shape search task and a search task that required focal attention to array elements (gap location in outline square stimuli). In the latter task, memory-based capture was consistent with the simultaneous guidance of attention by multiple VWM representations. PMID:27123681
The effect of encoding conditions on learning in the prototype distortion task.
Lee, Jessica C; Livesey, Evan J
2017-06-01
The prototype distortion task demonstrates that it is possible to learn about a category of physically similar stimuli through mere observation. However, there have been few attempts to test whether different encoding conditions affect learning in this task. This study compared prototypicality gradients produced under incidental learning conditions, in which participants performed a visual search task, with those produced under intentional learning conditions, in which participants were required to memorize the stimuli. Experiment 1 showed that similar prototypicality gradients could be obtained for category endorsement and familiarity ratings, but also found (weaker) prototypicality gradients in the absence of exposure. In Experiments 2 and 3, memorization was found to strengthen prototypicality gradients in familiarity ratings in comparison to visual search, but there were no group differences in participants' ability to discriminate between novel and presented exemplars. Although the Search groups in Experiments 2 and 3 produced prototypicality gradients, they were no different in magnitude to those produced in the absence of stimulus exposure in Experiment 1, suggesting that incidental learning during visual search was not conducive to producing prototypicality gradients. This study suggests that learning in the prototype distortion task is not implicit in the sense of resulting automatically from exposure, but is affected by the nature of encoding, and should be considered in light of potential learning-at-test effects.
The influence of artificial scotomas on eye movements during visual search.
Cornelissen, Frans W; Bruin, Klaas J; Kooijman, Aart C
2005-01-01
Fixation durations are normally adapted to the difficulty of the foveal analysis task. We examine to what extent artificial central and peripheral visual field defects interfere with this adaptation process. Subjects performed a visual search task while their eye movements were registered. The latter were used to drive a real-time gaze-dependent display that was used to create artificial central and peripheral visual field defects. Recorded eye movements were used to determine saccadic amplitude, number of fixations, fixation durations, return saccades, and changes in saccade direction. For central defects, although fixation duration increased with the size of the absolute central scotoma, this increase was too small to keep recognition performance optimal, evident from an associated increase in the rate of return saccades. Providing a relatively small amount of visual information in the central scotoma did substantially reduce subjects' search times but not their fixation durations. Surprisingly, reducing the size of the tunnel also prolonged fixation duration for peripheral defects. This manipulation also decreased the rate of return saccades, suggesting that the fixations were prolonged beyond the duration required by the foveal task. Although we find that adaptation of fixation duration to task difficulty clearly occurs in the presence of artificial scotomas, we also find that such field defects may render the adaptation suboptimal for the task at hand. Thus, visual field defects may not only hinder vision by limiting what the subject sees of the environment but also by limiting the visual system's ability to program efficient eye movements. We speculate this is because of how visual field defects bias the balance between saccade generation and fixation stabilization.
Parker, Jason G; Zalusky, Eric J; Kirbas, Cemil
2014-03-01
Accurate mapping of visual function and selective attention using fMRI is important in the study of human performance as well as in presurgical treatment planning of lesions in or near visual centers of the brain. Conjunctive visual search (CVS) is a useful tool for mapping visual function during fMRI because of its greater activation extent compared with high-capacity parallel search processes. The purpose of this work was to develop and evaluate a CVS that was capable of generating consistent activation in the basic and higher level visual areas of the brain by using a high number of distractors as well as an optimized contrast condition. Images from 10 healthy volunteers were analyzed and brain regions of greatest activation and deactivation were determined using a nonbiased decomposition of the results at the hemisphere, lobe, and gyrus levels. The results were quantified in terms of activation and deactivation extent and mean z-statistic. The proposed CVS was found to generate robust activation of the occipital lobe, as well as regions in the middle frontal gyrus associated with coordinating eye movements and in regions of the insula associated with task-level control and focal attention. As expected, the task demonstrated deactivation patterns commonly implicated in the default-mode network. Further deactivation was noted in the posterior region of the cerebellum, most likely associated with the formation of optimal search strategy. We believe the task will be useful in studies of visual and selective attention in the neuroscience community as well as in mapping visual function in clinical fMRI.
Almeida, Renita A; Dickinson, J Edwin; Maybery, Murray T; Badcock, Johanna C; Badcock, David R
2010-12-01
The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial frequency (RF) patterns with controllable amounts of target/distracter overlap on which high AQ participants showed more efficient search than low AQ observers. The current study extended the design of this search task by adding two lines which traverse the display on random paths sometimes intersecting target/distracters, other times passing between them. As with the EFT, these lines segment and group the display in ways that are task irrelevant. We tested two new groups of observers and found that while RF search was slowed by the addition of segmenting lines for both groups, the high AQ group retained a consistent search advantage (reflected in a shallower gradient for reaction time as a function of set size) over the low AQ group. Further, the high AQ group were significantly faster and more accurate on the EFT compared to the low AQ group. That is, the results from the present RF search task demonstrate that segmentation and grouping created by intersecting lines does not further differentiate the groups and is therefore unlikely to be a critical factor underlying the EFT performance difference. However, once again, we found that superior EFT performance was associated with shallower gradients on the RF search task.
Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search
Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.
2012-01-01
Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766
Grubert, Anna; Eimer, Martin
2013-10-01
To find out whether attentional target selection can be effectively guided by top-down task sets for multiple colors, we measured behavioral and ERP markers of attentional target selection in an experiment where participants had to identify color-defined target digits that were accompanied by a single gray distractor object in the opposite visual field. In the One Color task, target color was constant. In the Two Color task, targets could have one of two equally likely colors. Color-guided target selection was less efficient during multiple-color relative to single-color search, and this was reflected by slower response times and delayed N2pc components. Nontarget-color items that were presented in half of all trials captured attention and gained access to working memory when participants searched for two colors, but were excluded from attentional processing in the One Color task. Results demonstrate qualitative differences in the guidance of attentional target selection between single-color and multiple-color visual search. They suggest that top-down attentional control can be applied much more effectively when it is based on a single feature-specific attentional template.
Explicit awareness supports conditional visual search in the retrieval guidance paradigm.
Buttaccio, Daniel R; Lange, Nicholas D; Hahn, Sowon; Thomas, Rick P
2014-01-01
In four experiments we explored whether participants would be able to use probabilistic prompts to simplify perceptually demanding visual search in a task we call the retrieval guidance paradigm. On each trial a memory prompt appeared prior to (and during) the search task, and the diagnosticity of the prompt(s) was manipulated to provide complete, partial, or non-diagnostic information regarding the target's color on each trial (Experiments 1-3). In Experiment 1 we found that more diagnostic prompts were associated with faster visual search performance. However, similar visual search behavior was observed in Experiment 2 when the diagnosticity of the prompts was eliminated, suggesting that participants in Experiment 1 were merely relying on base rate information to guide search and were not utilizing the prompts. In Experiment 3 participants were informed of the relationship between the prompts and the color of the target, and this was associated with faster search performance relative to Experiment 1, suggesting that the participants were using the prompts to guide search. Additionally, in Experiment 3 a knowledge test was implemented, and performance on this test was associated with qualitative differences in search behavior, such that participants who were able to name the color(s) most associated with the prompts were faster to find the target than participants who were unable to do so. However, in Experiments 1-3 diagnosticity of the memory prompt was manipulated via base rate information, making it possible that participants were merely relying on base rate information to inform search in Experiment 3. In Experiment 4 we manipulated diagnosticity of the prompts without manipulating base rate information and found a pattern of results similar to that of Experiment 3. Together, the results emphasize the importance of base rate and diagnosticity information in visual search behavior.
In the General discussion section we explore how a recent computational model of hypothesis generation (HyGene; Thomas, Dougherty, Sprenger, & Harbison, 2008), linking attention with long-term and working memory, accounts for the present results and provides a useful framework for cued-recall visual search. Copyright © 2013 Elsevier B.V. All rights reserved.
Wolfe, Jeremy M.; Boettcher, Sage E. P.; Josephs, Emilie L.; Cunningham, Corbin A.; Drew, Trafton
2015-01-01
In “hybrid” search tasks, observers hold multiple possible targets in memory while searching for those targets amongst distractor items in visual displays. Wolfe (2012) found that, if the target set is held constant over a block of trials, RTs in such tasks were a linear function of the number of items in the visual display and a linear function of the log of the number of items held in memory. However, in such tasks, the targets can become far more familiar than the distractors. Does this “familiarity” – operationalized here as the frequency and recency with which an item has appeared – influence performance in hybrid tasks? In Experiment 1, we compared searches where distractors appeared with the same frequency as the targets to searches where all distractors were novel. Distractor familiarity did not have any reliable effect on search. In Experiment 2, most distractors were novel but some critical distractors were as common as the targets while others were 4× more common. Familiar distractors did not produce false alarm errors, though they did slightly increase response times (RTs). In Experiment 3, observers successfully searched for the new, unfamiliar item among distractors that, in many cases, had been seen only once before. We conclude that when the memory set is held constant for many trials, item familiarity alone does not cause observers to mistakenly confuse targets with distractors. PMID:26191615
Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search
ERIC Educational Resources Information Center
Calvo, Manuel G.; Nummenmaa, Lauri
2008-01-01
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…
The Forest, the Trees, and the Leaves: Differences of Processing across Development
ERIC Educational Resources Information Center
Krakowski, Claire-Sara; Poirel, Nicolas; Vidal, Julie; Roëll, Margot; Pineau, Arlette; Borst, Grégoire; Houdé, Olivier
2016-01-01
To act and think, children and adults are continually required to ignore irrelevant visual information to focus on task-relevant items. As real-world visual information is organized into structures, we designed a feature visual search task containing 3-level hierarchical stimuli (i.e., local shapes that constituted intermediate shapes that formed…
How visual working memory contents influence priming of visual attention.
Carlisle, Nancy B; Kristjánsson, Árni
2017-04-12
Recent evidence shows that when the contents of visual working memory overlap with targets and distractors in a pop-out search task, intertrial priming is inhibited (Kristjánsson, Sævarsson & Driver, Psychon Bull Rev 20(3):514-521, 2013, Experiment 2, Psychonomic Bulletin and Review). This may reflect an interesting interaction between implicit short-term memory (thought to underlie intertrial priming) and explicit visual working memory. Evidence from a non-pop-out search task suggests that it may specifically be holding distractors in visual working memory that disrupts intertrial priming (Cunningham & Egeth, Psychol Sci 27(4):476-485, 2016, Experiment 2, Psychological Science). We examined whether the inhibition of priming depends on whether feature values in visual working memory overlap with targets or distractors in the pop-out search, and we found that the inhibition of priming resulted from holding distractors in visual working memory. These results are consistent with separate mechanisms of target and distractor effects in intertrial priming, and support the notion that the impact of implicit short-term memory and explicit visual working memory can interact when each provides conflicting attentional signals.
Reduced posterior parietal cortex activation after training on a visual search task.
Bueichekú, Elisenda; Miró-Padilla, Anna; Palomar-García, María-Ángeles; Ventura-Campos, Noelia; Parcet, María-Antonia; Barrós-Loscertales, Alfonso; Ávila, César
2016-07-15
Gaining experience on a cognitive task improves behavioral performance and is thought to enhance brain efficiency. Despite the body of literature already published on the effects of training on brain activation, less research has been carried out on visual search attention processes under well controlled conditions. Thirty-six healthy adults divided into trained and control groups completed a pre-post fMRI study of a letter-based visual search task within a single day. Twelve letters were used as targets and ten as distractors. The trained group completed a training session (840 trials) with half the targets between scans. The effects of training were studied at the behavioral and brain levels by controlling for repetition effects using both between-subjects (trained vs. control groups) and within-subject (trained vs. untrained targets) controls. The trained participants reduced their response times by 31% as a result of training, while maintaining their accuracy scores, whereas the control group hardly changed. Neural results revealed that brain changes associated with visual search training were circumscribed to reduced activation in the posterior parietal cortex (PPC) when controlling for group, and they included inferior occipital areas when controlling for targets. The observed behavioral and brain changes are discussed in relation to automatic behavior development. The observed training-related decreases could be associated with increased neural efficiency in specific key regions for task performance. Copyright © 2016 Elsevier Inc. All rights reserved.
Specific-Token Effects in Screening Tasks: Possible Implications for Aviation Security
ERIC Educational Resources Information Center
Smith, J. David; Redford, Joshua S.; Washburn, David A.; Taglialatela, Lauren A.
2005-01-01
Screeners at airport security checkpoints perform an important categorization task in which they search for threat items in complex x-ray images. But little is known about how the processes of categorization stand up to visual complexity. The authors filled this research gap with screening tasks in which participants searched for members of target…
Visual search performance among persons with schizophrenia as a function of target eccentricity.
Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M
2010-03-01
The current study investigated one possible mechanism of impaired visual attention among patients with schizophrenia: a reduced visual span. Visual span is the region of the visual field from which one can extract information during a single eye fixation. This study hypothesized that schizophrenia-related visual search impairment is mediated, in part, by a smaller visual span. To test this hypothesis, 23 patients with schizophrenia and 22 healthy controls completed a visual search task where the target was pseudorandomly presented at different distances from the center of the display. Response times were analyzed as a function of search condition (feature vs. conjunctive), display size, and target eccentricity. Consistent with previous reports, patient search times were more adversely affected as the number of search items increased in the conjunctive search condition. Importantly, however, patients' conjunctive search times were also impacted to a greater degree by target eccentricity. Moreover, a significant impairment in patients' visual search performance was only evident when targets were more eccentric, and their performance was more similar to healthy controls when the target was located closer to the center of the search display. These results support the hypothesis that a narrower visual span may underlie impaired visual search performance among patients with schizophrenia. Copyright 2010 APA, all rights reserved.
Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J; Shi, Zhuanghua
2016-01-01
Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor 'L's and a target 'T', was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search.
Utz, Kathrin S.; Hankeln, Thomas M. A.; Jung, Lena; Lämmer, Alexandra; Waschbisch, Anne; Lee, De-Hyung; Linker, Ralf A.; Schenk, Thomas
2013-01-01
Background: Despite the high frequency of cognitive impairment in multiple sclerosis, its assessment has not gained entrance into clinical routine yet, due to lack of time-saving and suitable tests for patients with multiple sclerosis. Objective: The aim of the study was to compare the paradigm of visual search with neuropsychological standard tests, in order to identify the test that discriminates best between patients with multiple sclerosis and healthy individuals concerning cognitive functions, without being susceptible to practice effects. Methods: Patients with relapsing remitting multiple sclerosis (n = 38) and age- and gender-matched healthy individuals (n = 40) were tested with common neuropsychological tests and a computer-based visual search task, whereby a target stimulus has to be detected amongst distracting stimuli on a touch screen. Twenty-eight of the healthy individuals were re-tested in order to determine potential practice effects. Results: Mean reaction time reflecting visual attention and movement time indicating motor execution in the visual search task discriminated best between healthy individuals and patients with multiple sclerosis, without practice effects. Conclusions: Visual search is a promising instrument for the assessment of cognitive functions and potentially cognitive changes in patients with multiple sclerosis thanks to its good discriminatory power and insusceptibility to practice effects. PMID:24282604
Illusory conjunctions and perceptual grouping in a visual search task in schizophrenia.
Carr, V J; Dewis, S A; Lewin, T J
1998-07-27
This report describes part of a series of experiments, conducted within the framework of feature integration theory, to determine whether patients with schizophrenia show deficits in preattentive processing. Thirty subjects with a DSM-III-R diagnosis of schizophrenia and 30 age-, gender-, and education-matched normal control subjects completed two computerized experimental tasks, a visual search task assessing the frequency of illusory conjunctions (i.e. false perceptions) under conditions of divided attention (Experiment 3) and a task which examined the effects of perceptual grouping on illusory conjunctions (Experiment 4). We also assessed current symptomatology and its relationship to task performance. Contrary to our hypotheses, schizophrenia subjects did not show higher rates of illusory conjunctions, and the influence of perceptual grouping on the frequency of illusory conjunctions was similar for schizophrenia and control subjects. Nonetheless, specific predictions from feature integration theory about the impact of different target types (Experiment 3) and perceptual groups (Experiment 4) on the likelihood of forming an illusory conjunction were strongly supported, thereby confirming the integrity of the experimental procedures. Overall, these studies revealed no firm evidence that schizophrenia is associated with a preattentive abnormality in visual search using stimuli that differ on the basis of physical characteristics.
Stimulus information contaminates summation tests of independent neural representations of features
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2002-01-01
Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with an increasing number of available features is typically attributed to the summation, or combination, of information across independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli was chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with increasing number of features, implying that the improvement observed with additional features may be due to stimulus information and not the combination across independent features.
Eye-Tracking as a Tool to Evaluate Functional Ability in Everyday Tasks in Glaucoma.
Kasneci, Enkelejda; Black, Alex A; Wood, Joanne M
2017-01-01
To date, few studies have investigated the eye movement patterns of individuals with glaucoma while they undertake everyday tasks in real-world settings. While some of these studies have reported possible compensatory gaze patterns in those with glaucoma who demonstrated good task performance despite their visual field loss, little is known about the complex interaction between field loss and visual scanning strategies and the impact on task performance and, consequently, on quality of life. We review existing approaches that have quantified the effect of glaucomatous visual field defects on the ability to undertake everyday activities through the use of eye movement analysis. Furthermore, we discuss current developments in eye-tracking technology and the potential for combining eye-tracking with virtual reality and advanced analytical approaches. Recent technological developments suggest that systems based on eye-tracking have the potential to assist individuals with glaucomatous loss to maintain or even improve their performance on everyday tasks and hence enhance their long-term quality of life. We discuss novel approaches for studying the visual search behavior of individuals with glaucoma that have the potential to assist individuals with glaucoma, through the use of personalized programs that take into consideration the individual characteristics of their remaining visual field and visual search behavior.
Attentional Predictors of 5-month-olds' Performance on a Looking A-not-B Task.
Marcovitch, Stuart; Clearfield, Melissa W; Swingler, Margaret; Calkins, Susan D; Bell, Martha Ann
2016-01-01
In the first year of life, the ability to search for hidden objects is an indicator of object permanence and, when multiple locations are involved, executive function (i.e., inhibition, cognitive flexibility, and working memory). The current study was designed to examine attentional predictors of search in 5-month-old infants (as measured by the looking A-not-B task), and whether levels of maternal education moderated the effect of the predictors. Specifically, in a separate task, the infants were shown a unique puppet, and we measured the percentage of time attending to the puppet, as well as the length of the longest look (i.e., peak fixation) directed towards the puppet. Across the entire sample (N = 390), the percentage of time attending to the puppet was positively related to performance on the visual A-not-B task. However, for infants whose mothers had not completed college, having a shorter peak looking time (after controlling for percentage of time) was also a predictor of visual A-not-B performance. The role of attention, peak fixation, and maternal education in visual search is discussed.
Lin, I-Mei; Fan, Sheng-Yu; Huang, Tiao-Lai; Wu, Wan-Ting; Li, Shi-Ming
2013-12-01
Visual search is an important attention process that precedes information processing. Visual search also mediates the relationship between cognitive function (attention) and social cognition (such as facial expression identification). However, the association between visual attention and social cognition in patients with schizophrenia remains unknown. The purposes of this study were to examine the differences in visual search performance and facial expression identification between patients with schizophrenia and normal controls, and to explore the relationship between visual search performance and facial expression identification in patients with schizophrenia. Fourteen patients with schizophrenia (mean age = 46.36±6.74) and 15 normal controls (mean age = 40.87±9.33) participated in this study. The visual search task, including feature search and conjunction search, and the Japanese and Caucasian Facial Expressions of Emotion were administered. Patients with schizophrenia had worse visual search performance in both feature search and conjunction search than normal controls, as well as worse facial expression identification, especially for surprise and sadness. In addition, there were negative associations between visual search performance and facial expression identification in patients with schizophrenia, especially for surprise and sadness. However, this phenomenon was not observed in normal controls. Patients with schizophrenia who had visual search deficits showed impaired facial expression identification. Increasing their ability in visual search and facial expression identification may improve their social function and interpersonal relationships.
Similarity relations in visual search predict rapid visual categorization
Mohan, Krithika; Arun, S. P.
2012-01-01
How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947
Parker, Jason G; Zalusky, Eric J; Kirbas, Cemil
2014-01-01
Background: Accurate mapping of visual function and selective attention using fMRI is important in the study of human performance as well as in presurgical treatment planning of lesions in or near visual centers of the brain. Conjunctive visual search (CVS) is a useful tool for mapping visual function during fMRI because of its greater activation extent compared with high-capacity parallel search processes. Aims: The purpose of this work was to develop and evaluate a CVS that was capable of generating consistent activation in the basic and higher level visual areas of the brain by using a high number of distractors as well as an optimized contrast condition. Materials and methods: Images from 10 healthy volunteers were analyzed and brain regions of greatest activation and deactivation were determined using a nonbiased decomposition of the results at the hemisphere, lobe, and gyrus levels. The results were quantified in terms of activation and deactivation extent and mean z-statistic. Results: The proposed CVS was found to generate robust activation of the occipital lobe, as well as regions in the middle frontal gyrus associated with coordinating eye movements and in regions of the insula associated with task-level control and focal attention. As expected, the task demonstrated deactivation patterns commonly implicated in the default-mode network. Further deactivation was noted in the posterior region of the cerebellum, most likely associated with the formation of optimal search strategy. Conclusion: We believe the task will be useful in studies of visual and selective attention in the neuroscience community as well as in mapping visual function in clinical fMRI. PMID:24683515
Visual attention in a complex search task differs between honeybees and bumblebees.
Morawetz, Linde; Spaethe, Johannes
2012-07-15
Mechanisms of spatial attention are used when the amount of gathered information exceeds processing capacity. Such mechanisms have been proposed in bees, but have not yet been experimentally demonstrated. We provide evidence that selective attention influences the foraging performance of two social bee species, the honeybee Apis mellifera and the bumblebee Bombus terrestris. Visual search tasks, originally developed for application in human psychology, were adapted for behavioural experiments on bees. We examined the impact of distracting visual information on search performance, which we measured as error rate and decision time. We found that bumblebees were significantly less affected by distracting objects than honeybees. Based on the results, we conclude that the search mechanism in honeybees is serial-like, whereas in bumblebees it shows the characteristics of a restricted parallel-like search. Furthermore, the bees differed in their strategy to solve the speed-accuracy trade-off. Whereas bumblebees displayed slow but correct decision-making, honeybees exhibited fast and inaccurate decision-making. We propose two neuronal mechanisms of visual information processing that account for the different responses between honeybees and bumblebees, and we correlate species-specific features of the search behaviour to differences in habitat and life history.
Kane, Michael J; Poole, Bradley J; Tuholski, Stephen W; Engle, Randall W
2006-07-01
The executive attention theory of working memory capacity (WMC) proposes that measures of WMC broadly predict higher order cognitive abilities because they tap important and general attention capabilities (R. W. Engle & M. J. Kane, 2004). Previous research demonstrated WMC-related differences in attention tasks that required restraint of habitual responses or constraint of conscious focus. To further specify the executive attention construct, the present experiments sought boundary conditions of the WMC-attention relation. Three experiments correlated individual differences in WMC, as measured by complex span tasks, and executive control of visual search. In feature-absence search, conjunction search, and spatial configuration search, WMC was unrelated to search slopes, although they were large and reliably measured. Even in a search task designed to require the volitional movement of attention (J. M. Wolfe, G. A. Alvarez, & T. S. Horowitz, 2000), WMC was irrelevant to performance. Thus, WMC is not associated with all demanding or controlled attention processes, which poses problems for some general theories of WMC. Copyright 2006 APA, all rights reserved.
Behavior and neural basis of near-optimal visual search
Ma, Wei Ji; Navalpakkam, Vidhya; Beck, Jeffrey M; van den Berg, Ronald; Pouget, Alexandre
2013-01-01
The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance. PMID:21552276
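For a Gaussian, equal-variance version of the detection task described above, the optimal nonlinear integration rule can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the target and distractor means, and the noise levels are assumptions chosen for clarity.

```python
import math

def decision_variable(x, sigma, mu_t=1.0, mu_d=0.0):
    """Log-likelihood ratio for 'one target at an unknown location among
    N items' under equal-variance Gaussian noise. Each item's local
    evidence is weighted by its precision (1 / sigma_i^2), then the items
    are combined with a log-mean-exp rule rather than a simple sum."""
    n = len(x)
    local = [(mu_t - mu_d) / (s * s) * (xi - (mu_t + mu_d) / 2.0)
             for xi, s in zip(x, sigma)]
    # log of the average of exp(local); shift by the max for stability
    m = max(local)
    return m + math.log(sum(math.exp(l - m) for l in local) / n)
```

A positive value favors "target present": items with low reliability (large sigma) contribute weak local evidence, which is exactly the trial-by-trial reweighting the abstract attributes to near-optimal observers.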
Visual-search models for location-known detection tasks
NASA Astrophysics Data System (ADS)
Gifford, H. C.; Karbaschi, Z.; Banerjee, K.; Das, M.
2017-03-01
Lesion-detection studies that analyze a fixed target position are generally considered predictive of studies involving lesion search, but the extent of the correlation often goes untested. The purpose of this work was to develop a visual-search (VS) model observer for location-known tasks that, coupled with previous work on localization tasks, would allow efficient same-observer assessments of how search and other task variations can alter study outcomes. The model observer featured adjustable parameters to control the search radius around the fixed lesion location and the minimum separation between suspicious locations. Comparisons were made against human observers, a channelized Hotelling observer and a nonprewhitening observer with eye filter in a two-alternative forced-choice study with simulated lumpy background images containing stationary anatomical and quantum noise. These images modeled single-pinhole nuclear medicine scans with different pinhole sizes. When the VS observer's search radius was optimized with training images, close agreement was obtained with human-observer results. Some performance differences between the humans could be explained by varying the model observer's separation parameter. The range of optimal pinhole sizes identified by the VS observer was in agreement with the range determined with the channelized Hotelling observer.
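The two adjustable parameters described above, a search radius around the known lesion location and a minimum separation between suspicious locations, can be sketched as a simple scan-and-suppress scoring rule. The function name, the response-map representation, and the greedy suppression scheme are illustrative assumptions, not the authors' implementation.

```python
import math

def vs_observer_score(response_map, center, radius, min_sep):
    """Score one location-known trial: keep only candidate points within
    `radius` of the cued lesion location, greedily discard points closer
    than `min_sep` to an already-accepted point, and return the largest
    surviving response as the trial's rating."""
    # response_map: dict mapping (x, y) -> local filter response
    cands = [(r, p) for p, r in response_map.items()
             if math.dist(p, center) <= radius]
    cands.sort(reverse=True)  # strongest responses first
    kept = []
    for r, p in cands:
        if all(math.dist(p, q) >= min_sep for _, q in kept):
            kept.append((r, p))
    return kept[0][0] if kept else 0.0
```

Shrinking `radius` toward zero recovers a strictly location-known observer, while enlarging it lets the same observer approximate a search task, which is the kind of same-observer comparison the abstract motivates.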
Investigation of Neural Strategies of Visual Search
NASA Technical Reports Server (NTRS)
Krauzlis, Richard J.
2003-01-01
The goal of this project was to measure how neurons in the superior colliculus (SC) change their activity during a visual search task. Specifically, we proposed to measure how the activity of these neurons was altered by the discriminability of visual targets and to test how these changes might predict the changes in the subject's performance. The primary rationale for this study was that understanding how the information encoded by these neurons constrains overall search performance would foster the development of better models of human performance. Work performed during the period supported by this grant has achieved these aims. First, we have recorded from neurons in the superior colliculus (SC) during a visual search task in which the difficulty of the task and the performance of the subject were systematically varied. The results from these single-neuron physiology experiments show that, prior to eye movement onset, the difference in activity across the ensemble of neurons reaches a fixed threshold value, reflecting the operation of a winner-take-all mechanism. Second, we have developed a model of eye movement decisions based on the principle of winner-take-all. The model incorporates the idea that the overt saccade choice reflects only one of the multiple saccades prepared during visual discrimination, consistent with our physiological data. The value of the model is that, unlike previous models, it is able to account for both the latency and the percent correct of saccade choices.
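The threshold-on-the-ensemble-difference idea can be sketched as a deterministic caricature. This is not the authors' model; drift rates, threshold, and time step are arbitrary assumptions, and real SC activity is of course noisy.

```python
def race_to_threshold(drifts, threshold, dt=0.001, max_t=1.0):
    """Caricature of a winner-take-all read-out: each unit accumulates
    activity at its own drift rate, and a saccade is triggered once the
    difference between the leading unit and its strongest competitor
    reaches a fixed threshold. Returns (winner_index, latency_in_s),
    or (None, max_t) if no unit wins in time."""
    acts = [0.0] * len(drifts)
    t = 0.0
    while t < max_t:
        for i, d in enumerate(drifts):
            acts[i] += d * dt
        top = max(range(len(acts)), key=acts.__getitem__)
        rival = max(a for i, a in enumerate(acts) if i != top)
        if acts[top] - rival >= threshold:
            return top, t
        t += dt
    return None, max_t
```

Lower target discriminability corresponds to drift rates that are closer together, so the difference grows more slowly and latency increases, qualitatively matching the harder-search conditions described in the abstract.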
The mechanisms underlying the ASD advantage in visual search
Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S.; Blaser, Erik
2013-01-01
A number of studies have demonstrated that individuals with Autism Spectrum Disorders (ASD) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin & Frith, 2005; Simmons, et al., 2009). This “ASD advantage” was first identified in the domain of visual search by Plaisted and colleagues (Plaisted, O’Riordan, & Baron-Cohen, 1998). Here we survey the findings of visual search studies from the past 15 years that contrasted the performance of individuals with and without ASD. Although there are some minor caveats, the overall consensus is that - across development and a broad range of symptom severity - individuals with ASD reliably outperform controls on visual search. The etiology of the ASD advantage has not been formally specified, but has been commonly attributed to ‘enhanced perceptual discrimination’, a superior ability to visually discriminate between targets and distractors in such tasks (e.g. O’Riordan, 2000). As well, there is considerable evidence for impairments of the attentional network in ASD (for a review, see Keehn, Muller, & Townsend, 2013). We discuss some recent results from our laboratory that support an attentional, rather than perceptual explanation for the ASD advantage in visual search. We speculate that this new conceptualization may offer a better understanding of some of the behavioral symptoms associated with ASD, such as over-focusing and restricted interests. PMID:24091470
On the role of working memory in spatial contextual cueing.
Travis, Susan L; Mattingley, Jason B; Dux, Paul E
2013-01-01
The human visual system receives more information than can be consciously processed. To overcome this capacity limit, we employ attentional mechanisms to prioritize task-relevant (target) information over less relevant (distractor) information. Regularities in the environment can facilitate the allocation of attention, as demonstrated by the spatial contextual cueing paradigm. When observers are exposed repeatedly to a scene and invariant distractor information, learning from earlier exposures enhances the search for the target. Here, we investigated whether spatial contextual cueing draws on spatial working memory resources and, if so, at what level of processing working memory load has its effect. Participants performed 2 tasks concurrently: a visual search task, in which the spatial configuration of some search arrays occasionally repeated, and a spatial working memory task. Increases in working memory load significantly impaired contextual learning. These findings indicate that spatial contextual cueing utilizes working memory resources.
Eye movements and attention in reading, scene perception, and visual search.
Rayner, Keith
2009-08-01
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.
Task relevance of emotional information affects anxiety-linked attention bias in visual search.
Dodd, Helen F; Vogt, Julia; Turkileri, Nilgun; Notebaert, Lies
2017-01-01
Task relevance affects emotional attention in healthy individuals. Here, we investigate whether the association between anxiety and attention bias is affected by the task relevance of emotion during an attention task. Participants completed two visual search tasks. In the emotion-irrelevant task, participants were asked to indicate whether a discrepant face in a crowd of neutral, middle-aged faces was old or young. Irrelevant to the task, target faces displayed angry, happy, or neutral expressions. In the emotion-relevant task, participants were asked to indicate whether a discrepant face in a crowd of middle-aged neutral faces was happy or angry (target faces also varied in age). Trait anxiety was not associated with attention in the emotion-relevant task. However, in the emotion-irrelevant task, trait anxiety was associated with a bias for angry over happy faces. These findings demonstrate that the task relevance of emotional information affects conclusions about the presence of an anxiety-linked attention bias. Copyright © 2016 Elsevier B.V. All rights reserved.
Madden, David J.
2007-01-01
Older adults are often slower and less accurate than are younger adults in performing visual-search tasks, suggesting an age-related decline in attentional functioning. Age-related decline in attention, however, is not entirely pervasive. Visual search that is based on the observer’s expectations (i.e., top-down attention) is relatively preserved as a function of adult age. Neuroimaging research suggests that age-related decline occurs in the structure and function of brain regions mediating the visual sensory input, whereas activation of regions in the frontal and parietal lobes is often greater for older adults than for younger adults. This increased activation may represent an age-related increase in the role of top-down attention during visual tasks. To obtain a more complete account of age-related decline and preservation of visual attention, current research is beginning to explore the relation of neuroimaging measures of brain structure and function to behavioral measures of visual attention. PMID:18080001
Binocular Glaucomatous Visual Field Loss and Its Impact on Visual Exploration - A Supermarket Study
Aehling, Kathrin; Heister, Martin; Rosenstiel, Wolfgang; Schiefer, Ulrich; Papageorgiou, Elena
2014-01-01
Advanced glaucomatous visual field loss may critically interfere with quality of life. The purpose of this study was to (i) assess the impact of binocular glaucomatous visual field loss on a supermarket search task as an example of everyday living activities, (ii) to identify factors influencing the performance, and (iii) to investigate the related compensatory mechanisms. Ten patients with binocular glaucoma (GP), and ten healthy-sighted control subjects (GC) were asked to collect twenty different products chosen randomly in two supermarket racks as quickly as possible. The task performance was rated as “passed” or “failed” with regard to the time per correctly collected item. Based on the performance of control subjects, the threshold value for failing the task was defined as μ+3σ (in seconds per correctly collected item). Eye movements were recorded by means of a mobile eye tracker. Eight out of ten patients with glaucoma and all control subjects passed the task. Patients who failed the task needed significantly longer time (111.47 s ±12.12 s) to complete the task than patients who passed (64.45 s ±13.36 s, t-test, p<0.001). Furthermore, patients who passed the task showed a significantly higher number of glances towards the visual field defect (VFD) area than patients who failed (t-test, p<0.05). According to these results, glaucoma patients with defects in the binocular visual field display on average longer search times in a naturalistic supermarket task. However, a considerable number of patients, who compensate by frequent glancing towards the VFD, showed successful task performance. Therefore, systematic exploration of the VFD area seems to be a “time-effective” compensatory mechanism during the present supermarket task. PMID:25162522
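The study's pass/fail criterion (fail if time per correctly collected item exceeds μ+3σ of the control group) is simple enough to state directly in code. The control times below are invented for illustration; only the μ+3σ rule comes from the abstract.

```python
from statistics import mean, stdev

def pass_fail(control_times, patient_time):
    """Classify a participant against the control-derived threshold:
    fail if their seconds-per-correctly-collected-item exceed the
    control group's mean plus three sample standard deviations.
    Returns (verdict, threshold)."""
    mu, sigma = mean(control_times), stdev(control_times)
    threshold = mu + 3 * sigma
    verdict = "passed" if patient_time <= threshold else "failed"
    return verdict, threshold
```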
Foveated model observers to predict human performance in 3D images
NASA Astrophysics Data System (ADS)
Lago, Miguel A.; Abbey, Craig K.; Eckstein, Miguel P.
2017-03-01
We evaluate whether predicting human observer performance in 3D search requires model observers that take peripheral human visual processing into account (foveated models). We show that two different 3D tasks, free search and location-known detection, influence the relative human visual detectability of two signals of different sizes in synthetic backgrounds mimicking the noise found in 3D digital breast tomosynthesis. One of the signals resembled a microcalcification (a small and bright sphere), while the other one was designed to look like a mass (a larger Gaussian blob). We evaluated current standard model observers (Hotelling; Channelized Hotelling; non-prewhitening matched filter with eye filter, NPWE; and non-prewhitening matched filter model, NPW) and showed that they incorrectly predict the relative detectability of the two signals in 3D search. We propose a new model observer (3D Foveated Channelized Hotelling Observer) that incorporates the properties of the visual system over a large visual field (fovea and periphery). We show that the foveated model observer can accurately predict the rank order of detectability of the signals in 3D images for each task. Together, these results motivate the use of a new generation of foveated model observers for predicting image quality for search tasks in 3D imaging modalities such as digital breast tomosynthesis or computed tomography.
NASA Technical Reports Server (NTRS)
Remington, Roger; Williams, Douglas
1986-01-01
Three single-target visual search tasks were used to evaluate a set of cathode-ray tube (CRT) symbols for a helicopter situation display. The search tasks were representative of the information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. Familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols, such as a nearby flashing dot or surrounding square, had a greater disruptive effect on the graphic symbols than did the numeric characters. The results suggest that a symbol set is, in some respects, like a list that must be learned. Factors that affect the time to identify items in a memory task, such as familiarity and visual discriminability, also affect the time to identify symbols. This analogy has broad implications for the design of symbol sets. An attempt was made to model information access with this class of display.
Age-related changes in conjunctive visual search in children with and without ASD.
Iarocci, Grace; Armstrong, Kimberly
2014-04-01
Visual-spatial strengths observed among people with autism spectrum disorder (ASD) may be associated with increased efficiency of selective attention mechanisms such as visual search. In a series of studies, researchers examined the visual search of targets that share features with distractors in a visual array and concluded that people with ASD showed enhanced performance on visual search tasks. However, methodological limitations, the small sample sizes, and the lack of developmental analysis have tempered the interpretations of these results. In this study, we specifically addressed age-related changes in visual search. We examined conjunctive visual search in groups of children with (n = 34) and without ASD (n = 35) at 7-9 years of age when visual search performance is beginning to improve, and later, at 10-12 years, when performance has improved. The results were consistent with previous developmental findings; 10- to 12-year-old children were significantly faster visual searchers than their 7- to 9-year-old counterparts. However, we found no evidence of enhanced search performance among the children with ASD at either the younger or older ages. More research is needed to understand the development of visual search in both children with and without ASD. © 2014 International Society for Autism Research, Wiley Periodicals, Inc.
Motivation and short-term memory in visual search: Attention's accelerator revisited.
Schneider, Daniel; Bonmassar, Claudia; Hickey, Clayton
2018-05-01
A cue indicating the possibility of cash reward will cause participants to perform memory-based visual search more efficiently. A recent study has suggested that this performance benefit might reflect the use of multiple memory systems: when needed, participants may maintain the to-be-remembered object in both long-term and short-term visual memory, with this redundancy benefitting target identification during search (Reinhart, McClenahan & Woodman, 2016). Here we test this compelling hypothesis. We had participants complete a memory-based visual search task involving a reward cue that either preceded presentation of the to-be-remembered target (pre-cue) or followed it (retro-cue). Following earlier work, we tracked memory representation using two components of the event-related potential (ERP): the contralateral delay activity (CDA), reflecting short-term visual memory, and the anterior P170, reflecting long-term storage. We additionally tracked attentional preparation and deployment in the contingent negative variation (CNV) and N2pc, respectively. Results show that only the reward pre-cue impacted our ERP indices of memory. However, both types of cue elicited a robust CNV, reflecting an influence on task preparation, both had equivalent impact on deployment of attention to the target, as indexed in the N2pc, and both had equivalent impact on visual search behavior. Reward prospect thus has an influence on memory-guided visual search, but this does not appear to be necessarily mediated by a change in the visual memory representations indexed by CDA. Our results demonstrate that the impact of motivation on search is not a simple product of improved memory for target templates. Copyright © 2017 Elsevier Ltd. All rights reserved.
Visalli, Antonino; Vallesi, Antonino
2018-01-01
Visual search tasks have often been used to investigate how cognitive processes change with expertise. Several studies have shown visual experts' advantages in detecting objects related to their expertise. Here, we tried to extend these findings by investigating whether professional search experience could boost top-down monitoring processes involved in visual search, independently of advantages specific to objects of expertise. To this aim, we recruited a group of quality-control workers employed in citrus farms. Given the specific features of this type of job, we expected that the extensive employment of monitoring mechanisms during orange selection could enhance these mechanisms even in search situations in which orange-related expertise is not suitable. To test this hypothesis, we compared performance of our experimental group and of a well-matched control group on a computerized visual search task. In one block the target was an orange (expertise target) while in the other block the target was a Smurfette doll (neutral target). The a priori hypothesis was to find an advantage for quality-controllers in those situations in which monitoring was especially involved, that is, when deciding the presence/absence of the target required a more extensive inspection of the search array. Results were consistent with our hypothesis. Quality-controllers were faster in those conditions that extensively required monitoring processes, specifically, the Smurfette-present and both target-absent conditions. No differences emerged in the orange-present condition, which appeared to rely mainly on bottom-up processes. These results suggest that top-down processes in visual search can be enhanced through immersive real-life experience beyond visual expertise advantages. PMID:29497392
Color coding of control room displays: the psychocartography of visual layering effects.
Van Laar, Darren; Deshe, Ofer
2007-06-01
To evaluate which of three color coding methods (monochrome, maximally discriminable, and visual layering) used to code four types of control room display format (bars, tables, trend, mimic) was superior in two classes of task (search, compare). It has recently been shown that color coding of visual layers, as used in cartography, may be used to color code any type of information display, but this has yet to be fully evaluated. Twenty-four people took part in a 2 (task) x 3 (coding method) x 4 (format) wholly repeated measures design. The dependent variables assessed were target location reaction time, error rates, workload, and subjective feedback. Overall, the visual layers coding method produced significantly faster reaction times than did the maximally discriminable and the monochrome methods for both the search and compare tasks. No significant difference in errors was observed between conditions for either task type. Significantly less perceived workload was experienced with the visual layers coding method, which was also rated more highly than the other coding methods on a 14-item visual display quality questionnaire. The visual layers coding method is superior to other color coding methods for control room displays when the method supports the user's task. The visual layers color coding method has wide applicability to the design of all complex information displays utilizing color coding, from the most maplike (e.g., air traffic control) to the most abstract (e.g., abstracted ecological display).
Malavita, Menaka S; Vidyasagar, Trichur R; McKendrick, Allison M
2017-02-01
The purpose of this study was to investigate how, in midperipheral vision, aging affects visual processes that interfere with target detection (crowding and surround suppression) and to determine whether performance on such tasks is related to visuospatial attention as measured by visual search. We investigated the effect of aging on crowding and suppression in detection of a target in peripheral vision, using different types of flanking stimuli. Both thresholds were also obtained while varying the position of the flanker (placed inside or outside of target, relative to fixation). Crowding thresholds were also estimated with spatial uncertainty (jitter). Additionally, we included a visual search task comprising Gabor stimuli to investigate whether performance is related to top-down attention. Twenty young adults (age, 18-32 years; mean age, 26.1 years; 10 males) and 19 older adults (age, 60-74 years; mean age, 70.3 years; 10 males) participated in the study. Older adults showed more surround suppression than the young (F[1,37] = 4.21; P < 0.05), but crowding was unaffected by age. In the younger group, the position of the flanker influenced the strength of crowding, but not the strength of suppression (F[1,39] = 4.11; P < 0.05). Crowding was not affected by spatial jitter of the stimuli. Neither crowding nor surround suppression was predicted by attentional efficiency measured in the visual search task. There was also no significant correlation between crowding and surround suppression. We show that aging does not affect visual crowding but does increase surround suppression of contrast, suggesting that crowding and surround suppression may be distinct visual phenomena. Furthermore, strengths of crowding and surround suppression did not correlate with each other nor could they be predicted by efficiency of visual search.
Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.
Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min
2013-12-01
Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the need to watch videos, and visualizing aggregated information of the search results. We demonstrate the system for searching spatiotemporal attributes from sports video to identify key instances of the team and player performance.
Serial vs. parallel models of attention in visual search: accounting for benchmark RT-distributions.
Moran, Rani; Zehetleitner, Michael; Liesefeld, Heinrich René; Müller, Hermann J; Usher, Marius
2016-10-01
Visual search is central to the investigation of selective visual attention. Classical theories propose that items are identified by serially deploying focal attention to their locations. While this accounts for set-size effects over a continuum of task difficulties, it has been suggested that parallel models can account for such effects equally well. We compared the serial Competitive Guided Search model with a parallel model in their ability to account for RT distributions and error rates from a large visual search data-set featuring three classical search tasks: 1) a spatial configuration search (2 vs. 5); 2) a feature-conjunction search; and 3) a unique feature search (Wolfe, Palmer, & Horowitz, Vision Research, 50(14), 1304-1311, 2010). In the parallel model, each item is represented by a diffusion to two boundaries (target-present/absent); the search corresponds to a parallel race between these diffusors. The parallel model was highly flexible in that it allowed both for a parametric range of capacity limitation and for set-size adjustments of identification boundaries. Furthermore, a quit unit allowed for a continuum of search-quitting policies when the target is not found, with "single-item inspection" and exhaustive searches comprising its extremes. The serial model was found to be superior to the parallel model, even before penalizing the parallel model for its increased complexity. We discuss the implications of the results and the need for future studies to resolve the debate.
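The parallel-race idea can be sketched in simulation. This is a toy version under our own assumptions (arbitrary drift, noise, bound, and time-step values), not the fitted model from the paper: each item diffuses independently to a "target" or "distractor" bound, the search responds "present" as soon as any item hits its target bound, and "absent" only after every item finishes at its distractor bound (an exhaustive quitting policy, one extreme of the quit-unit continuum).

```python
import random

def diffuse(rng, drift, bound=1.0, dt=0.01, noise=0.5, max_steps=10000):
    """Single item: a diffusion toward +bound ('this is the target')
    or -bound ('this is a distractor'). Returns (verdict, steps),
    where verdict is +1 or -1."""
    x, step = 0.0, 0
    while abs(x) < bound and step < max_steps:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        step += 1
    return (1 if x >= bound else -1), step

def parallel_search(rng, drifts):
    """Parallel race over independent diffusors. Because the items are
    independent, simulating each to completion and taking the earliest
    target-bound crossing is equivalent to running the race online."""
    results = [diffuse(rng, d) for d in drifts]
    present_times = [steps for verdict, steps in results if verdict == 1]
    if present_times:
        return "present", min(present_times)        # first finisher wins
    return "absent", max(s for _, s in results)     # wait for all items
```

Making the decision boundaries depend on set size, or capping how many diffusors run at full rate, would add the boundary-adjustment and capacity-limitation parameters the abstract describes.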
[Eccentricity-dependent influence of amodal completion on visual search].
Shirama, Aya; Ishiguchi, Akira
2009-06-01
Does amodal completion occur homogeneously across the visual field? Rensink and Enns (1998) found that visual search for efficiently-detected fragments became inefficient when observers perceived the fragments as a partially-occluded version of a distractor due to a rapid completion process. We examined the effect of target eccentricity in Rensink and Enns's tasks and a few additional tasks by magnifying the stimuli in the peripheral visual field to compensate for the loss of spatial resolution (M-scaling; Rovamo & Virsu, 1979). We found that amodal completion disrupted the efficient search for the salient fragments (i.e., target) even when the target was presented at high eccentricity (within 17 deg). In addition, the configuration effect of the fragments, which produced amodal completion, increased with eccentricity while the same target was detected efficiently at the lowest eccentricity. This eccentricity effect is different from a previously-reported eccentricity effect where M-scaling was effective (Carrasco & Frieder, 1997). These findings indicate that the visual system has a basis for rapid completion across the visual field, but the stimulus representations constructed through amodal completion have eccentricity-dependent properties.
Overcoming hurdles in translating visual search research between the lab and the field.
Clark, Kait; Cain, Matthew S; Adamo, Stephen H; Mitroff, Stephen R
2012-01-01
Research in visual search can be vital to improving performance in careers such as radiology and airport security screening. In these applied, or "field," searches, accuracy is critical, and misses are potentially fatal; however, despite the importance of performing optimally, radiological and airport security searches are nevertheless flawed. Extensive basic research in visual search has revealed cognitive mechanisms responsible for successful visual search as well as a variety of factors that tend to inhibit or improve performance. Ideally, the knowledge gained from such laboratory-based research could be directly applied to field searches, but several obstacles stand in the way of straightforward translation; the tightly controlled visual searches performed in the lab can be drastically different from field searches. For example, they can differ in terms of the nature of the stimuli, the environment in which the search is taking place, and the experience and characteristics of the searchers themselves. The goal of this chapter is to discuss these differences and how they can present hurdles to translating lab-based research to field-based searches. Specifically, most search tasks in the lab entail searching for only one target per trial, and the targets occur relatively frequently, but field searches may contain an unknown and unlimited number of targets, and the occurrence of targets can be rare. Additionally, participants in lab-based search experiments often perform under neutral conditions and have no formal training or experience in search tasks; conversely, career searchers may be influenced by the motivation to perform well or anxiety about missing a target, and they have undergone formal training and accumulated significant experience searching. This chapter discusses recent work that has investigated the impacts of these differences to determine how each factor can influence search performance. 
Knowledge gained from the scientific exploration of search can be applied to field searches but only when considering and controlling for the differences between lab and field.
Atabaki, A; Marciniak, K; Dicke, P W; Karnath, H-O; Thier, P
2014-03-01
Distinguishing a target from distractors during visual search is crucial for goal-directed behaviour. The more distractors that are presented with the target, the larger is the subject's error rate. This observation defines the set-size effect in visual search. Neurons in areas related to attention and eye movements, like the lateral intraparietal area (LIP) and frontal eye field (FEF), diminish their firing rates when the number of distractors increases, in line with the behavioural set-size effect. Furthermore, human imaging studies that have tried to delineate cortical areas modulating their blood oxygenation level-dependent (BOLD) response with set size have yielded contradictory results. In order to test whether BOLD imaging of the rhesus monkey cortex yields results consistent with the electrophysiological findings and, moreover, to clarify if additional other cortical regions beyond the two hitherto implicated are involved in this process, we studied monkeys while performing a covert visual search task. When varying the number of distractors in the search task, we observed a monotonic increase in error rates when search time was kept constant as was expected if monkeys resorted to a serial search strategy. Visual search consistently evoked robust BOLD activity in the monkey FEF and a region in the intraparietal sulcus in its lateral and middle part, probably involving area LIP. Whereas the BOLD response in the FEF did not depend on set size, the LIP signal increased in parallel with set size. These results demonstrate the virtue of BOLD imaging in monkeys when trying to delineate cortical areas underlying a cognitive process like visual search. However, they also demonstrate the caution needed when inferring neural activity from BOLD activity. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
The Comparison of Visual Working Memory Representations with Perceptual Inputs
Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew
2008-01-01
The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755
ERIC Educational Resources Information Center
Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan
2006-01-01
In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…
Thinking in Pictures as a cognitive account of autism.
Kunda, Maithilee; Goel, Ashok K
2011-09-01
We analyze the hypothesis that some individuals on the autism spectrum may use visual mental representations and processes to perform certain tasks that typically developing individuals perform verbally. We present a framework for interpreting empirical evidence related to this "Thinking in Pictures" hypothesis and then provide comprehensive reviews of data from several different cognitive tasks, including the n-back task, serial recall, dual task studies, Raven's Progressive Matrices, semantic processing, false belief tasks, visual search, spatial recall, and visual recall. We also discuss the relationships between the Thinking in Pictures hypothesis and other cognitive theories of autism including Mindblindness, Executive Dysfunction, Weak Central Coherence, and Enhanced Perceptual Functioning.
Effects of speech intelligibility level on concurrent visual task performance.
Payne, D G; Peters, L J; Birkmire, D P; Bonto, M A; Anastasi, J S; Wenger, M J
1994-09-01
Four experiments were performed to determine if changes in the level of speech intelligibility in an auditory task have an impact on performance in concurrent visual tasks. The auditory task used in each experiment was a memory search task in which subjects memorized a set of words and then decided whether auditorily presented probe items were members of the memorized set. The visual tasks used were an unstable tracking task, a spatial decision-making task, a mathematical reasoning task, and a probability monitoring task. Results showed that performance on the unstable tracking and probability monitoring tasks was unaffected by the level of speech intelligibility on the auditory task, whereas accuracy in the spatial decision-making and mathematical processing tasks was significantly worse at low speech intelligibility levels. The findings are interpreted within the framework of multiple resource theory.
Visual search and attention: an overview.
Davis, Elizabeth T; Palmer, John
2004-01-01
This special feature issue is devoted to attention and visual search. Attention is a central topic in psychology and visual search is both a versatile paradigm for the study of visual attention and a topic of study in itself. Visual search depends on sensory, perceptual, and cognitive processes. As a result, the search paradigm has been used to investigate a diverse range of phenomena. Manipulating the search task can vary the demands on attention. In turn, attention modulates visual search by selecting and limiting the information available at various levels of processing. Focusing on the intersection of attention and search provides a relatively structured window into the wide world of attentional phenomena. In particular, the effects of divided attention are illustrated by the effects of set size (the number of stimuli in a display) and the effects of selective attention are illustrated by cueing subsets of stimuli within the display. These two phenomena provide the starting point for the articles in this special issue. The articles are organized into four general topics to help structure the issues of attention and search.
Rolke, Bettina; Festl, Freya; Seibold, Verena C
2016-11-01
We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogeneous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented at an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes of the N2pc, SPCN, and P3 were enhanced by spatial attention, indicating a processing benefit for relevant stimulus features on the attended side. Temporal attention accelerated stimulus processing, as indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently of spatial attention and feature-based attention; this supports the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.
Survival Processing Enhances Visual Search Efficiency.
Cho, Kit W
2018-05-01
Words rated for their survival relevance are remembered better than words rated with other well-known mnemonic techniques. This finding, known as the survival advantage effect and replicated in many studies, suggests that our memory systems have been molded by natural selection pressures. In two experiments, the present study used a visual search task to examine whether there is likewise a survival advantage for our visual systems. Participants rated words for their survival relevance or for their pleasantness before locating that object's picture in a search array of 8 or 16 objects. Although there was no difference in search times between the two rating conditions when set size was 8, survival processing reduced visual search times when set size was 16. These findings reflect a search efficiency effect and suggest that, like our memory systems, our visual systems are also tuned toward self-preservation.
Threat captures attention but does not affect learning of contextual regularities.
Yamaguchi, Motonori; Harwood, Sarah L
2017-04-01
Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.
Attention-based long-lasting sensitization and suppression of colors.
Tseng, Chia-Huei; Vidnyanszky, Zoltan; Papathomas, Thomas; Sperling, George
2010-02-22
In contrast to the short-duration and quick reversibility of attention, a long-term sensitization to color based on protracted attention in a visual search task was reported by Tseng, Gobell, and Sperling (2004). When subjects were trained for a few hours to search for a red object among colored distracters, sensitivity to red was increased for weeks. This sensitization was quantified using ambiguous motion displays containing isoluminant red-green and texture-contrast gratings, in which the perceived motion-direction depended both on the attended color and on the relative red-green saturation. Such long-term effects could result from either sensitization of the attended color, or suppression of unattended colors, or a combination of the two. Here we unconfound these effects by eliminating one of the paired colors of the motion display from the search task. The other paired color in the motion display can then be either a target or a distracter in the search task. Thereby, we separately measure the effect of attention on sensitizing the target color or suppressing distracter colors. The results indicate that only sensitization of the target color in the search task is statistically significant for the present experimental conditions. We conclude that selective attention to a color in our visual search task caused long-term sensitization to the attended color but not significant long-term suppression of the unattended color. Copyright 2009 Elsevier Ltd. All rights reserved.
Perceptual load corresponds with factors known to influence visual search
Roper, Zachary J. J.; Cosman, Joshua D.; Vecera, Shaun P.
2014-01-01
One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a non-circular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spill-over to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. These results suggest that perceptual load might be defined in part by well-characterized, continuous factors that influence visual search. PMID:23398258
Brehmer, Matthew; Ingram, Stephen; Stray, Jonathan; Munzner, Tamara
2014-12-01
For an investigative journalist, a large collection of documents obtained from a Freedom of Information Act request or a leak is both a blessing and a curse: such material may contain multiple newsworthy stories, but it can be difficult and time consuming to find relevant documents. Standard text search is useful, but even if the search target is known it may not be possible to formulate an effective query. In addition, summarization is an important non-search task. We present Overview, an application for the systematic analysis of large document collections based on document clustering, visualization, and tagging. This work contributes to the small set of design studies which evaluate a visualization system "in the wild", and we report on six case studies where Overview was voluntarily used by self-initiated journalists to produce published stories. We find that the frequently-used language of "exploring" a document collection is both too vague and too narrow to capture how journalists actually used our application. Our iterative process, including multiple rounds of deployment and observations of real world usage, led to a much more specific characterization of tasks. We analyze and justify the visual encoding and interaction techniques used in Overview's design with respect to our final task abstractions, and propose generalizable lessons for visualization design methodology.
Habitual attention in older and young adults.
Jiang, Yuhong V; Koutstaal, Wilma; Twedell, Emily L
2016-12-01
Age-related decline is pervasive in tasks that require explicit learning and memory, but such reduced function is not universally observed in tasks involving incidental learning. It is unknown whether habitual attention, which involves incidental probabilistic learning, is preserved in older adults. Previous research on habitual attention investigated contextual cuing in young and older adults, yet contextual cuing relies not only on spatial attention but also on context processing. Here we isolated habitual attention from context processing in young and older adults. Using a challenging visual search task in which the probability of finding targets was greater in 1 of 4 visual quadrants in all contexts, we examined the acquisition, persistence, and spatial reference frame of habitual attention. Although older adults showed slower visual search times and steeper search slopes (more time per additional item in the search display), like young adults they rapidly acquired a strong, persistent search habit toward the high-probability quadrant. In addition, habitual attention was strongly viewer-centered in both young and older adults. The demonstration of preserved viewer-centered habitual attention in older adults suggests that it may be used to counter declines in controlled attention. This, in turn, suggests the importance, for older adults, of maintaining habit-related spatial arrangements.
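The search slope mentioned above (the extra time per additional display item) is simply the slope of a straight line fit to mean reaction time against set size. A minimal sketch with made-up reaction times, not data from the study:

```python
import numpy as np

def search_slope(set_sizes, mean_rts):
    """Fit mean reaction time (ms) vs. set size; return (slope ms/item, intercept ms)."""
    slope, intercept = np.polyfit(set_sizes, mean_rts, deg=1)
    return slope, intercept

# Hypothetical data: an efficient (parallel-like) and an inefficient (serial-like) search.
sizes = [4, 8, 16, 32]
efficient   = [520, 525, 532, 545]   # shallow slope, ~1 ms/item
inefficient = [560, 680, 920, 1400]  # steep slope, 30 ms/item
print(search_slope(sizes, efficient))
print(search_slope(sizes, inefficient))
```

Slopes near zero are conventionally read as parallel ("pop-out") search; slopes of tens of milliseconds per item as serial, attention-demanding search.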
Visual Search Asymmetries within Color-Coded and Intensity-Coded Displays
ERIC Educational Resources Information Center
Yamani, Yusuke; McCarley, Jason S.
2010-01-01
Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information.…
Dynamic Prototypicality Effects in Visual Search
ERIC Educational Resources Information Center
Kayaert, Greet; Op de Beeck, Hans P.; Wagemans, Johan
2011-01-01
In recent studies, researchers have discovered a larger neural activation for stimuli that are more extreme exemplars of their stimulus class, compared with stimuli that are more prototypical. This has been shown for faces as well as for familiar and novel shape classes. We used a visual search task to look for a behavioral correlate of these…
Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
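The abstract does not reproduce the authors' equations, but the core of a signal-detection accuracy model for target localization can be written in a few lines: the target is localized correctly when its noisy internal response exceeds those of all m − 1 distractor locations, giving P(correct) = ∫ φ(x − d′) Φ(x)^(m−1) dx. A numerical sketch of that max rule, not the paper's exact model:

```python
import math
import numpy as np

def p_correct_localization(d_prime, m):
    """Signal-detection max-rule accuracy for locating one target among m locations:
    P(correct) = integral of phi(x - d') * Phi(x)**(m-1), evaluated on a fine grid."""
    x = np.linspace(-8.0, 12.0, 20001)
    dx = x[1] - x[0]
    phi = np.exp(-0.5 * (x - d_prime) ** 2) / math.sqrt(2.0 * math.pi)
    Phi = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in x])
    return float(np.sum(phi * Phi ** (m - 1)) * dx)

print(round(p_correct_localization(0.0, 4), 3))  # chance level: 1/4 = 0.25
print(round(p_correct_localization(2.0, 4), 3))  # well above chance
```

At d′ = 0 the formula reduces to chance (1/m), and accuracy grows with d′ and shrinks as the number of candidate locations increases, which is the set-size effect in accuracy terms.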
Visual grouping under isoluminant condition: impact of mental fatigue
NASA Astrophysics Data System (ADS)
Pladere, Tatjana; Bete, Diana; Skilters, Jurgis; Krumina, Gunta
2016-09-01
Instead of selecting arbitrary elements, visual perception favors only certain groupings of information. There is ample evidence that visual attention and perception are substantially impaired in the presence of mental fatigue. The question is how visual grouping, which can be considered a bottom-up controlled neuronal gain mechanism, is influenced. The main purpose of our study was to determine the influence of mental fatigue on the visual grouping of specific information, namely the color and configuration of stimuli, in a psychophysical experiment. Individuals provided subjective data by filling in a questionnaire about their health and general state. Objective evidence was obtained in a specially designed visual search task in which achromatic and chromatic isoluminant stimuli were used in order to avoid the so-called pop-out effect caused by differences in light intensity. Each individual was instructed to identify the symbols with an aperture in the same direction in four tasks. The color component differed across the visual search tasks according to the goals of the study. The results reveal that visual grouping is completed faster when visual stimuli share the same color and aperture direction. The shortest reaction times occurred in the evening. Moreover, the reaction-time results suggest that two grouping processes compete for selective attention in the visual system when similarity in color conflicts with similarity in stimulus configuration. This effect increases significantly in the presence of mental fatigue, but it does not strongly influence the accuracy of task performance.
Contextual cueing: implicit learning and memory of visual context guides spatial attention.
Chun, M M; Jiang, Y
1998-06-01
Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.
Lévy-like diffusion in eye movements during spoken-language comprehension.
Stephen, Damian G; Mirman, Daniel; Magnuson, James S; Dixon, James A
2009-05-01
This study explores the diffusive properties of human eye movements during a language comprehension task. In this task, adults are given auditory instructions to locate named objects on a computer screen. Although it has been convention to model visual search as standard Brownian diffusion, we find evidence that eye movements are hyperdiffusive. Specifically, we use comparisons of maximum-likelihood fit as well as standard deviation analysis and diffusion entropy analysis to show that visual search during language comprehension exhibits Lévy-like rather than Gaussian diffusion.
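The Brownian-versus-Lévy distinction above can be illustrated by estimating a diffusion exponent H from simulated walks, where median |x(t)| grows as t^H: H ≈ 0.5 for Brownian motion, H > 0.5 for superdiffusive, Lévy-like motion. This is a toy simulation, not the paper's diffusion entropy analysis:

```python
import numpy as np

def displacement_exponent(steps):
    """Estimate the diffusion exponent H from walks (trials x time):
    median |x(t)| ~ t**H, via a log-log linear fit."""
    positions = np.cumsum(steps, axis=1)
    t = np.arange(1, steps.shape[1] + 1)
    spread = np.median(np.abs(positions), axis=0)
    H, _ = np.polyfit(np.log(t), np.log(spread), 1)
    return H

rng = np.random.default_rng(0)
n_trials, n_steps, alpha = 4000, 200, 1.5

gauss_steps = rng.standard_normal((n_trials, n_steps))
# Heavy-tailed (Pareto) step lengths with random signs: a crude Levy-like walk.
u = 1.0 - rng.random((n_trials, n_steps))  # uniform in (0, 1]
levy_steps = rng.choice([-1.0, 1.0], (n_trials, n_steps)) * u ** (-1.0 / alpha)

print(round(displacement_exponent(gauss_steps), 2))  # near 0.5 (Brownian)
print(round(displacement_exponent(levy_steps), 2))   # above 0.5 (superdiffusive)
```

For symmetric Pareto-tailed steps with tail index α < 2 the walk converges to an α-stable process whose typical displacement scales as t^(1/α), so with α = 1.5 the fitted exponent sits well above the Brownian value of 0.5.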
Sheridan, Heather; Reingold, Eyal M
2017-03-01
To explore the perceptual component of chess expertise, we monitored the eye movements of expert and novice chess players during a chess-related visual search task that tested anecdotal reports that a key differentiator of chess skill is the ability to visualize the complex moves of the knight piece. Specifically, chess players viewed an array of four minimized chessboards, and they rapidly searched for the target board that allowed a knight piece to reach a target square in three moves. On each trial, there was only one target board (i.e., the "Yes" board), and for the remaining "lure" boards, the knight's path was blocked on either the first move (the "Easy No" board) or the second move (i.e., "the Difficult No" board). As evidence that chess experts can rapidly differentiate complex chess-related visual patterns, the experts (but not the novices) showed longer first-fixation durations on the "Yes" board relative to the "Difficult No" board. Moreover, as hypothesized, the task strongly differentiated chess skill: Reaction times were more than four times faster for the experts relative to novices, and reaction times were correlated with within-group measures of expertise (i.e., official chess ratings, number of hours of practice). These results indicate that a key component of chess expertise is the ability to rapidly recognize complex visual patterns.
The Mechanisms Underlying the ASD Advantage in Visual Search.
Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S; Blaser, Erik
2016-05-01
A number of studies have demonstrated that individuals with autism spectrum disorders (ASDs) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin and Frith in Neuron 48:497-507, 2005; Simmons et al. in Vis Res 49:2705-2739, 2009). This "ASD advantage" was first identified in the domain of visual search by Plaisted et al. (J Child Psychol Psychiatry 39:777-783, 1998). Here we survey the findings of visual search studies from the past 15 years that contrasted the performance of individuals with and without ASD. Although there are some minor caveats, the overall consensus is that, across development and a broad range of symptom severity, individuals with ASD reliably outperform controls on visual search. The etiology of the ASD advantage has not been formally specified, but it has commonly been attributed to 'enhanced perceptual discrimination', a superior ability to visually discriminate between targets and distractors in such tasks (e.g. O'Riordan in Cognition 77:81-96, 2000). There is also considerable evidence for impairments of the attentional network in ASD (for a review, see Keehn et al. in J Child Psychol Psychiatry 37:164-183, 2013). We discuss some recent results from our laboratory that support an attentional, rather than perceptual, explanation for the ASD advantage in visual search. We speculate that this new conceptualization may offer a better understanding of some of the behavioral symptoms associated with ASD, such as over-focusing and restricted interests.
Aging affects the balance between goal-guided and habitual spatial attention.
Twedell, Emily L; Koutstaal, Wilma; Jiang, Yuhong V
2017-08-01
Visual clutter imposes significant challenges for older adults in everyday tasks and often calls for selective processing of relevant information. Previous research has shown that both visual search habits and task goals influence older adults' allocation of spatial attention, but has not examined the relative impact of these two sources of attention when they compete. To examine how aging affects the balance between goal-driven and habitual attention, and to inform our understanding of different attentional subsystems, we tested young and older adults in an adapted visual search task involving a display laid flat on a desk. To induce habitual attention, unbeknownst to participants, the target was more often placed in one quadrant than in the others. All participants rapidly acquired habitual attention toward the high-probability quadrant. We then informed participants where the high-probability quadrant was and instructed them to search that screen location first, but pitted their habit-based, viewer-centered search against this instruction by requiring participants to change their physical position relative to the desk. Both groups prioritized search in the instructed location, but this effect was stronger in young adults than in older adults. In contrast, age did not influence viewer-centered search habits: the two groups showed similar attentional preference for the visual field where the target had most often been found before. Aging disrupted goal-guided but not habitual attention. Product, work, and home design for people of all ages, and especially for older individuals, should take into account the strongly viewer-centered nature of habitual attention.
Encoding of Target Detection during Visual Search by Single Neurons in the Human Brain.
Wang, Shuo; Mamelak, Adam N; Adolphs, Ralph; Rutishauser, Ueli
2018-06-08
Neurons in the primate medial temporal lobe (MTL) respond selectively to visual categories such as faces, contributing to how the brain represents stimulus meaning. However, it remains unknown whether MTL neurons continue to encode stimulus meaning when it changes flexibly as a function of variable task demands imposed by goal-directed behavior. While classically associated with long-term memory, recent lesion and neuroimaging studies show that the MTL also contributes critically to the online guidance of goal-directed behaviors such as visual search. Do such tasks modulate responses of neurons in the MTL, and if so, do their responses mirror bottom-up input from visual cortices or do they reflect more abstract goal-directed properties? To answer these questions, we performed concurrent recordings of eye movements and single neurons in the MTL and medial frontal cortex (MFC) in human neurosurgical patients performing a memory-guided visual search task. We identified a distinct population of target-selective neurons in both the MTL and MFC whose response signaled whether the currently fixated stimulus was a target or distractor. This target-selective response was invariant to visual category and predicted whether a target was detected or missed behaviorally during a given fixation. The response latencies, relative to fixation onset, of MFC target-selective neurons preceded those in the MTL by ∼200 ms, suggesting a frontal origin for the target signal. The human MTL thus represents not only fixed stimulus identity, but also task-specified stimulus relevance due to top-down goal relevance. Copyright © 2018 Elsevier Ltd. All rights reserved.
Object-based Encoding in Visual Working Memory: Evidence from Memory-driven Attentional Capture.
Gao, Zaifeng; Yu, Shixian; Zhu, Chengfeng; Shui, Rende; Weng, Xuchu; Li, Peng; Shen, Mowei
2016-03-09
Visual working memory (VWM) adopts a specific manner of object-based encoding (OBE) to extract perceptual information: whenever one feature dimension is selected for entry into VWM, the others are extracted as well. Most studies demonstrating OBE have probed an 'irrelevant-change distracting effect', in which changes to irrelevant features dramatically affect performance on the target feature. However, the very presence of an irrelevant-feature change may alter participants' processing manner, leading to a false-positive result. The current study conducted a stricter examination of OBE in VWM by probing whether irrelevant features guide the deployment of attention in visual search. Participants memorized an object's colour while ignoring its shape, and concurrently performed a visual search task. They searched for a target line among distractor lines, each embedded within a different object. One object in the search display could match the shape, colour, or both dimensions of the memory item, but this object never contained the target line. Relative to a neutral baseline, in which there was no match between the memory and search displays, search time was significantly prolonged in all match conditions, regardless of whether the memory item was displayed for 100 or 1000 ms. These results suggest that the task-irrelevant shape was extracted into VWM, supporting OBE in VWM.
Loughman, James; Davison, Peter; Flitcroft, Ian
2007-11-01
Preattentive visual search (PAVS) describes rapid and efficient retinal and neural processing capable of immediate target detection in the visual field. Damage to the nerve fibre layer or visual pathway might reduce the efficiency with which the visual system performs such analysis. The purpose of this study was to test the hypothesis that patients with glaucoma are impaired on parallel search tasks, and that this impairment would serve to distinguish glaucoma in early cases. Three groups of observers (glaucoma patients, suspects, and normal individuals) were examined using computer-generated flicker, orientation, and vertical motion displacement targets to assess PAVS efficiency. The task required rapid and accurate localisation of a singularity embedded in a field of 119 homogeneous distractors on either the left or the right-hand side of a computer monitor. All subjects also completed a choice reaction time (CRT) task. Independent-samples t tests revealed PAVS efficiency to be significantly impaired in the glaucoma group compared with both normal and suspect individuals. Performance was impaired in all types of glaucoma tested. Analysis between normal and suspect individuals revealed a significant difference only for motion displacement response times. A similar analysis using a PAVS/CRT index confirmed the glaucoma findings but also showed statistically significant differences between suspect and normal individuals across all target types. A test of PAVS efficiency thus appears capable of differentiating early glaucoma from both normal and suspect cases, and analysis incorporating a PAVS/CRT index enhances the diagnostic capacity to differentiate normal from suspect cases.
Visual Search with Image Modification in Age-Related Macular Degeneration
Wiecek, Emily; Jackson, Mary Lou; Dakin, Steven C.; Bex, Peter
2012-01-01
Purpose. AMD results in loss of central vision and a dependence on low-resolution peripheral vision. While many image enhancement techniques have been proposed, there is a lack of quantitative comparison of the effectiveness of enhancement. We developed a natural visual search task that uses patients' eye movements as a quantitative and functional measure of the efficacy of image modification. Methods. Eye movements of 17 patients (mean age = 77 years) with AMD were recorded while they searched for target objects in natural images. Eight different image modification methods were implemented and included manipulations of local image or edge contrast, color, and crowding. In a subsequent task, patients ranked their preference of the image modifications. Results. Within individual participants, there was no significant difference in search duration or accuracy across eight different image manipulations. When data were collapsed across all image modifications, a multivariate model identified six significant predictors for normalized search duration including scotoma size and acuity, as well as interactions among scotoma size, age, acuity, and contrast (P < 0.05). Additionally, an analysis of image statistics showed no correlation with search performance across all image modifications. Rank ordering of enhancement methods based on participants' preference revealed a trend that participants preferred the least modified images (P < 0.05). Conclusions. There was no quantitative effect of image modification on search performance. A better understanding of low- and high-level components of visual search in natural scenes is necessary to improve future attempts at image enhancement for low vision patients. Different search tasks may require alternative image modifications to improve patient functioning and performance. PMID:22930725
Visual working memory simultaneously guides facilitation and inhibition during visual search.
Dube, Blaire; Basciano, April; Emrich, Stephen M; Al-Aidroos, Naseem
2016-07-01
During visual search, visual working memory (VWM) supports the guidance of attention in two ways: It stores the identity of the search target, facilitating the selection of matching stimuli in the search array, and it maintains a record of the distractors processed during search so that they can be inhibited. In two experiments, we investigated whether the full contents of VWM can be used to support both of these abilities simultaneously. In Experiment 1, participants completed a preview search task in which (a) a subset of search distractors appeared before the remainder of the search items, affording participants the opportunity to inhibit them, and (b) the search target varied from trial to trial, requiring the search target template to be maintained in VWM. We observed the established signature of VWM-based inhibition (a reduced ability to ignore previewed distractors when the number of distractors exceeds VWM's capacity), suggesting that VWM can serve this role while also representing the target template. In Experiment 2, we replicated Experiment 1, but added to the search displays a singleton distractor that sometimes matched the color (a task-irrelevant feature) of the search target, to evaluate capture. We again observed the signature of VWM-based preview inhibition along with attentional capture by (and, thus, facilitation of) singletons matching the target template. These findings indicate that more than one VWM representation can bias attention at a time, and that these representations can separately affect selection through either facilitation or inhibition, placing constraints on existing models of the VWM-based guidance of attention.
Increased Complexities in Visual Search Behavior in Skilled Players for a Self-Paced Aiming Task
Chia, Jingyi S.; Burns, Stephen F.; Barrett, Laura A.; Chow, Jia Y.
2017-01-01
The badminton serve is an important shot for winning a rally in a match. It combines good technique with the ability to accurately integrate visual information from the shuttle, racket, opponent, and intended landing point. Despite its importance and repercussive nature, to date no study has looked at the visual search behaviors during badminton service in the singles discipline. Unlike anticipatory tasks (e.g., shot returns), the serve presents an opportunity to explore the role of visual search behaviors in movement control for self-paced tasks. Accordingly, this study examined skill-related differences in visual behavior during the badminton singles serve. Skilled (n = 12) and less skilled (n = 12) participants performed 30 serves to a live opponent, while real-time eye movements were captured using a mobile gaze registration system. Frame-by-frame analyses of 662 serves were made, and the skilled players took a longer preparatory time before serving. Visual behavior of the skilled players was characterized by a significantly greater number of fixations on more areas of interest per trial than that of the less skilled players. In addition, the skilled players spent a significantly longer time fixating on the court and net, whereas the less skilled players found the shuttle to be more informative. Quiet eye (QE) duration (indicative of superior sports performance), however, did not differ significantly between groups, which has implications for the perceived importance of QE in the badminton serve. Moreover, while visual behavior differed by skill level, considerable individual differences were also observed, especially within the skilled players. This augments the need for not just group-level analyses but individualized analysis for a more accurate representation of visual behavior. Findings from this study thus provide insight into the possible visual search strategies as players serve in net-barrier games. Moreover, this study highlighted an important aspect of badminton relating to deception and the implications of interpreting visual behavior of players. PMID:28659850
Visual search in scenes involves selective and non-selective pathways
Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R
2010-01-01
How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734
Zhang, Qiong; Shi, Jiannong; Luo, Yuejia; Zhao, Daheng; Yang, Jie
2006-05-15
To investigate differences in event-related potential parameters related to children's intelligence, we selected 15 children from an experimental class for intellectually gifted children and 13 intellectually average children as controls; all completed three types of visual search task (Chinese words, English letters, and Arabic numbers). We recorded the electroencephalogram and calculated the peak latencies and amplitudes. Our results suggest comparatively increased P3 amplitudes and shorter P3 latencies in brighter individuals than in less intelligent individuals, but this expected neural-efficiency effect interacted with task content. The differences are explained by a more spatially and temporally coordinated neural network in the more intelligent children.
Contextual cueing of pop-out visual search: when context guides the deployment of attention.
Geyer, Thomas; Zehetleitner, Michael; Müller, Hermann J
2010-05-01
Visual context information can guide attention in demanding (i.e., inefficient) search tasks. When participants are repeatedly presented with identically arranged ('repeated') displays, reaction times are faster relative to newly composed ('non-repeated') displays. The present article examines whether this 'contextual cueing' effect also operates in simple (i.e., efficient) search tasks and, if so, whether it influences target, rather than response, selection. Singleton-feature targets were detected faster when the search items were presented in repeated, rather than non-repeated, arrangements. Importantly, repeated, relative to novel, displays also led to an increase in signal detection accuracy. Thus, contextual cueing can expedite the selection of pop-out targets, most likely by enhancing feature contrast signals at the overall-salience computation stage.
Search guidance is proportional to the categorical specificity of a target cue.
Schmidt, Joseph; Zelinsky, Gregory J
2009-10-01
Visual search studies typically assume the availability of precise target information to guide search, often a picture of the exact target. However, search targets in the real world are often defined categorically and with varying degrees of visual specificity. In five target preview conditions we manipulated the availability of target visual information in a search task for common real-world objects. Previews were: a picture of the target, an abstract textual description of the target, a precise textual description, an abstract + colour textual description, or a precise + colour textual description. Guidance generally increased as information was added to the target preview. We conclude that the information used for search guidance need not be limited to a picture of the target. Although generally less precise, to the extent that visual information can be extracted from a target label and loaded into working memory, this information too can be used to guide search.
Torrens-Burton, Anna; Basoudan, Nasreen; Bayer, Antony J; Tales, Andrea
2017-01-01
This study examines the relationships between two measures of information processing speed associated with executive function (Trail Making Test and a computer-based visual search test), the perceived difficulty of the tasks, and perceived memory function (measured by the Memory Functioning Questionnaire) in older adults (aged 50+ y) with normal general health, cognition (Montreal Cognitive Assessment score of 26+), and mood. The participants were recruited from the community rather than through clinical services, and none had ever sought or received help from a health professional for a memory complaint or mental health problem. For both the trail making and the visual search tests, mean information processing speed was not correlated significantly with perceived memory function. Some individuals did, however, reveal substantially slower information processing speeds (outliers) that may have clinical significance and indicate those who may benefit most from further assessment and follow up. For the trail making, but not the visual search task, higher levels of subjective memory dysfunction were associated with a greater perception of task difficulty. The relationship between actual information processing speed and perceived task difficulty also varied with respect to the task used. These findings highlight the importance of taking into account the type of task and metacognition factors when examining the integrity of information processing speed in older adults, particularly as this measure is now specifically cited as a key cognitive subdomain within the diagnostic framework for neurocognitive disorders. PMID:28984584
Oculomotor Evidence for Top-Down Control following the Initial Saccade
Siebold, Alisha; van Zoest, Wieske; Donk, Mieke
2011-01-01
The goal of the current study was to investigate how salience-driven and goal-driven processes unfold during visual search over multiple eye movements. Eye movements were recorded while observers searched for a target, which was located on (Experiment 1) or defined as (Experiment 2) a specific orientation singleton. This singleton could either be the most, medium, or least salient element in the display. Results were analyzed as a function of response time separately for initial and second eye movements. Irrespective of the search task, initial saccades elicited shortly after the onset of the search display were primarily salience-driven whereas initial saccades elicited after approximately 250 ms were completely unaffected by salience. Initial saccades were increasingly guided in line with task requirements with increasing response times. Second saccades were completely unaffected by salience and were consistently goal-driven, irrespective of response time. These results suggest that stimulus salience affects the visual system only briefly after a visual image enters the brain and has no effect thereafter. PMID:21931603
Does Central Vision Loss Impair Visual Search Performance of Adults More than Children?
Satgunam, PremNandhini; Luo, Gang
2018-05-01
In general, young adults with normal vision show the best visual search performance when compared with children and older adults. Through our study, we show that this trend is not observed in individuals with vision impairment. An interaction effect of vision impairment with visual development and aging is observed. Performance in many visual tasks typically shows improvement with age until young adulthood and then declines with aging. Using a visual search task, this study investigated whether a similar age effect on performance is present in people with central vision loss. A total of 98 participants, 37 with normal sight (NS) and 61 with visual impairment (VI), searched for targets in 150 real-world digital images. Search performance was quantified by an integrated measure combining speed and accuracy. Participant ages ranged from 5 to 74 years, visual acuity from -0.14 (20/14.5) to 1.16 logMAR (20/290), and log contrast sensitivity (CS) from 0.48 to 2.0. Data analysis was performed with participants divided into three age groups: children (aged <14 years, n = 25), young adults (aged 14 to 45 years, n = 47), and older adults (aged >45 years, n = 26). Regression (r = 0.7) revealed that CS (P < .001) and age (P = .003) were significant predictors of search performance. Performance of VI participants was normalized to the age-matched average performance of the NS group. In the VI group, it was found that children's normalized performance (52%) was better than that of both young (39%, P = .05) and older (40%, P = .048) adults. Unlike NS participants, young adults in the VI group may not have search ability superior to children with VI, despite having the same level of visual functions (quantified by visual acuity and CS). This may be because vision impairment limits the developmental acquisition of the age dividend for peak performance. Older adults in the VI group had the worst performance, indicating an interaction of aging with vision impairment.
Pupil measures of alertness and mental load
NASA Technical Reports Server (NTRS)
Backs, Richard W.; Walrath, Larry C.
1988-01-01
A study of eight adults given active and passive search tasks showed that evoked pupillary response was sensitive to information processing demands. In particular, large pupillary diameter was observed in the active search condition where subjects were actively processing information relevant to task performance, as opposed to the passive search (control) condition where subjects passively viewed the displays. However, subjects may have simply been more aroused in the active search task. Of greater importance was that larger pupillary diameter, corresponding to longer search time, was observed for noncoded than for color-coded displays in active search. In the control condition, pupil diameter was larger with the color displays. The data indicate potential usefulness of pupillary responses in evaluating the information processing requirements of visual displays.
Negative emotional stimuli reduce contextual cueing but not response times in inefficient search.
Kunar, Melina A; Watson, Derrick G; Cole, Louise; Cox, Angeline
2014-02-01
In visual search, previous work has shown that negative stimuli narrow the focus of attention and speed reaction times (RTs). This paper investigates these two effects by first asking whether negative emotional stimuli narrow the focus of attention to reduce the learning of a display context in a contextual cueing task and, second, whether exposure to negative stimuli also reduces RTs in inefficient search tasks. In Experiment 1, participants viewed either negative or neutral images (faces or scenes) prior to a contextual cueing task. In a typical contextual cueing experiment, RTs are reduced if displays are repeated across the experiment compared with novel displays that are not repeated. The results showed that a smaller contextual cueing effect was obtained after participants viewed negative stimuli than when they viewed neutral stimuli. However, in contrast to previous work, overall search RTs were not faster after viewing negative stimuli (Experiments 2 to 4). The findings are discussed in terms of the impact of emotional content on visual processing and the ability to use scene context to help facilitate search.
Poole, Bradley J; Kane, Michael J
2009-07-01
Variation in working-memory capacity (WMC) predicts individual differences in only some attention-control capabilities. Whereas higher WMC subjects outperform lower WMC subjects in tasks requiring the restraint of prepotent but inappropriate responses, and the constraint of attentional focus to target stimuli against distractors, they do not differ in prototypical visual-search tasks, even those that yield steep search slopes and engender top-down control. The present three experiments tested whether WMC, as measured by complex memory span tasks, would predict search latencies when the 1-8 target locations to be searched appeared alone, versus appearing among distractor locations to be ignored, with the latter requiring selective attentional focus. Subjects viewed target-location cues and then fixated on those locations over either long (1,500-1,550 ms) or short (300 ms) delays. Higher WMC subjects identified targets faster than did lower WMC subjects only in the presence of distractors and only over long fixation delays. WMC thus appears to affect subjects' ability to maintain a constrained attentional focus over time.
Chan, Louis K H; Hayward, William G
2009-02-01
In feature integration theory (FIT; A. Treisman & S. Sato, 1990), feature detection is driven by independent dimensional modules, and other searches are driven by a master map of locations that integrates dimensional information into salience signals. Although recent theoretical models have largely abandoned this distinction, some observed results are difficult to explain in its absence. The present study measured dimension-specific performance during detection and localization, tasks that require operation of dimensional modules and the master map, respectively. Results showed a dissociation between tasks in terms of both dimension-switching costs and cross-dimension attentional capture, reflecting a dimension-specific nature for detection tasks and a dimension-general nature for localization tasks. In a feature-discrimination task, results precluded an explanation based on response mode. These results are interpreted to support FIT's postulation that different mechanisms are involved in parallel and focal attention searches. This indicates that the FIT architecture should be adopted to explain the current results and that a variety of visual attention findings can be addressed within this framework.
Qin, Xiaoyan Angela; Koutstaal, Wilma; Engel, Stephen A
2014-05-01
Familiar items are found faster than unfamiliar ones in visual search tasks. This effect has important implications for cognitive theory, because it may reveal how mental representations of commonly encountered items are changed by experience to optimize performance. It remains unknown, however, whether everyday items with moderate levels of exposure would show benefits in visual search, and if so, what kind of experience would be required to produce them. Here, we tested whether familiar product logos were searched for faster than unfamiliar ones, and also familiarized subjects with previously unfamiliar logos. Subjects searched for preexperimentally familiar and unfamiliar logos, half of which were familiarized in the laboratory, amongst other, unfamiliar distractor logos. In three experiments, we used an N-back-like familiarization task, and in four others we used a task that asked detailed questions about the perceptual aspects of the logos. The number of familiarization exposures ranged from 30 to 84 per logo across experiments, with two experiments involving across-day familiarization. Preexperimentally familiar target logos were searched for faster than were unfamiliar, nonfamiliarized logos, by 8% on average. This difference was reliable in all seven experiments. However, familiarization had little or no effect on search speeds; its average effect was to improve search times by 0.7%, and its effect was significant in only one of the seven experiments. If priming, mere exposure, episodic memory, or relatively modest familiarity were responsible for familiarity's effects on search, then performance should have improved following familiarization. Our results suggest that the search-related advantage of familiar logos does not develop easily or rapidly.
Attentional Predictors of 5-month-olds’ Performance on a Looking A-not-B Task
Marcovitch, Stuart; Clearfield, Melissa W.; Swingler, Margaret; Calkins, Susan D.; Bell, Martha Ann
2015-01-01
In the first year of life, the ability to search for hidden objects is an indicator of object permanence and, when multiple locations are involved, executive function (i.e., inhibition, cognitive flexibility, and working memory). The current study was designed to examine attentional predictors of search in 5-month-old infants (as measured by the looking A-not-B task), and whether levels of maternal education moderated the effect of the predictors. Specifically, in a separate task, the infants were shown a unique puppet, and we measured the percentage of time attending to the puppet, as well as the length of the longest look (i.e., peak fixation) directed towards the puppet. Across the entire sample (N = 390), the percentage of time attending to the puppet was positively related to performance on the visual A-not-B task. However, for infants whose mothers had not completed college, having a shorter peak looking time (after controlling for percentage of time) was also a predictor of visual A-not-B performance. The role of attention, peak fixation and maternal education in visual search is discussed. PMID:27642263
Conjunctive visual search in individuals with and without mental retardation.
Carlin, Michael; Chrysler, Christina; Sullivan, Kate
2007-01-01
A comprehensive understanding of the basic visual and cognitive abilities of individuals with mental retardation is critical for understanding the basis of mental retardation and for the design of remediation programs. We assessed visual search abilities in individuals with mild mental retardation and in MA- and CA-matched comparison groups. Our goal was to determine the effect of decreasing target-distracter disparities on visual search efficiency. Results showed that search rates for the group with mental retardation and the MA-matched comparisons were more negatively affected by decreasing disparities than were those of the CA-matched group. The group with mental retardation and the MA-matched group performed similarly on all tasks. Implications for theory and application are discussed.
The evaluation of display symbology: A chronometric study of visual search on cathode ray tubes
NASA Technical Reports Server (NTRS)
Remington, R.; Williams, D.
1984-01-01
Three single-target visual search tasks were used to evaluate a set of CRT symbols for a helicopter traffic display. The search tasks were representative of the kinds of information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. The results show that familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols such as a nearby flashing dot or surrounding square had a greater disruptive effect on the graphic symbols than the alphanumeric characters. The results suggest that a symbol set is like a list that must be learned. Factors that affect the time to respond to items in a list, such as familiarity and visual discriminability, and the division of list items into categories, also affect the time to identify symbols.
Human Visual Search Does Not Maximize the Post-Saccadic Probability of Identifying Targets
Morvan, Camille; Maloney, Laurence T.
2012-01-01
Researchers have conjectured that eye movements during visual search are selected to minimize the number of saccades. The optimal Bayesian eye movement strategy minimizing saccades does not simply direct the eye to whichever location is judged most likely to contain the target but makes use of the entire retina as an information gathering device during each fixation. Here we show that human observers do not minimize the expected number of saccades in planning saccades in a simple visual search task composed of three tokens. In this task, the optimal eye movement strategy varied, depending on the spacing between tokens (in the first experiment) or the size of tokens (in the second experiment), and changed abruptly once the separation or size surpassed a critical value. None of our observers changed strategy as a function of separation or size. Human performance fell far short of ideal, both qualitatively and quantitatively. PMID:22319428
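The strategy trade-off described in this abstract can be illustrated with a toy calculation; the detection probabilities below are hypothetical assumptions for illustration, not the authors' model or data. Fixating one of three equally likely tokens guarantees identifying that token but leaves the other two to lower-probability peripheral vision; fixating between the tokens trades the guaranteed hit for better peripheral views of all three, and which strategy maximizes post-saccadic identification probability flips abruptly as peripheral identifiability changes with token spacing or size.

```python
def p_identify_fixate_token(q):
    """Post-saccadic identification probability when fixating one of three
    equally likely tokens: the fixated token is always identified, the
    other two only with (hypothetical) peripheral probability q."""
    return 1/3 + (2/3) * q

def p_identify_fixate_center(q_center):
    """Fixating between the tokens: all three are seen peripherally, but
    from a shorter distance, so with a higher probability q_center."""
    return q_center

# Closely spaced tokens: peripheral vision from the center is good,
# so the center fixation wins.
print(p_identify_fixate_center(0.9) > p_identify_fixate_token(0.5))  # → True

# Widely spaced tokens: peripheral identification collapses, so
# fixating a token directly wins.
print(p_identify_fixate_center(0.3) < p_identify_fixate_token(0.2))  # → True
```

Under this sketch the optimal strategy switches at the critical value q_center = 1/3 + (2/3)q, mirroring the abrupt strategy change at a critical separation or size that the study's ideal observer predicts and its human observers failed to show.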
Selective attention in anxiety: distraction and enhancement in visual search.
Rinck, Mike; Becker, Eni S; Kellermann, Jana; Roth, Walton T
2003-01-01
According to cognitive models of anxiety, anxiety patients exhibit an attentional bias towards threat, manifested as greater distractibility by threat stimuli and enhanced detection of them. Both phenomena were studied in two experiments, using a modified visual search task, in which participants were asked to find single target words (GAD-related, speech-related, neutral, or positive) hidden in matrices made up of distractor words (also GAD-related, speech-related, neutral, or positive). Generalized anxiety disorder (GAD) patients, social phobia (SP) patients afraid of giving speeches, and healthy controls participated in the visual search task. GAD patients were slowed by GAD-related distractor words but did not show statistically reliable evidence of enhanced detection of GAD-related target words. SP patients showed neither distraction nor enhancement effects. These results extend previous findings of attentional biases observed with other experimental paradigms.
Statistical patterns of visual search for hidden objects
Credidio, Heitor F.; Teixeira, Elisângela N.; Reis, Saulo D. S.; Moreira, André A.; Andrade Jr, José S.
2012-01-01
The movement of the eyes has been the subject of intensive research as a way to elucidate inner mechanisms of cognitive processes. A cognitive task that is rather frequent in our daily life is the visual search for hidden objects. Here we investigate through eye-tracking experiments the statistical properties associated with the search of target images embedded in a landscape of distractors. Specifically, our results show that the twofold process of eye movement, composed of sequences of fixations (small steps) intercalated by saccades (longer jumps), displays characteristic statistical signatures. While the saccadic jumps follow a log-normal distribution of distances, which is typical of multiplicative processes, the lengths of the smaller steps in the fixation trajectories are consistent with a power-law distribution. Moreover, the present analysis reveals a clear transition between a directional serial search to an isotropic random movement as the difficulty level of the searching task is increased. PMID:23226829
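The two distributions reported in this abstract can be told apart with a simple simulation; the parameters below are hypothetical and not fitted to the study's data. A log transform turns log-normal jump lengths into a symmetric (Gaussian) sample, whereas power-law step lengths remain strongly right-skewed even on a log scale (the log of a classical Pareto variable is exponentially distributed, with skewness 2):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, for illustration only (not fitted to the study's data).
saccades = rng.lognormal(mean=2.0, sigma=0.5, size=100_000)  # long jumps: log-normal
fix_steps = rng.pareto(a=1.5, size=100_000) + 1.0            # small steps: power law (Pareto, x_min = 1)

def log_skew(x):
    """Sample skewness of log(x): near 0 for log-normal data, large for power-law data."""
    z = np.log(x)
    z = (z - z.mean()) / z.std()
    return float(np.mean(z ** 3))

print(abs(log_skew(saccades)) < 0.1)  # log-normal: symmetric after the log transform → True
print(log_skew(fix_steps) > 1.0)      # power law: heavy tail survives the log transform → True
```

Diagnostics of this kind (alongside maximum-likelihood tail fits) are a common first check when classifying empirical step-length distributions such as the saccade and fixation trajectories analysed here.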
Reward associations impact both iconic and visual working memory.
Infanti, Elisa; Hickey, Clayton; Turatto, Massimo
2015-02-01
Reward plays a fundamental role in human behavior. A growing number of studies have shown that stimuli associated with reward become salient and attract attention. The aim of the present study was to extend these results to the investigation of iconic memory and visual working memory. In two experiments we asked participants to perform a visual-search task in which different colors of the target stimuli were paired with high or low reward. We then tested whether the pre-established feature-reward association affected performance on a subsequent visual memory task, in which no reward was provided. In this test phase participants viewed arrays of 8 objects, one of which had a unique color that could match the color associated with reward during the previous visual-search task. A probe appeared at varying intervals after stimulus offset to identify the to-be-reported item. Our results suggest that reward biases the encoding of visual information such that items characterized by a reward-associated feature interfere with mnemonic representations of other items in the test display. These results extend current knowledge regarding the influence of reward on early cognitive processes, suggesting that feature-reward associations automatically interact with the encoding and storage of visual information, both in iconic memory and visual working memory. Copyright © 2014 Elsevier Ltd. All rights reserved.
The acute effects of cocoa flavanols on temporal and spatial attention.
Karabay, Aytaç; Saija, Jefta D; Field, David T; Akyürek, Elkan G
2018-05-01
In this study, we investigated how the acute physiological effects of cocoa flavanols might result in specific cognitive changes, in particular in temporal and spatial attention. To this end, we pre-registered and implemented a randomized, double-blind, placebo- and baseline-controlled crossover design. A sample of 48 university students participated in the study and each of them completed the experimental tasks in four conditions (baseline, placebo, low-dose, and high-dose flavanol), administered in separate sessions with a 1-week washout interval. A rapid serial visual presentation task was used to test flavanol effects on temporal attention and integration, and a visual search task was similarly employed to investigate spatial attention. Results indicated that cocoa flavanols improved visual search efficiency, reflected by reduced reaction time. However, cocoa flavanols facilitated neither temporal attention nor integration, suggesting that flavanols may affect some aspects of attention but not others. Potential underlying mechanisms are discussed.
Low target prevalence is a stubborn source of errors in visual search tasks
Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour
2009-01-01
In visual search tasks, observers look for targets in displays containing distractors. The likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1–2%) than at high prevalence (50%). Unfortunately, low prevalence is characteristic of important search tasks like airport security and medical screening, where miss errors are dangerous. A series of experiments shows that this prevalence effect is very robust. In signal detection terms, the prevalence effect can be explained as a criterion shift and not a change in sensitivity. Several efforts to induce observers to adopt a better criterion fail. However, a regime of brief retraining periods with high prevalence and full feedback allows observers to hold a good criterion during periods of low prevalence with no feedback. PMID:17999575
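The criterion-shift interpretation can be made concrete with the standard signal-detection formulas, d' = z(H) - z(F) and c = -(z(H) + z(F))/2. The hit and false-alarm rates below are hypothetical, chosen only to illustrate how a large rise in miss errors can leave sensitivity nearly untouched while the criterion becomes more conservative:

```python
from scipy.stats import norm

def sdt(hit_rate, fa_rate):
    """Return sensitivity d' and criterion c from hit and false-alarm rates."""
    zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return zh - zf, -0.5 * (zh + zf)

# Hypothetical rates for illustration: at low prevalence observers respond
# "absent" more readily, so hits drop sharply while false alarms also drop.
d_high, c_high = sdt(hit_rate=0.95, fa_rate=0.10)  # 50% prevalence
d_low,  c_low  = sdt(hit_rate=0.70, fa_rate=0.01)  # 1-2% prevalence

print(round(d_high, 2), round(c_high, 2))  # d' ≈ 2.93, c ≈ -0.18
print(round(d_low, 2), round(c_low, 2))    # d' ≈ 2.85, c ≈ 0.90
```

Sensitivity barely changes (2.93 vs. 2.85), but the criterion shifts by more than a full z-unit: the pattern the paper describes.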
Temporal production and visuospatial processing.
Benuzzi, Francesca; Basso, Gianpaolo; Nichelli, Paolo
2005-12-01
Current models of prospective timing hypothesize that estimated duration is influenced either by the attentional load or by the short-term memory requirements of a concurrent nontemporal task. In the present study, we addressed this issue with four dual-task experiments. In Exp. 1, the effect of memory load on both reaction time and temporal production was proportional to the number of items of the visuospatial pattern held in memory. In Exps. 2, 3, and 4, a temporal production task was combined with two visual search tasks involving either pre-attentive or attentional processing. Visual tasks interfered with temporal production: produced intervals were lengthened proportionally to the display size. In contrast, reaction times increased with display size only when a serial, effortful search was required. It appears that memory and perceptual set size, rather than nonspecific attentional or short-term memory load, can influence prospective timing.
The remains of the trial: goal-determined inter-trial suppression of selective attention.
Lleras, Alejandro; Levinthal, Brian R; Kawahara, Jun
2009-01-01
When an observer is searching through the environment for a target, what are the consequences of not finding a target in a given environment? We examine this issue in detail and propose that the visual system systematically tags environmental information during a search, in an effort to improve performance in future search events. Information that led to search successes is positively tagged, so as to favor future deployments of attention toward that type of information, whereas information that led to search failures is negatively tagged, so as to discourage future deployments of attention toward such failed information. To study this, we use an oddball-search task, in which participants search for the one item in the display that differs from all others along one feature or that belongs to a different visual category from the other stimuli. We find that when participants perform oddball-search tasks, the absence of a target delays identification of future targets containing the feature or category that was shared by all distractors in the target-absent trial. We interpret this effect as reflecting an implicit assessment of performance: target-absent trials can be viewed as processing "failures" insofar as they do not provide the visual system with the information needed to complete the task. Here, we study the goal-oriented nature of this bias in three ways. First, we show that the direction of the bias is determined by the experimental task. Second, we show that the effect is independent of the mode of presentation of stimuli: it happens with both serial and simultaneous stimulus presentation. Third, we show that, when using categorically defined oddballs as the search stimuli (find the face among houses or vice versa), the bias generalizes to unseen members of the "failed" category.
Together, these findings support the idea that these inter-trial attentional biases arise from high-level, task-constrained, implicit assessments of performance, involving categorical associations between classes of stimuli and behavioral outcomes (success/failure), which are independent of attentional modality (temporal vs. spatial attention).
[Internet search for counseling offers for older adults suffering from visual impairment].
Himmelsbach, I; Lipinski, J; Putzke, M
2016-11-01
Visual impairment is a relevant problem of aging. In many cases promising therapeutic options exist, but patients are often left with visual deficits that require a high degree of individualized counseling. This article analyzed which counseling offers can be found by patients and relatives using simple, routine internet searches. Analyses were performed using colloquial search terms in the search engine Google in order to find counseling options for elderly people with visual impairment available via the internet. With this strategy 189 counseling offers were found, which showed very heterogeneous regional distribution. The counseling options found on the internet commonly address topics such as therapeutic interventions or visual aids, corresponding to the rehabilitation professions most present on the internet, such as ophthalmologists and opticians. Regarding content addressing psychosocial issues and help with daily tasks, self-help and support groups offer the most differentiated and broadest spectrum. Support offers for daily living tasks and psychosocial counseling from social providers were more difficult to find with these search terms, despite a high presence on the internet. There are a large number of providers of counseling and consulting for older persons with visual impairment. In order to be found more easily by patients and to be recommended more often by ophthalmologists and general practitioners, the internet presence of providers must be improved, especially that of providers of daily living and psychosocial support offers.
Evidence of different underlying processes in pattern recall and decision-making.
Gorman, Adam D; Abernethy, Bruce; Farrow, Damian
2015-01-01
The visual search characteristics of expert and novice basketball players were recorded during pattern recall and decision-making tasks to determine whether the two tasks shared common visual-perceptual processing strategies. The order in which participants entered the pattern elements in the recall task was also analysed to further examine the nature of the visual-perceptual strategies and the relative emphasis placed upon particular pattern features. The experts demonstrated superior performance across the recall and decision-making tasks [see also Gorman, A. D., Abernethy, B., & Farrow, D. (2012). Classical pattern recall tests and the prospective nature of expert performance. The Quarterly Journal of Experimental Psychology, 65, 1151-1160; Gorman, A. D., Abernethy, B., & Farrow, D. (2013a). Is the relationship between pattern recall and decision-making influenced by anticipatory recall? The Quarterly Journal of Experimental Psychology, 66, 2219-2236], but a number of significant differences in the visual search data highlighted disparities in the processing strategies, suggesting that recall skill may utilize different underlying visual-perceptual processes than those required for accurate decision-making performance in the natural setting. Performance on the recall task was characterized by a proximal-to-distal order of entry of the pattern elements, with participants tending to enter the players located closest to the ball carrier earlier than those located more distal to the ball carrier. The results provide further evidence of the underlying perceptual processes employed by experts when extracting visual information from complex and dynamic patterns.
Perceptual load corresponds with factors known to influence visual search.
Roper, Zachary J J; Cosman, Joshua D; Vecera, Shaun P
2013-10-01
One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a noncircular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased, irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spillover to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. We conclude that rather than being arbitrarily defined, perceptual load might be defined by well-characterized, continuous factors that influence visual search. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Nakashima, Ryoichi; Shioiri, Satoshi
2014-01-01
Why do we frequently fixate an object of interest presented peripherally by moving our head as well as our eyes, even when we are capable of fixating the object with an eye movement alone (lateral viewing)? Studies of eye-head coordination for gaze shifts have suggested that the degree of eye-head coupling could be determined by an unconscious weighing of the motor costs and benefits of executing a head movement. The present study investigated visual perceptual effects of head direction as an additional factor impacting on a cost-benefit organization of eye-head control. Three experiments using visual search tasks were conducted, manipulating eye direction relative to head orientation (front or lateral viewing). Results show that lateral viewing increased the time required to detect a target in a search for the letter T among letter L distractors, a serial attentive search task, but not in a search for T among letter O distractors, a parallel preattentive search task (Experiment 1). The interference could be attributed neither to a deleterious effect of lateral gaze on the accuracy of saccadic eye movements, nor to potentially problematic optical effects of binocular lateral viewing, because the effect of head direction was obtained under conditions in which the task was accomplished without saccades (Experiment 2), and during monocular viewing (Experiment 3). These results suggest that a difference between the head and eye directions interferes with visual processing, and that the interference can be explained by the modulation of attention by the relative positions of the eyes and head (or head direction). PMID:24647634
An Empirical Study on Using Visual Embellishments in Visualization.
Borgo, R; Abdul-Rahman, A; Mohamed, F; Grant, P W; Reppa, I; Floridi, L; Chen, Min
2012-12-01
In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, the evidence for the benefits of using rhetorical illustrations or embellishments in visualization has so far been inconclusive. In this work, we report an empirical study to evaluate hypotheses that visual embellishments may aid memorization, visual search and concept comprehension. One major departure from related experiments in the literature is that we make use of a dual-task methodology in our experiment. This design offers an abstraction of typical situations where viewers do not have their full attention focused on visualization (e.g., in meetings and lectures). The secondary task introduces "divided attention", and makes the effects of visual embellishments more observable. In addition, it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to the benefits of visual embellishments in helping participants grasp key concepts from visualization.
Action Planning Mediates Guidance of Visual Attention from Working Memory.
Feldmann-Wüstefeld, Tobias; Schubö, Anna
2015-01-01
Visual search is impaired when a salient task-irrelevant stimulus is presented together with the target. Recent research has shown that this attentional capture effect is enhanced when the salient stimulus matches working memory (WM) content, arguing in favor of attention guidance from WM. Visual attention was also shown to be closely coupled with action planning. Preparing a movement renders action-relevant perceptual dimensions more salient and thus increases search efficiency for stimuli sharing that dimension. The present study aimed at revealing common underlying mechanisms for selective attention, WM, and action planning. Participants both prepared a specific movement (grasping or pointing) and memorized a color hue. Before the movement was executed towards an object of the memorized color, a visual search task (additional singleton) was performed. Results showed that distraction from target was more pronounced when the additional singleton had a memorized color. This WM-guided attention deployment was more pronounced when participants prepared a grasping movement. We argue that preparing a grasping movement mediates attention guidance from WM content by enhancing representations of memory content that matches the distractor shape (i.e., circles), thus encouraging attentional capture by circle distractors of the memorized color. We conclude that templates for visual search, action planning, and WM compete for resources and thus cause interferences. PMID:26171241
Visual search by chimpanzees (Pan): assessment of controlling relations.
Tomonaga, M
1995-01-01
Three experimentally sophisticated chimpanzees (Pan), Akira, Chloe, and Ai, were trained on visual search performance using a modified multiple-alternative matching-to-sample task in which a sample stimulus was followed by the search display containing one target identical to the sample and several uniform distractors (i.e., negative comparison stimuli were identical to each other). After they acquired this task, they were tested for transfer of visual search performance to trials in which the sample was not followed by the uniform search display (odd-item search). Akira showed positive transfer of visual search performance to odd-item search even when the display size (the number of stimulus items in the search display) was small, whereas Chloe and Ai showed a transfer only when the display size was large. Chloe and Ai used some nonrelational cues such as perceptual isolation of the target among uniform distractors (so-called pop-out). In addition to the odd-item search test, various types of probe trials were presented to clarify the controlling relations in multiple-alternative matching to sample. Akira showed a decrement of accuracy as a function of the display size when the search display was nonuniform (i.e., each "distractor" stimulus was not the same), whereas Chloe and Ai showed perfect performance. Furthermore, when the sample was identical to the uniform distractors in the search display, Chloe and Ai never selected an odd-item target, but Akira selected it when the display size was large. These results indicated that Akira's behavior was controlled mainly by relational cues of target-distractor oddity, whereas an identity relation between the sample and the target strongly controlled the performance of Chloe and Ai. PMID:7714449
On the Temporal Relation of Top-Down and Bottom-Up Mechanisms during Guidance of Attention
ERIC Educational Resources Information Center
Wykowska, Agnieszka; Schubo, Anna
2010-01-01
Two mechanisms are said to be responsible for guiding focal attention in visual selection: bottom-up, saliency-driven capture and top-down control. These mechanisms were examined with a paradigm that combined a visual search task with postdisplay probe detection. Two SOAs between the search display and probe onsets were introduced to investigate…
ERIC Educational Resources Information Center
Lorenzo-Lopez, L.; Gutierrez, R.; Moratti, S.; Maestu, F.; Cadaveira, F.; Amenedo, E.
2011-01-01
Recently, an event-related potential (ERP) study (Lorenzo-Lopez et al., 2008) provided evidence that normal aging significantly delays and attenuates the electrophysiological correlate of the allocation of visuospatial attention (N2pc component) during a feature-detection visual search task. To further explore the effects of normal aging on the…
Interaction between numbers and size during visual search.
Krause, Florian; Bekkering, Harold; Pratt, Jay; Lindemann, Oliver
2017-05-01
The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit numbers. The relative numerical size of the digits was varied, such that the target item was either among the numerically large or small numbers in the search display and the relation between numerical and physical size was either congruent or incongruent. Perceptual differences of the stimuli were controlled by a condition in which participants had to search for a differently coloured target item with the same physical size, and by the use of LCD-style numbers that were matched in visual similarity by shape transformations. The results of all three experiments consistently revealed that detecting a physically large target item is significantly faster when the numerical size of the target item is large as well (congruent), compared to when it is small (incongruent). This novel finding of a size congruity effect in visual search demonstrates an interaction between numerical and physical size in an experimental setting beyond typically used binary comparison tasks, and provides important new evidence for the notion of shared cognitive codes for numbers and sensorimotor magnitudes. Theoretical consequences for recent models on attention, magnitude representation and their interactions are discussed.
Object based implicit contextual learning: a study of eye movements.
van Asselen, Marieke; Sampaio, Joana; Pina, Ana; Castelo-Branco, Miguel
2011-02-01
Implicit contextual cueing refers to a top-down mechanism in which visual search is facilitated by learned contextual features. In the current study we aimed to investigate the mechanism underlying implicit contextual learning using object information as a contextual cue. Therefore, we measured eye movements during an object-based contextual cueing task. We demonstrated that visual search is facilitated by repeated object information and that this reduction in response times is associated with shorter fixation durations. This indicates that by memorizing associations between objects in our environment we can recognize objects faster, thereby facilitating visual search.
Evolutionary relevance facilitates visual information processing.
Jackson, Russell E; Calvillo, Dusti P
2013-11-03
Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.
Productivity associated with visual status of computer users.
Daum, Kent M; Clore, Katherine A; Simms, Suzanne S; Vesely, Jon W; Wilczek, Dawn D; Spittle, Brian M; Good, Greg W
2004-01-01
The aim of this project is to examine the potential connection between the astigmatic refractive corrections of subjects using computers and their productivity and comfort. We hypothesize that improving the visual status of subjects using computers results in greater productivity, as well as improved visual comfort. Inclusion criteria required subjects 19 to 30 years of age with complete vision examinations before being enrolled. Using a double-masked, placebo-controlled, randomized design, subjects completed three experimental tasks calculated to assess the effects of refractive error on productivity (time to completion and the number of errors) at a computer. The tasks resembled those commonly undertaken by computer users and involved visual search tasks of: (1) counties and populations; (2) nonsense word search; and (3) a modified text-editing task. Estimates of the productivity effect on time to completion ranged from a minimum of 2.5% up to 28.7% with a 2 D cylinder miscorrection. Assuming a conservative estimate of an overall 2.5% increase in productivity with appropriate astigmatic refractive correction, our data suggest a favorable cost-benefit ratio of at least 2.3 for the visual correction of an employee (total cost 268 dollars) with a salary of 25,000 dollars per year. We conclude that astigmatic refractive error affected both productivity and visual comfort under the conditions of this experiment. These data also suggest a favorable cost-benefit ratio for employers who provide computer-specific eyewear to their employees.
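The reported cost-benefit ratio follows directly from the figures quoted in the abstract; a quick check:

```python
salary = 25_000       # annual salary in dollars, as quoted in the abstract
gain = 0.025          # conservative 2.5% productivity increase
eyewear_cost = 268    # total cost of the visual correction, in dollars

benefit = salary * gain          # dollars of productivity recovered per year
ratio = benefit / eyewear_cost
print(round(ratio, 1))           # prints 2.3, the ratio reported in the abstract
```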
Frontal and parietal theta burst TMS impairs working memory for visual-spatial conjunctions
Morgan, Helen M.; Jackson, Margaret C.; van Koningsbruggen, Martijn G.; Shapiro, Kimron L.; Linden, David E.J.
2013-01-01
In tasks that selectively probe visual or spatial working memory (WM) frontal and posterior cortical areas show a segregation, with dorsal areas preferentially involved in spatial (e.g. location) WM and ventral areas in visual (e.g. object identity) WM. In a previous fMRI study [1], we showed that right parietal cortex (PC) was more active during WM for orientation, whereas left inferior frontal gyrus (IFG) was more active during colour WM. During WM for colour-orientation conjunctions, activity in these areas was intermediate to the level of activity for the single task preferred and non-preferred information. To examine whether these specialised areas play a critical role in coordinating visual and spatial WM to perform a conjunction task, we used theta burst transcranial magnetic stimulation (TMS) to induce a functional deficit. Compared to sham stimulation, TMS to right PC or left IFG selectively impaired WM for conjunctions but not single features. This is consistent with findings from visual search paradigms, in which frontal and parietal TMS selectively affects search for conjunctions compared to single features, and with combined TMS and functional imaging work suggesting that parietal and frontal regions are functionally coupled in tasks requiring integration of visual and spatial information. Our results thus elucidate mechanisms by which the brain coordinates spatially segregated processing streams and have implications beyond the field of working memory. PMID:22483548
When do I quit? The search termination problem in visual search.
Wolfe, Jeremy M
2012-01-01
In visual search tasks, observers look for targets in displays or scenes containing distracting, non-target items. Most of the research on this topic has concerned the finding of those targets. Search termination is a less thoroughly studied topic. When is it time to abandon the current search? The answer is fairly straightforward when the one and only target has been found (There are my keys.). The problem is more vexed if nothing has been found (When is it time to stop looking for a weapon at the airport checkpoint?) or when the number of targets is unknown (Have we found all the tumors?). This chapter reviews the development of ideas about quitting time in visual search and offers an outline of our current theory.
Neural Correlates of Changes in a Visual Search Task due to Cognitive Training in Seniors
Wild-Wall, Nele; Falkenstein, Michael; Gajewski, Patrick D.
2012-01-01
This study aimed to elucidate the underlying neural sources of near transfer after a multidomain cognitive training in older participants in a visual search task. Participants were randomly assigned to a social control, a no-contact control and a training group, the latter receiving a 4-month paper-pencil and PC-based, trainer-guided cognitive intervention. All participants were tested in sessions before and after the intervention with a conjunction visual search task. Performance and event-related potentials (ERPs) suggest that the cognitive training improved feature processing of the stimuli, which was expressed in an increased rate of target detection compared to the control groups. This was paralleled by enhanced amplitudes of the frontal P2 in the ERP and by higher activation in lingual and parahippocampal brain areas which are discussed to support visual feature processing. Enhanced N1 and N2 potentials in the ERP for nontarget stimuli after cognitive training additionally suggest improved attention and subsequent processing of arrays which were not immediately recognized as targets. Possible test repetition effects were confined to processes of stimulus categorisation, as suggested by the P3b potential. The results show neurocognitive plasticity in aging after a broad cognitive training and allow pinpointing the functional loci of effects induced by cognitive training. PMID:23029625
Display size effects in visual search: analyses of reaction time distributions as mixtures.
Reynolds, Ann; Miller, Jeff
2009-05-01
In a reanalysis of data from Cousineau and Shiffrin (2004) and two new visual search experiments, we used a likelihood ratio test to examine the full distributions of reaction time (RT) for evidence that the display size effect is a mixture-type effect that occurs on only a proportion of trials, leaving RT in the remaining trials unaffected, as is predicted by serial self-terminating search models. Experiment 1 was a reanalysis of Cousineau and Shiffrin's data, for which a mixture effect had previously been established by a bimodal distribution of RTs, and the results confirmed that the likelihood ratio test could also detect this mixture. Experiment 2 applied the likelihood ratio test within a more standard visual search task with a relatively easy target/distractor discrimination, and Experiment 3 applied it within a target identification search task with the same types of stimuli. Neither of these experiments provided any evidence for the mixture-type display size effect predicted by serial self-terminating search models. Overall, these results suggest that serial self-terminating search models may generally be applicable only with relatively difficult target/distractor discriminations, and then only for some participants. In addition, they further illustrate the utility of analysing full RT distributions in addition to mean RT.
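The core of this likelihood-ratio approach can be sketched in a few lines. The sketch below is illustrative only: it uses a simple two-component Gaussian mixture and simulated data (not the authors' actual RT model or data), fits a single distribution and a mixture to the same RTs by maximum likelihood, and computes the likelihood-ratio statistic favouring the mixture.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Simulated RTs (ms): on 40% of trials an extra search stage adds time,
# producing a two-component mixture; the remaining trials are unaffected.
rts = np.concatenate([rng.normal(500, 50, 600), rng.normal(650, 50, 400)])

def nll_single(params, x):
    """Negative log-likelihood of a single Gaussian."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    return -np.sum(stats.norm.logpdf(x, mu, sigma))

def nll_mixture(params, x):
    """Negative log-likelihood of a two-component Gaussian mixture."""
    p, mu1, mu2, sigma = params
    if not 0.0 < p < 1.0 or sigma <= 0:
        return np.inf
    pdf = p * stats.norm.pdf(x, mu1, sigma) + (1 - p) * stats.norm.pdf(x, mu2, sigma)
    return -np.sum(np.log(pdf + 1e-300))

res1 = optimize.minimize(nll_single, [550, 80], args=(rts,), method="Nelder-Mead")
res2 = optimize.minimize(nll_mixture, [0.5, 480, 680, 60], args=(rts,), method="Nelder-Mead")

# Likelihood-ratio statistic: large values favour the mixture account.
lr = 2 * (res1.fun - res2.fun)
print(f"LR statistic = {lr:.1f}")
```

In practice the statistic would be compared against an appropriate null distribution (mixture tests violate standard chi-square regularity conditions), which is why a dedicated likelihood ratio test is needed rather than a textbook chi-square cutoff.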
Megreya, Ahmed M.; Bindemann, Markus
2017-01-01
It is unresolved whether the permanent auditory deprivation that deaf people experience leads to the enhanced visual processing of faces. The current study explored this question with a matching task in which observers searched for a target face among a concurrent lineup of ten faces. This was compared with a control task in which the same stimuli were presented upside down, to disrupt typical face processing, and an object matching task. A sample of young-adolescent deaf observers performed with higher accuracy than hearing controls across all of these tasks. These results clarify previous findings and provide evidence for a general visual processing advantage in deaf observers rather than a face-specific effect. PMID:28117407
Conveying Clinical Reasoning Based on Visual Observation via Eye-Movement Modelling Examples
ERIC Educational Resources Information Center
Jarodzka, Halszka; Balslev, Thomas; Holmqvist, Kenneth; Nystrom, Marcus; Scheiter, Katharina; Gerjets, Peter; Eika, Berit
2012-01-01
Complex perceptual tasks, like clinical reasoning based on visual observations of patients, require not only conceptual knowledge about diagnostic classes but also the skills to visually search for symptoms and interpret these observations. However, medical education so far has focused very little on how visual observation skills can be…
Implicit Object Naming in Visual Search: Evidence from Phonological Competition
Walenchok, Stephen C.; Hout, Michael C.; Goldinger, Stephen D.
2016-01-01
During visual search, people are distracted by objects that visually resemble search targets; search is impaired when targets and distractors share overlapping features. In this study, we examined whether a nonvisual form of similarity, overlapping object names, can also affect search performance. In three experiments, people searched for images of real-world objects (e.g., a beetle) among items whose names either all shared the same phonological onset (/bi/), or were phonologically varied. Participants either searched for one or three potential targets per trial, with search targets designated either visually or verbally. We examined standard visual search (Experiments 1 and 3) and a self-paced serial search task wherein participants manually rejected each distractor (Experiment 2). We hypothesized that people would maintain visual templates when searching for single targets, but would rely more on object names when searching for multiple items and when targets were verbally cued. This reliance on target names would make performance susceptible to interference from similar-sounding distractors. Experiments 1 and 2 showed the predicted interference effect in conditions with high memory load and verbal cues. In Experiment 3, eye-movement results showed that phonological interference resulted from small increases in dwell time to all distractors. The results suggest that distractor names are implicitly activated during search, slowing attention disengagement when targets and distractors share similar names. PMID:27531018
Task-relevant perceptual features can define categories in visual memory too.
Antonelli, Karla B; Williams, Carrick C
2017-11-01
Although Konkle, Brady, Alvarez, and Oliva (2010, Journal of Experimental Psychology: General, 139(3), 558) claim that visual long-term memory (VLTM) is organized on underlying conceptual, not perceptual, information, visual memory results from visual search tasks are not well explained by this theory. We hypothesized that when viewing an object, any task-relevant visual information is critical to the organizational structure of VLTM. In two experiments, we examined the organization of VLTM by measuring the amount of retroactive interference created by objects possessing different combinations of task-relevant features. Based on task instructions, only the conceptual category was task relevant or both the conceptual category and a perceptual object feature were task relevant. Findings indicated that when made task relevant, perceptual object feature information, along with conceptual category information, could affect memory organization for objects in VLTM. However, when perceptual object feature information was task irrelevant, it did not contribute to memory organization; instead, memory defaulted to being organized around conceptual category information. These findings support the theory that a task-defined organizational structure is created in VLTM based on the relevance of particular object features and information.
Priming and the guidance by visual and categorical templates in visual search.
Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L
2014-01-01
Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether, the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if priming is controlled for, categorical and visual templates enhance search guidance to a similar degree.
Visual search attentional bias modification reduced social phobia in adolescents.
De Voogd, E L; Wiers, R W; Prins, P J M; Salemink, E
2014-06-01
An attentional bias for negative information plays an important role in the development and maintenance of (social) anxiety and depression, which are highly prevalent in adolescence. Attention Bias Modification (ABM) might be an interesting tool in the prevention of emotional disorders. The current study investigated whether visual search ABM might affect attentional bias and emotional functioning in adolescents. A visual search task was used as a training paradigm; participants (n = 16 adolescents, aged 13-16) had to repeatedly identify the only smiling face in a 4 × 4 matrix of negative emotional faces, while participants in the control condition (n = 16) were randomly allocated to one of three placebo training versions. An assessment version of the task was developed to directly test whether attentional bias changed due to the training. Self-reported anxiety and depressive symptoms and self-esteem were measured pre- and post-training. After two sessions of training, the ABM group showed a significant decrease in attentional bias for negative information and self-reported social phobia, while the control group did not. There were no effects of training on depressive mood or self-esteem. No correlation between attentional bias and social phobia was found, which raises questions about the validity of the attentional bias assessment task. Also, the small sample size precludes strong conclusions. Visual search ABM might be beneficial in changing attentional bias and social phobia in adolescents, but further research with larger sample sizes and longer follow-up is needed. Copyright © 2013 Elsevier Ltd. All rights reserved.
Impaired Filtering of Behaviourally Irrelevant Visual Information in Dyslexia
ERIC Educational Resources Information Center
Roach, Neil W.; Hogben, John H.
2007-01-01
A recent proposal suggests that dyslexic individuals suffer from attentional deficiencies, which impair the ability to selectively process incoming visual information. To investigate this possibility, we employed a spatial cueing procedure in conjunction with a single fixation visual search task measuring thresholds for discriminating the…
How task demands influence scanpath similarity in a sequential number-search task.
Dewhurst, Richard; Foulsham, Tom; Jarodzka, Halszka; Johansson, Roger; Holmqvist, Kenneth; Nyström, Marcus
2018-06-07
More and more researchers are considering the omnibus eye movement sequence-the scanpath-in their studies of visual and cognitive processing (e.g. Hayes, Petrov, & Sederberg, 2011; Madsen, Larson, Loschky, & Rebello, 2012; Ni et al., 2011; von der Malsburg & Vasishth, 2011). However, it remains unclear how recent methods for comparing scanpaths perform in experiments producing variable scanpaths, and whether these methods supplement more traditional analyses of individual oculomotor statistics. We address this problem for MultiMatch (Jarodzka et al., 2010; Dewhurst et al., 2012), evaluating its performance with a visual search-like task in which participants must fixate a series of target numbers in a prescribed order. This task should produce predictable sequences of fixations and thus provide a testing ground for scanpath measures. Task difficulty was manipulated by making the targets more or less visible through changes in font and the presence of distractors or visual noise. These changes in task demands led to slower search and more fixations. Importantly, they also resulted in a reduction in between-subjects scanpath similarity, demonstrating that participants' gaze patterns became more heterogeneous in terms of saccade length and angle, and fixation position. This implies a divergent strategy or random component to eye-movement behaviour which increases as the task becomes more difficult. Interestingly, the duration of fixations along aligned vectors showed the opposite pattern, becoming more similar between observers in 2 of the 3 difficulty manipulations. This provides important information for vision scientists who may wish to use scanpath metrics to quantify variations in gaze across a spectrum of perceptual and cognitive tasks. Copyright © 2018 Elsevier Ltd. All rights reserved.
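MultiMatch itself aligns scanpaths as sequences of geometric saccade vectors and scores them on several dimensions; as a rough illustration of what a scanpath similarity score measures, the classic string-edit baseline below bins fixations into grid cells and returns 1 minus the normalized Levenshtein distance between the two cell sequences. The function names, grid size, and coordinates are illustrative assumptions, not part of MultiMatch.

```python
def to_regions(fixations, cell=100):
    """Map (x, y) fixation coordinates to grid-cell labels."""
    return [(int(x // cell), int(y // cell)) for x, y in fixations]

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def scanpath_similarity(fix_a, fix_b, cell=100):
    """1.0 = identical region sequences, 0.0 = maximally different."""
    a, b = to_regions(fix_a, cell), to_regions(fix_b, cell)
    return 1 - levenshtein(a, b) / max(len(a), len(b), 1)

# Two observers scanning the same three screen regions in the same order.
s1 = [(50, 50), (250, 60), (450, 300)]
s2 = [(60, 40), (240, 80), (460, 310)]
print(scanpath_similarity(s1, s2))  # identical cell sequences -> 1.0
```

Vector-based methods such as MultiMatch were developed precisely because this kind of region-string comparison is sensitive to the arbitrary grid and ignores saccade geometry and fixation durations.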
Adding statistical regularity results in a global slowdown in visual search.
Vaskevich, Anna; Luria, Roy
2018-05-01
Current statistical learning theories predict that embedding implicit regularities within a task should further improve online performance, beyond general practice. We challenged this assumption by contrasting performance in a visual search task containing either a consistent-mapping (regularity) condition, a random-mapping condition, or both conditions, mixed. Surprisingly, performance in a random visual search, without any regularity, was better than performance in a mixed design search that contained a beneficial regularity. This result was replicated using different stimuli and different regularities, suggesting that mixing consistent and random conditions leads to an overall slowing down of performance. Relying on the predictive-processing framework, we suggest that this global detrimental effect depends on the validity of the regularity: when its predictive value is low, as it is in the case of a mixed design, reliance on all prior information is reduced, resulting in a general slowdown. Our results suggest that our cognitive system does not maximize speed, but rather continues to gather and implement statistical information at the expense of a possible slowdown in performance. Copyright © 2018 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Brockmole, James R.; Boot, Walter R.
2009-01-01
Distinctive aspects of a scene can capture attention even when they are irrelevant to one's goals. The authors address whether visually unique, unexpected, but task-irrelevant features also tend to hold attention. Observers searched through displays in which the color of each item was irrelevant. At the start of search, all objects changed color.…
Robertson, Kayela; Schmitter-Edgecombe, Maureen
2017-01-01
Impairments in attention following traumatic brain injury (TBI) can significantly impact recovery and rehabilitation effectiveness. This study investigated the multi-faceted construct of selective attention following TBI, highlighting the differences on visual nonsearch (focused attention) and search (divided attention) tasks. Participants were 30 individuals with moderate to severe TBI who were tested acutely (i.e., following emergence from post-traumatic amnesia) and 30 age- and education-matched controls. Participants were presented with visual displays that contained either two or eight items. In the focused attention, nonsearch condition, the location of the target (if present) was cued with a peripheral arrow prior to presentation of the visual displays. In the divided attention, search condition, no spatial cue was provided prior to presentation of the visual displays. The results revealed intact focused, nonsearch, attention abilities in the acute phase of TBI recovery. In contrast, when no spatial cue was provided (divided attention condition), participants with TBI demonstrated slower visual search compared to the control group. The results of this study suggest that capitalizing on intact focused attention abilities by allocating attention during cognitively demanding tasks may help to reduce mental workload and improve rehabilitation effectiveness.
Laudate, Thomas M.; Neargarder, Sandy; Dunne, Tracy E.; Sullivan, Karen D.; Joshi, Pallavi; Gilmore, Grover C.; Riedel, Tatiana M.; Cronin-Golomb, Alice
2011-01-01
External support may improve task performance regardless of an individual’s ability to compensate for cognitive deficits through internally-generated mechanisms. We investigated whether performance of a complex, familiar visual search task (the game of bingo) could be enhanced in groups with suboptimal vision by providing external support through manipulation of task stimuli. Participants were 19 younger adults, 14 individuals with probable Alzheimer’s disease (AD), 13 AD-matched healthy adults, 17 non-demented individuals with Parkinson’s disease (PD), and 20 PD-matched healthy adults. We varied stimulus contrast, size, and visual complexity during game play. The externally-supported interventions of increased stimulus size and decreased complexity resulted in improved performance in all groups. The AD group also obtained benefit from increased contrast, presumably by compensating for their contrast sensitivity deficit. The general finding of improved performance across healthy and afflicted groups suggests the value of visual support as an easy-to-apply intervention to enhance cognitive performance. PMID:22066941
Psychophysical and perceptual performance in a simulated-scotoma model of human eye injury
NASA Astrophysics Data System (ADS)
Brandeis, R.; Egoz, I.; Peri, D.; Sapiens, N.; Turetz, J.
2008-02-01
Macular scotomas, affecting visual functioning, characterize many eye and neurological diseases such as AMD, diabetes mellitus, multiple sclerosis, and macular hole. In this work, foveal visual field defects were modeled, and their effects were evaluated on spatial contrast sensitivity and a task of stimulus detection and aiming. The modeled occluding scotomas, of different sizes, were superimposed on the stimuli presented on the computer display, and were stabilized on the retina using a mono Purkinje eye-tracker. Spatial contrast sensitivity was evaluated using square-wave grating stimuli, whose contrast thresholds were measured using the method of constant stimuli with "catch trials". The detection task consisted of a triple conjunctive visual search display of size (in visual angle), contrast, and background (simple, low-level features vs. complex, high-level features). Search/aiming accuracy as well as reaction time (RT) measures were used for performance evaluation. Artificially generated scotomas suppressed spatial contrast sensitivity in a size-dependent manner, similar to previous studies. The deprivation effect was dependent on spatial frequency, consistent with retinal inhomogeneity models. Stimulus detection was slowed more in the complex-background search situation than in the simple-background one. Detection speed depended on scotoma size and stimulus size. In contrast, visually guided aiming was more sensitive to the scotoma effect in the simple-background search situation than in the complex-background one. Both stimulus-aiming RT and accuracy (precision targeting) were impaired as a function of scotoma size and stimulus size. The data can be explained by models distinguishing between saliency-based, parallel, and serial search processes guiding visual attention, which are supported by underlying retinal as well as neural mechanisms.
Dissociable Roles of Different Types of Working Memory Load in Visual Detection
Konstantinou, Nikos; Lavie, Nilli
2013-01-01
We contrasted the effects of different types of working memory (WM) load on detection. Considering the sensory-recruitment hypothesis of visual short-term memory (VSTM) within load theory (e.g., Lavie, 2010) led us to predict that VSTM load would reduce visual-representation capacity, thus leading to reduced detection sensitivity during maintenance, whereas load on WM cognitive control processes would reduce priority-based control, thus leading to enhanced detection sensitivity for a low-priority stimulus. During the retention interval of a WM task, participants performed a visual-search task while also asked to detect a masked stimulus in the periphery. Loading WM cognitive control processes (with the demand to maintain a random digit order [vs. fixed in conditions of low load]) led to enhanced detection sensitivity. In contrast, loading VSTM (with the demand to maintain the color and positions of six squares [vs. one in conditions of low load]) reduced detection sensitivity, an effect comparable with that found for manipulating perceptual load in the search task. The results confirmed our predictions and established a new functional dissociation between the roles of different types of WM load in the fundamental visual perception process of detection. PMID:23713796
Krummenacher, Joseph; Müller, Hermann J; Zehetleitner, Michael; Geyer, Thomas
2009-03-01
Two experiments compared reaction times (RTs) in visual search for singleton feature targets defined, variably across trials, in either the color or the orientation dimension. Experiment 1 required observers to simply discern target presence versus absence (simple-detection task); Experiment 2 required them to respond to a detection-irrelevant form attribute of the target (compound-search task). Experiment 1 revealed a marked dimensional intertrial effect of 34 ms for a target defined in a changed versus a repeated dimension, and an intertrial target distance effect, with a 4-ms increase in RTs (per unit of distance) as the separation of the current relative to the preceding target increased. Conversely, in Experiment 2, the dimension change effect was markedly reduced (11 ms), while the intertrial target distance effect was markedly increased (11 ms per unit of distance). The results suggest that dimension change/repetition effects are modulated by the amount of attentional focusing required by the task, with space-based attention altering the integration of dimension-specific feature contrast signals at the level of the overall-saliency map.
Performance characteristics of a visual-search human-model observer with sparse PET image data
NASA Astrophysics Data System (ADS)
Gifford, Howard C.
2012-02-01
As predictors of human performance in detection-localization tasks, statistical model observers can have problems with tasks that are primarily limited by target contrast or structural noise. Model observers with a visual-search (VS) framework may provide a more reliable alternative. This framework provides for an initial holistic search that identifies suspicious locations for analysis by a statistical observer. A basic VS observer for emission tomography focuses on hot "blobs" in an image and uses a channelized nonprewhitening (CNPW) observer for analysis. In [1], we investigated this model for a contrast-limited task with SPECT images; herein, a statistical-noise-limited task involving PET images is considered. An LROC study used 2D image slices with liver, lung, and soft-tissue tumors. Human and model observers read the images in coronal, sagittal, and transverse display formats. The study thus measured the detectability of tumors in a given organ as a function of display format. The model observers were applied under several task variants that tested their response to structural noise both at the organ boundaries alone and over the organs as a whole. As measured by correlation with the human data, the VS observer outperformed the CNPW scanning observer.
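A channelized nonprewhitening observer computes its test statistic by correlating the channel outputs of the image with the channelized expected signal, without prewhitening by the noise covariance. The numpy sketch below is a minimal illustration under assumed radially symmetric Gaussian channel profiles and an assumed Gaussian signal; it is not the authors' implementation, and real CNPW observers typically use standardized channel sets.

```python
import numpy as np

def gaussian_channels(n_pix, widths=(2, 4, 8, 16)):
    """Assumed radially symmetric Gaussian channel profiles (illustrative)."""
    y, x = np.mgrid[:n_pix, :n_pix] - n_pix // 2
    r2 = x**2 + y**2
    U = np.stack([np.exp(-r2 / (2.0 * w * w)).ravel() for w in widths], axis=1)
    return U / np.linalg.norm(U, axis=0)  # columns are unit-norm channels

def cnpw_statistic(image, signal, U):
    """CNPW test statistic: channelized template applied without prewhitening."""
    v_img = U.T @ image.ravel()   # channel outputs of the image
    w = U.T @ signal.ravel()      # channelized template (no noise covariance)
    return float(w @ v_img)

n = 32
U = gaussian_channels(n)
yy, xx = np.mgrid[:n, :n] - n // 2
signal = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))  # assumed Gaussian signal profile
noise = np.random.default_rng(1).normal(0.0, 0.5, (n, n))

# With the same noise field, adding the signal must raise the statistic.
print(cnpw_statistic(signal + noise, signal, U) > cnpw_statistic(noise, signal, U))
```

In the VS framework described above, a statistic like this would be evaluated only at the suspicious "blob" locations selected by the holistic search stage, rather than scanned exhaustively over the image.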
What is the context of contextual cueing?
Makovski, Tal
2016-12-01
People have a powerful ability to extract regularities from noisy environments and to utilize this knowledge to assist in visual search. Extensive research has shown that this ability, termed contextual cueing (CC), is robust and ubiquitous, but it is still unclear what exactly is the context that is being learned. Researchers have typically focused on how people learn spatial configuration regularities and have hence used simplified, meaningless search stimuli. Here, observers performed visual search tasks using images of real-world objects. The results revealed that, contrary to past findings, the repetition of either arbitrary spatial information or identity information was not sufficient to produce context learning. Instead, learning was found only when both types of information were repeated together. These results were further replicated in hybrid search tasks, in which subjects looked for multiple target templates. Together, these data suggest that CC is more limited than typically assumed, yet this learning is highly robust.
Richard's, María M; Introzzi, Isabel; Zamora, Eliana; Vernucci, Santiago
2017-01-01
Inhibition is one of the main executive functions, because of its fundamental role in cognitive and social development. Given the importance of reliable and computerized measurements to assess inhibitory performance, this research intends to analyze the internal and external validity criteria of a computerized conjunction search task, to evaluate the role of perceptual inhibition. A sample of 41 children (21 females and 20 males), aged between 6 and 11 years old (M = 8.49, SD = 1.47), purposively selected from a privately managed school of middle socio-economic level in Mar del Plata (Argentina), was assessed. The Conjunction Search Task from the TAC Battery and the Coding and Symbol Search tasks from the Wechsler Intelligence Scale for Children were used. Overall, the results allow us to confirm that the perceptual inhibition task from the TAC Battery presents solid rates of internal and external validity, making it a valid measurement instrument for this process.
Contextual Cueing Effect in Spatial Layout Defined by Binocular Disparity
Zhao, Guang; Zhuang, Qian; Ma, Jie; Tu, Shen; Liu, Qiang; Sun, Hong-jin
2017-01-01
Repeated visual context induces higher search efficiency, revealing a contextual cueing effect, which depends on the association between the target and its visual context. In this study, participants performed a visual search task where search items were presented with depth information defined by binocular disparity. When the 3-dimensional (3D) configurations were repeated over blocks, the contextual cueing effect was obtained (Experiment 1). When depth information was in chaos over repeated configurations, visual search was not facilitated and the contextual cueing effect was largely eliminated (Experiment 2). However, when we applied a tiny random displacement to the search items in the 2-dimensional (2D) plane but maintained the depth information constant, the contextual cueing was preserved (Experiment 3). We concluded that the contextual cueing effect was robust in the context provided by 3D space with stereoscopic information, and more importantly, the visual system prioritized stereoscopic information in learning of spatial information when depth information was available. PMID:28912739
Independent and additive repetition priming of motion direction and color in visual search.
Kristjánsson, Arni
2009-03-01
Priming of visual search for Gabor patch stimuli, varying in color and local drift direction, was investigated. The task relevance of each feature varied between the different experimental conditions compared. When the target-defining dimension was color, a large effect of color repetition was seen as well as a smaller effect of the repetition of motion direction. The opposite priming pattern was seen when motion direction defined the target: the effect of motion direction repetition was this time larger than that of color repetition. Finally, when neither was task relevant, and the target-defining dimension was the spatial frequency of the Gabor patch, priming was seen for repetition of both color and motion direction, but the effects were smaller than in the previous two conditions. These results show that features do not necessarily have to be task relevant for priming to occur. There is little interaction between priming following repetition of color and motion: the two features show independent and additive priming effects, most likely reflecting that they are processed at separate processing sites in the nervous system, consistent with previous findings from neuropsychology and neurophysiology. The implications of the findings for theoretical accounts of priming in visual search are discussed.
Timing of saccadic eye movements during visual search for multiple targets
Wu, Chia-Chien; Kowler, Eileen
2013-01-01
Visual search requires sequences of saccades. Many studies have focused on spatial aspects of saccadic decisions, while relatively few (e.g., Hooge & Erkelens, 1999) consider timing. We studied saccadic timing during search for targets (thin circles containing tilted lines) located among nontargets (thicker circles). Tasks required either (a) estimating the mean tilt of the lines, or (b) looking at targets without a concurrent psychophysical task. The visual similarity of targets and nontargets affected both the probability of hitting a target and the saccade rate in both tasks. Saccadic timing also depended on immediate conditions, specifically, (a) the type of currently fixated location (dwell time was longer on targets than nontargets), (b) the type of goal (dwell time was shorter prior to saccades that hit targets), and (c) the ordinal position of the saccade in the sequence. The results show that timing decisions take into account the difficulty of finding targets, as well as the cost of delays. Timing strategies may be a compromise between the attempt to find and locate targets, or other suitable landing locations, using eccentric vision (at the cost of increased dwell times) versus a strategy of exploring less selectively at a rapid rate. PMID:24049045
Using Digital Libraries Non-Visually: Understanding the Help-Seeking Situations of Blind Users
ERIC Educational Resources Information Center
Xie, Iris; Babu, Rakesh; Joo, Soohyung; Fuller, Paige
2015-01-01
Introduction: This study explores blind users' unique help-seeking situations in interacting with digital libraries. In particular, help-seeking situations were investigated at both the physical and cognitive levels. Method: Fifteen blind participants performed three search tasks, including known- item search, specific information search, and…
Visual search asymmetries within color-coded and intensity-coded displays.
Yamani, Yusuke; McCarley, Jason S
2010-06-01
Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information. The design of symbology to produce search asymmetries (Treisman & Souther, 1985) offers a potential technique for doing this, but it is not obvious from existing models of search that an asymmetry observed in the absence of extraneous visual stimuli will persist within a complex color- or intensity-coded display. To address this issue, in the current study we measured the strength of a visual search asymmetry within displays containing color- or intensity-coded extraneous items. The asymmetry persisted strongly in the presence of extraneous items that were drawn in a different color (Experiment 1) or a lower contrast (Experiment 2) than the search-relevant items, with the targets favored by the search asymmetry producing highly efficient search. The asymmetry was attenuated but not eliminated when extraneous items were drawn in a higher contrast than search-relevant items (Experiment 3). Results imply that the coding of symbology to exploit visual search asymmetries can facilitate visual search for high-priority items even within color- or intensity-coded displays. PsycINFO Database Record (c) 2010 APA, all rights reserved.
Identifying a "default" visual search mode with operant conditioning.
Kawahara, Jun-ichiro
2010-09-01
The presence of a singleton in a task-irrelevant domain can impair visual search. This impairment, known as attentional capture, depends on the attentional set of participants. When narrowly searching for a specific feature (the feature search mode), only matching stimuli capture attention. When searching broadly (the singleton detection mode), any oddball captures attention. The present study examined which strategy represents the "default" mode using an operant conditioning approach in which participants were trained, in the absence of explicit instructions, to search for a target in an ambiguous context in which one of two modes was available. The results revealed that participants behaviorally adopted singleton detection as the default mode but reported using the feature search mode. Conscious strategies did not eliminate capture. These results challenge the view that a conscious set always modulates capture, suggesting that the visual system tends to rely on stimulus salience to deploy attention.
What are the Shapes of Response Time Distributions in Visual Search?
Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.
2011-01-01
Many visual search experiments measure reaction time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays in each of three classic search tasks: feature search, with the target defined by color; conjunction search, with the target defined by both color and orientation; and spatial configuration search for a 2 among distractor 5s. This large data set allows us to characterize the RT distributions in detail. We present the raw RT distributions and fit several psychologically motivated functions (ex-Gaussian, ex-Wald, Gamma, and Weibull) to the data. We analyze and interpret parameter trends from these four functions within the context of theories of visual search. PMID:21090905
History effects in visual search for monsters: search times, choice biases, and liking.
Chetverikov, Andrey; Kristjansson, Árni
2015-02-01
Repeating targets and distractors on consecutive visual search trials facilitates search performance, whereas switching targets and distractors harms search. In addition, search repetition leads to biases in free choice tasks, in that previously attended targets are more likely to be chosen than distractors. Another line of research has shown that attended items receive high liking ratings, whereas ignored distractors are rated negatively. Potential relations between the three effects are unclear, however. Here we simultaneously measured repetition benefits and switching costs for search times, choice biases, and liking ratings in color singleton visual search for "monster" shapes. We showed that when expectations from search repetition are violated, targets are liked less than when those expectations are confirmed. Choice biases were, on the other hand, affected by distractor repetition, but not by target/distractor switches. Target repetition speeded search times but had little influence on choice or liking. Our findings suggest that choice biases reflect distractor inhibition, and liking reflects the conflict associated with attending to previously inhibited stimuli, while speeded search follows both target and distractor repetition. Our results support the newly proposed affective-feedback-of-hypothesis-testing account of cognition, and additionally, shed new light on the priming of visual search.
Olivers, Christian N L; Meijer, Frank; Theeuwes, Jan
2006-10-01
In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by an additional memory task. Singleton distractors interfered even more when they were identical or related to the object held in memory, but only when it was difficult to verbalize the memory content. Furthermore, this content-specific interaction occurred for features that were relevant to the memory task but not for irrelevant features of the same object or for once-remembered objects that could be forgotten. Finally, memory-related distractors attracted more eye movements but did not result in longer fixations. The results demonstrate memory-driven attentional capture on the basis of content-specific representations. Copyright 2006 APA.
Visualizing Trumps Vision in Training Attention.
Reinhart, Robert M G; McClenahan, Laura J; Woodman, Geoffrey F
2015-07-01
Mental imagery can have powerful training effects on behavior, but how this occurs is not well understood. Here we show that even a single instance of mental imagery can improve attentional selection of a target more effectively than actually practicing visual search. By recording subjects' brain activity, we found that these imagery-induced training effects were due to perceptual attention being more effectively focused on targets following imagined training. Next, we examined the downside of this potent training by changing the target after several trials of training attention with imagery and found that imagined search resulted in more potent interference than actual practice following these target changes. Finally, we found that proactive interference from task-irrelevant elements in the visual displays appears to underlie the superiority of imagined training relative to actual practice. Our findings demonstrate that visual attention mechanisms can be effectively trained to select target objects in the absence of visual input, and this results in more effective control of attention than practicing the task itself. © The Author(s) 2015.
Yamani, Yusuke; Horrey, William J.; Liang, Yulan; Fisher, Donald L.
2016-01-01
Older drivers are at increased risk of intersection crashes. Previous work found that older drivers execute less frequent glances for detecting potential threats at intersections than middle-aged drivers. Yet, earlier work has also shown that an active training program doubled the frequency of these glances among older drivers, suggesting that these effects are not necessarily due to age-related functional declines. In light of these findings, the current study sought to explore the ability of older drivers to coordinate their head and eye movements while simultaneously steering the vehicle, as well as their glance behavior at intersections. In a driving simulator, older (M = 76 yrs) and middle-aged (M = 58 yrs) drivers completed different driving tasks: (1) travelling straight on a highway while scanning for peripheral information (a visual search task) and (2) navigating intersections with areas of potential hazard. The results replicate the finding that the older drivers did not execute glances for potential threats to the sides when turning at intersections as frequently as the middle-aged drivers. Furthermore, the results demonstrate costs of performing two concurrent tasks, highway driving and a visual search task on the side displays: the older drivers performed more poorly on the visual search task and needed to correct their steering positions more compared to their middle-aged counterparts. The findings are consistent with the predictions and are discussed in terms of a decoupling hypothesis, providing an account for the effects of the active training program. PMID:27736887
Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.
Koch, S; Bosch, H; Giereth, M; Ertl, T
2011-05-01
Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. The sheer amount of patent data to be analyzed poses challenges with respect to scalability, and further scalability issues arise from the diversity of users and the large variety of analysis tasks. With "PatViz", we have developed a system for the interactive analysis of patent information that addresses scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.
Rewarded visual items capture attention only in heterogeneous contexts.
Feldmann-Wüstefeld, Tobias; Brandhofer, Ruben; Schubö, Anna
2016-07-01
Reward is known to affect visual search performance. Rewarding targets can increase search performance, whereas rewarding distractors can decrease search performance. We used subcomponents of the N2pc in the event-related EEG, the NT (target negativity) and ND/PD (distractor negativity/positivity), in a visual search task to disentangle target and distractor processing related to reward. The visual search task comprised homogeneous and heterogeneous contexts in which a target and a colored distractor were embedded. After each correct trial, participants were given a monetary reward that depended on the color of the distractor. We found longer response times for displays with high-reward distractors compared to displays with low-reward distractors, indicating reward-induced interference, however only for heterogeneous contexts. The NT component, indicative of attention deployment to the target, showed that target selection was impaired by high-reward distractors, regardless of context homogeneity. Processing of distractors was not affected by reward in homogeneous contexts. In heterogeneous contexts, however, high-reward distractors were more likely to capture attention (ND) and required more effort to be suppressed (PD) than low-reward distractors. In sum, the results showed that, despite the fact that target selection is impaired by high-reward distractors in both homogeneous and heterogeneous background contexts, high-reward distractors capture attention only in scenarios that foster attentional capture. © 2016 Society for Psychophysiological Research.
Finding a face in the crowd: testing the anger superiority effect in Asperger Syndrome.
Ashwin, Chris; Wheelwright, Sally; Baron-Cohen, Simon
2006-06-01
Social threat captures attention and is processed rapidly and efficiently, with many lines of research showing involvement of the amygdala. Visual search paradigms looking at social threat have shown angry faces 'pop-out' in a crowd, compared to happy faces. Autism and Asperger Syndrome (AS) are neurodevelopmental conditions characterised by social deficits, abnormal face processing, and amygdala dysfunction. We tested adults with high-functioning autism (HFA) and AS using a facial visual search paradigm with schematic neutral and emotional faces. We found, contrary to predictions, that people with HFA/AS performed similarly to controls in many conditions. However, the anger superiority effect was reduced in the HFA/AS group when using widely varying crowd sizes and when faces were inverted, suggesting a difference in face-processing style may be evident even with simple schematic faces. We conclude there are intact threat detection mechanisms in AS, under simple and predictable conditions, but that like other face-perception tasks, the visual search of threat faces task reveals atypical face-processing in HFA/AS.
Visual tasks and postural sway in children with and without autism spectrum disorders.
Chang, Chih-Hui; Wade, Michael G; Stoffregen, Thomas A; Hsu, Chin-Yu; Pan, Chien-Yu
2010-01-01
We investigated the influences of two different suprapostural visual tasks, visual searching and visual inspection, on the postural sway of children with and without autism spectrum disorder (ASD). Sixteen ASD children (age=8.75±1.34 years; height=130.34±11.03 cm) were recruited from a local support group. Individuals with an intellectual disability as a co-occurring condition and those with severe behavior problems that required formal intervention were excluded. Twenty-two sex- and age-matched typically developing (TD) children (age=8.93±1.39 years; height=133.47±8.21 cm) were recruited from a local public elementary school. Postural sway was recorded using a magnetic tracking system (Flock of Birds, Ascension Technologies, Inc., Burlington, VT). Results indicated that the ASD children exhibited greater sway than the TD children. Despite this difference, both TD and ASD children showed reduced sway during the search task, relative to sway during the inspection task. These findings replicate those of Stoffregen et al. (2000), Stoffregen, Giveans, et al. (2009), Stoffregen, Villard, et al. (2009) and Prado et al. (2007) and extend them to TD children as well as ASD children. Both TD and ASD children were able to functionally modulate postural sway to facilitate the performance of a task that required higher perceptual effort. Copyright © 2010 Elsevier Ltd. All rights reserved.
The role of central attention in retrieval from visual short-term memory.
Magen, Hagit
2017-04-01
The role of central attention in visual short-term memory (VSTM) encoding and maintenance is well established, yet its role in retrieval has been largely unexplored. This study examined the involvement of central attention in retrieval from VSTM using a dual-task paradigm. Participants performed a color change-detection task. Set size varied between 1 and 3 items, and the memory sample was maintained for either a short or a long delay period. A secondary tone discrimination task was introduced at the end of the delay period, shortly before the appearance of a central probe, and occupied central attention while participants were searching within VSTM representations. Similarly to numerous previous studies, reaction time increased as a function of set size, reflecting the occurrence of a capacity-limited memory search. When the color targets were maintained over a short delay, memory was searched for the most part without the involvement of central attention. However, with a longer delay period, the search relied entirely on the operation of central attention. Taken together, this study demonstrates that central attention is involved in retrieval from VSTM, but the extent of its involvement depends on the duration of the delay period. Future studies will determine whether the type of memory search (parallel or serial) carried out during retrieval depends on the nature of the attentional mechanism involved in the task.
White matter tract integrity predicts visual search performance in young and older adults.
Bennett, Ilana J; Motes, Michael A; Rao, Neena K; Rypma, Bart
2012-02-01
Functional imaging research has identified frontoparietal attention networks involved in visual search, with mixed evidence regarding whether different networks are engaged when the search target differs from distracters by a single (elementary) versus multiple (conjunction) features. Neural correlates of visual search, and their potential dissociation, were examined here using integrity of white matter connecting the frontoparietal networks. The effect of aging on these brain-behavior relationships was also of interest. Younger and older adults performed a visual search task and underwent diffusion tensor imaging (DTI) to reconstruct 2 frontoparietal (superior and inferior longitudinal fasciculus; SLF and ILF) and 2 midline (genu, splenium) white matter tracts. As expected, results revealed age-related declines in conjunction, but not elementary, search performance; and in ILF and genu tract integrity. Importantly, integrity of the SLF, ILF, and genu tracts predicted search performance (conjunction and elementary), with no significant age group differences in these relationships. Thus, integrity of white matter tracts connecting frontoparietal attention networks contributes to search performance in younger and older adults. Copyright © 2012 Elsevier Inc. All rights reserved.
fMRI of parents of children with Asperger Syndrome: a pilot study.
Baron-Cohen, Simon; Ring, Howard; Chitnis, Xavier; Wheelwright, Sally; Gregory, Lloyd; Williams, Steve; Brammer, Mick; Bullmore, Ed
2006-06-01
People with autism or Asperger Syndrome (AS) show altered patterns of brain activity during visual search and emotion recognition tasks. Autism and AS are genetic conditions and parents may show the 'broader autism phenotype.' (1) To test if parents of children with AS show atypical brain activity during a visual search and an empathy task; (2) to test for sex differences during these tasks at the neural level; (3) to test if parents of children with autism are hyper-masculinized, as might be predicted by the 'extreme male brain' theory. We used fMRI during a visual search task (the Embedded Figures Test (EFT)) and an emotion recognition test (the 'Reading the Mind in the Eyes' (or Eyes) test). Twelve parents of children with AS were compared with 12 sex-matched controls. Factorial analysis was used to map main effects of sex, group (parents vs. controls), and sex × group interaction on brain function. An ordinal ANOVA also tested for regions of brain activity where females>males>fathers=mothers, to test for parental hyper-masculinization. RESULTS ON EFT TASK: Female controls showed more activity in extrastriate cortex than male controls, and both mothers and fathers showed even less activity in this area than sex-matched controls. There were no differences in group activation between mothers and fathers of children with AS. The ordinal ANOVA identified two specific regions in visual cortex (right and left, respectively) that showed the pattern Females>Males>Fathers=Mothers, both in BA 19. RESULTS ON EYES TASK: Male controls showed more activity in the left inferior frontal gyrus than female controls, and both mothers and fathers showed even more activity in this area compared to sex-matched controls. Female controls showed greater bilateral inferior frontal activation than males. This was not seen when comparing mothers to males, or mothers to fathers.
The ordinal ANOVA identified two specific regions that showed the pattern Females>Males>Mothers=Fathers: left medial temporal gyrus (BA 21) and left dorsolateral prefrontal cortex (BA 44). Parents of children with AS show atypical brain function during both visual search and emotion recognition, in the direction of hyper-masculinization of the brain. Because of the small sample size, and lack of age-matching between parents and controls, such results constitute a pilot study that needs replicating with larger samples.
Modelling eye movements in a categorical search task
Zelinsky, Gregory J.; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris
2013-01-01
We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. PMID:24018720
Pasqualotti, Léa; Baccino, Thierry
2014-01-01
Most studies of online advertisements have indicated that they have a negative impact on users' cognitive processes, especially when they include colorful or animated banners and when they are close to the text to be read. In the present study we assessed the effects of two advertisement features, distance from the text and animation, on visual strategies during a word-search task and a reading-for-comprehension task using Web-like pages. We hypothesized that the closer the advertisement was to the target text, the more cognitive processing difficulties it would cause. We also hypothesized that (1) animated banners would be more disruptive than static advertisements and (2) banners would have more effect on word-search performance than reading-for-comprehension performance. We used an automatic classifier to assess variations in use of Scanning and Reading visual strategies during task performance. The results showed that the effect of dynamic and static advertisements on visual strategies varies according to the task. Fixation duration indicated that the closest advertisements slowed down information processing, but there was no difference between the intermediate (40 pixel) and far (80 pixel) distance conditions. Our findings suggest that advertisements have a negative impact on users' performance mostly when many cognitive resources are required, as in reading-for-comprehension.
MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.
Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn
2013-12-01
We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields including medicine, sports and animation. Key tasks in working with motion data include analysis of motion states and transitions, and synthesis of motion vectors by interpolation and combination. In the practice of research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in the presence of large motion data. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states as a basis for data navigation, exploration, and search. Based on an overview-first type visualization, users are able to search for interesting sub-sequences of motion based on a query-by-example metaphor, and explore search results by details on demand. We developed MotionExplorer in close collaboration with the targeted users, researchers working on human motion synthesis and analysis, and conducted a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables the search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.
Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE.
Demelo, Jonathan; Parsons, Paul; Sedig, Kamran
2017-02-02
Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface by (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multi-stage triaging support to help mitigate the information overload problem. We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts. 
©Jonathan Demelo, Paul Parsons, Kamran Sedig. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 02.02.2017.
Functional size of human visual area V1: a neural correlate of top-down attention.
Verghese, Ashika; Kolbe, Scott C; Anderson, Andrew J; Egan, Gary F; Vidyasagar, Trichur R
2014-06-01
Heavy demands are placed on the brain's attentional capacity when selecting a target item in a cluttered visual scene, or when reading. It is widely accepted that such attentional selection is mediated by top-down signals from higher cortical areas to early visual areas such as the primary visual cortex (V1). Further, it has also been reported that there is considerable variation in the surface area of V1. This variation may impact on either the number or specificity of attentional feedback signals and, thereby, the efficiency of attentional mechanisms. In this study, we investigated whether individual differences between humans performing attention-demanding tasks can be related to the functional area of V1. We found that those with a larger representation in V1 of the central 12° of the visual field, as measured using BOLD signals from fMRI, were able to perform a serial search task at a faster rate. In line with recent suggestions of the vital role of visuo-spatial attention in reading, the speed of reading showed a strong positive correlation with the speed of visual search, although it showed little correlation with the size of V1. The results support the idea that the functional size of the primary visual cortex is an important determinant of the efficiency of selective spatial attention for simple tasks, and that the attentional processing required for complex tasks like reading is to a large extent determined by other brain areas and inter-areal connections. Copyright © 2014 Elsevier Inc. All rights reserved.
Strategic search from long-term memory: an examination of semantic and autobiographical recall.
Unsworth, Nash; Brewer, Gene A; Spillers, Gregory J
2014-01-01
Searching long-term memory is theoretically driven by both directed (search strategies) and random components. In the current study we conducted four experiments evaluating strategic search in semantic and autobiographical memory. Participants were required to generate either exemplars from the category of animals or the names of their friends for several minutes. Self-reported strategies suggested that participants typically relied on visualization strategies for both tasks and were less likely to rely on ordered strategies (e.g., alphabetic search). When participants were instructed to use particular strategies, the visualization strategy resulted in the highest levels of performance and the most efficient search, whereas ordered strategies resulted in the lowest levels of performance and fairly inefficient search. These results are consistent with the notion that retrieval from long-term memory is driven, in part, by search strategies employed by the individual, and that one particularly efficient strategy is to visualize various situational contexts that one has experienced in the past in order to constrain the search and generate the desired information.
Homonymous Visual Field Loss and Its Impact on Visual Exploration: A Supermarket Study.
Kasneci, Enkelejda; Sippel, Katrin; Heister, Martin; Aehling, Katrin; Rosenstiel, Wolfgang; Schiefer, Ulrich; Papageorgiou, Elena
2014-10-01
Homonymous visual field defects (HVFDs) may critically interfere with quality of life. The aim of this study was to assess the impact of HVFDs on a supermarket search task and to investigate the influence of visual search on task performance. Ten patients with HVFDs (four with a right-sided [HR] and six with a left-sided defect [HL]) and 10 healthy-sighted, sex- and age-matched control subjects were asked to collect 20 products placed on two supermarket shelves as quickly as possible. Task performance was rated as "passed" or "failed" with regard to the time per correctly collected item (T_C-failed = 4.84 seconds, based on the performance of healthy subjects). Eye movements were analyzed regarding horizontal gaze activity, glance frequency, and glance proportion for different VF areas. Seven of 10 HVFD patients (three HR, four HL) passed the supermarket search task. Patients who passed needed significantly less time per correctly collected item and looked more frequently toward the VFD area than patients who failed. HL patients who passed the test showed a higher percentage of glances beyond the 60° VF (P < 0.05). A considerable number of HVFD patients performed successfully and could compensate for the HVFD by shifting the gaze toward the peripheral VF and the VFD area. These findings provide new insights into gaze adaptations in patients with HVFDs during activities of daily living and will enhance the design and development of realistic examination tools for use in the clinical setting to improve daily functioning. (http://www.clinicaltrials.gov, NCT01372319, NCT01372332).
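The pass/fail criterion described above (time per correctly collected item against a 4.84-second cutoff) reduces to simple arithmetic; a minimal sketch, with the function name and the example timings invented for illustration:

```python
# Illustrative sketch (not from the study): rating supermarket-task
# performance by seconds per correctly collected item, using the
# T_C-failed = 4.84 s cutoff reported in the abstract.

def rate_performance(total_time_s, items_correct, cutoff_s=4.84):
    """Return 'passed' or 'failed' based on seconds per correct item."""
    if items_correct == 0:
        return "failed"
    time_per_item = total_time_s / items_correct
    return "passed" if time_per_item <= cutoff_s else "failed"

# Hypothetical patient: 18 of 20 products collected correctly in 80 s
print(rate_performance(80.0, 18))  # 80/18 ≈ 4.44 s/item → "passed"
```

The cutoff itself was derived from the healthy control group's performance; any real implementation would recompute it from the controls rather than hard-code it.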
Williamson, Ross S.; Hancock, Kenneth E.; Shinn-Cunningham, Barbara G.; Polley, Daniel B.
2015-01-01
Active search is a ubiquitous goal-driven behavior wherein organisms purposefully investigate the sensory environment to locate a target object. During active search, brain circuits analyze a stream of sensory information from the external environment, adjusting for internal signals related to self-generated movement or “top-down” weighting of anticipated target and distractor properties. Sensory responses in the cortex can be modulated by internal state [1–9], though it remains unclear how much of this modulation arises in the cortex de novo versus being inherited from subcortical stations [4, 8–12]. We addressed this question by simultaneously recording from auditory and visual regions of the thalamus (MG and LG, respectively) while mice used dynamic auditory or visual feedback to search for a hidden target within an annular track. Locomotion was associated with strongly suppressed responses and reduced decoding accuracy in MG but a subtle increase in LG spiking. Because stimuli in one modality provided critical information about target location while the other served as a distractor, we could also estimate the importance of task relevance in both thalamic subdivisions. In contrast to the effects of locomotion, we found that LG responses were reduced overall yet decoded stimuli more accurately when vision was behaviorally relevant, whereas task relevance had little effect on MG responses. This double dissociation between the influences of task relevance and movement in MG and LG highlights a role for extrasensory modulation in the thalamus but also suggests key differences in the organization of modulatory circuitry between the auditory and visual pathways. PMID:26119749
An Empirical Comparison of Visualization Tools To Assist Information Retrieval on the Web.
ERIC Educational Resources Information Center
Heo, Misook; Hirtle, Stephen C.
2001-01-01
Discusses problems with navigation in hypertext systems, including cognitive overload, and describes a study that tested information visualization techniques to see which best represented the underlying structure of Web space. Considers the effects of visualization techniques on user performance on information searching tasks and the effects of…
Spatial Working Memory Interferes with Explicit, but Not Probabilistic Cuing of Spatial Attention
ERIC Educational Resources Information Center
Won, Bo-Yeong; Jiang, Yuhong V.
2015-01-01
Recent empirical and theoretical work has depicted a close relationship between visual attention and visual working memory. For example, rehearsal in spatial working memory depends on spatial attention, whereas adding a secondary spatial working memory task impairs attentional deployment in visual search. These findings have led to the proposal…
Guedry, F E; Benson, A J; Moore, H J
1982-06-01
Visual search within a head-fixed display consisting of a 12 × 12 digit matrix is degraded by whole-body angular oscillation at 0.02 Hz (±155°/s peak velocity), and signs and symptoms of motion sickness are prominent in a number of individuals within a 5-min exposure. Exposure to 2.5 Hz (±20°/s peak velocity) produces equivalent degradation of the visual search task, but does not produce signs and symptoms of motion sickness within a 5-min exposure.
Named Entity Recognition in a Hungarian NL Based QA System
NASA Astrophysics Data System (ADS)
Tikk, Domonkos; Szidarovszky, P. Ferenc; Kardkovács, Zsolt T.; Magyar, Gábor
In the WoW project, our purpose is to create a complex search interface with the following features: search in the deep web content of contracted partners' databases, processing Hungarian natural language (NL) questions and transforming them to SQL queries for database access, and image search supported by a visual thesaurus that describes the visual content of images in a structural form (also in Hungarian). This paper primarily focuses on a particular problem of the question processing task: entity recognition. Before going into details, we give a short overview of the project's aims.
Working memory dependence of spatial contextual cueing for visual search.
Pollmann, Stefan
2018-05-10
When spatial stimulus configurations repeat in visual search, a search facilitation, resulting in shorter search times, can be observed that is due to incidental learning. This contextual cueing effect appears to be rather implicit, uncorrelated with observers' explicit memory of display configurations. Nevertheless, as I review here, this search facilitation due to contextual cueing depends on visuospatial working memory resources, and it disappears when visuospatial working memory is loaded by a concurrent delayed match to sample task. However, the search facilitation immediately recovers for displays learnt under visuospatial working memory load when this load is removed in a subsequent test phase. Thus, latent learning of visuospatial configurations does not depend on visuospatial working memory, but the expression of learning, as memory-guided search in repeated displays, does. This working memory dependence also has consequences for visual search with foveal vision loss, where top-down controlled visual exploration strategies pose high demands on visuospatial working memory, in this way interfering with memory-guided search in repeated displays. Converging evidence for the contribution of working memory to contextual cueing comes from neuroimaging data demonstrating that distinct cortical areas along the intraparietal sulcus as well as more ventral parieto-occipital cortex are jointly activated by visual working memory and contextual cueing. © 2018 The British Psychological Society.
Wang, Min; Yang, Ping; Wan, Chaoyang; Jin, Zhenlan; Zhang, Junjun; Li, Ling
2018-01-01
The contents of working memory (WM) can affect subsequent visual search performance, resulting in either benefit or cost effects, when the visual search target is included in or spatially dissociated from the memorized contents, respectively. The right dorsolateral prefrontal cortex (rDLPFC) and the right posterior parietal cortex (rPPC) have been suggested to be associated with the congruence/incongruence effects of the WM content and the visual search target. Thus, in the present study, we investigated the role of the dorsolateral prefrontal cortex and the PPC in controlling the interaction between WM and attention during a visual search, using repetitive transcranial magnetic stimulation (rTMS). Subjects maintained a color in WM while performing a search task. The color cue contained the target (valid), the distractor (invalid), or did not reappear in the search display (neutral). Stimulation concurrent with the search onset showed that, relative to rTMS over the vertex, rTMS over the rPPC and rDLPFC further decreased the search reaction time when the memory cue contained the search target. The results suggest that the rDLPFC and the rPPC are critical for controlling WM biases in human visual attention.
Beesley, Tom; Hanafi, Gunadi; Vadillo, Miguel A; Shanks, David R; Livesey, Evan J
2018-05-01
Two experiments examined biases in selective attention during contextual cuing of visual search. When participants were instructed to search for a target of a particular color, overt attention (as measured by the location of fixations) was biased strongly toward distractors presented in that same color. However, when participants searched for targets that could be presented in 1 of 2 possible colors, overt attention was not biased between the different distractors, regardless of whether these distractors predicted the location of the target (repeating) or did not (randomly arranged). These data suggest that selective attention in visual search is guided only by the demands of the target detection task (the attentional set) and not by the predictive validity of the distractor elements. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Iconic memory requires attention
Persuh, Marjan; Genzer, Boris; Melara, Robert D.
2012-01-01
Two experiments investigated whether attention plays a role in iconic memory, employing either a change detection paradigm (Experiment 1) or a partial-report paradigm (Experiment 2). In each experiment, attention was taxed during initial display presentation, focusing the manipulation on consolidation of information into iconic memory, prior to transfer into working memory. Observers were able to maintain high levels of performance (accuracy of change detection or categorization) even when concurrently performing an easy visual search task (low load). However, when the concurrent search was made difficult (high load), observers' performance dropped to almost chance levels, while search accuracy held at single-task levels. The effects of attentional load remained the same across paradigms. The results suggest that, without attention, participants consolidate in iconic memory only gross representations of the visual scene, information too impoverished for successful detection of perceptual change or categorization of features. PMID:22586389
Underestimating numerosity of items in visual search tasks.
Cassenti, Daniel N; Kelley, Troy D; Ghirardelli, Thomas G
2010-10-01
Previous research on numerosity judgments addressed attended items, while the present research addresses underestimation for unattended items in visual search tasks. One potential cause of underestimation for unattended items is that estimates of quantity may depend on viewing a large portion of the display within foveal vision. Another theory follows from the occupancy model: estimating quantity of items in greater proximity to one another increases the likelihood of an underestimation error. Three experimental manipulations addressed aspects of underestimation for unattended items: the size of the distracters, the distance of the target from fixation, and whether items were clustered together. Results suggested that the underestimation effect for unattended items was best explained within a Gestalt grouping framework.
Effects of Peripheral Visual Field Loss on Eye Movements During Visual Search
Wiecek, Emily; Pasquale, Louis R.; Fiser, Jozsef; Dakin, Steven; Bex, Peter J.
2012-01-01
Natural vision involves sequential eye movements that bring the fovea to locations selected by peripheral vision. How peripheral visual field loss (PVFL) affects this process is not well understood. We examine how the location and extent of PVFL affects eye movement behavior in a naturalistic visual search task. Ten patients with PVFL and 13 normally sighted subjects with full visual fields (FVF) completed 30 visual searches monocularly. Subjects located a 4° × 4° target, pseudo-randomly selected within a 26° × 11° natural image. Eye positions were recorded at 50 Hz. Search duration, fixation duration, saccade size, and number of saccades per trial were not significantly different between PVFL and FVF groups (p > 0.1). A χ2 test showed that the distributions of saccade directions for PVFL and FVF subjects were significantly different in 8 out of 10 cases (p < 0.01). Humphrey Visual Field pattern deviations for each subject were compared with the spatial distribution of eye movement directions. There were no significant correlations between saccade directional bias and visual field sensitivity across the 10 patients. Visual search performance was not significantly affected by PVFL. An analysis of eye movement directions revealed that patients with PVFL show a biased directional distribution that was not directly related to the locus of vision loss, challenging feed-forward models of eye movement control. Consequently, many patients do not optimally compensate for visual field loss during visual search. PMID:23162511
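The χ2 comparison of saccade-direction distributions reported above can be sketched as follows; the direction bins and all counts are invented for illustration, not taken from the study:

```python
# Hypothetical sketch: comparing binned saccade-direction counts between a
# patient and a control with a chi-square test of homogeneity (2 x k table).
# All counts below are invented, not the study's data.

def chi_square_stat(observed_a, observed_b):
    """Chi-square statistic for two samples of binned counts."""
    total_a, total_b = sum(observed_a), sum(observed_b)
    grand = total_a + total_b
    stat = 0.0
    for a, b in zip(observed_a, observed_b):
        col = a + b
        exp_a = total_a * col / grand  # expected count under homogeneity
        exp_b = total_b * col / grand
        stat += (a - exp_a) ** 2 / exp_a + (b - exp_b) ** 2 / exp_b
    return stat

# Eight 45-degree direction bins (invented counts)
pvfl = [40, 22, 15, 10, 12, 14, 30, 37]  # directionally biased
fvf  = [25, 24, 23, 22, 21, 22, 21, 22]  # roughly uniform
stat = chi_square_stat(pvfl, fvf)
# Compare to the critical value for df = k - 1 = 7 at alpha = 0.01 (18.48)
print(stat > 18.48)  # True: distributions differ significantly
```

In practice one would use a library routine (e.g., a contingency-table chi-square test) that also returns the p-value; the hand-rolled statistic above just makes the expected-count computation explicit.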
Eye movements and the span of the effective stimulus in visual search.
Bertera, J H; Rayner, K
2000-04-01
The span of the effective stimulus during visual search through an unstructured alphanumeric array was investigated by using eye-contingent-display changes while the subjects searched for a target letter. In one condition, a window exposing the search array moved in synchrony with the subjects' eye movements, and the size of the window was varied. Performance reached asymptotic levels when the window was 5 degrees. In another condition, a foveal mask moved in synchrony with each eye movement, and the size of the mask was varied. The foveal mask conditions were much more detrimental to search behavior than the window conditions, indicating the importance of foveal vision during search. The size of the array also influenced performance, but performance reached asymptote for all array sizes tested at the same window size, and the effect of the foveal mask was the same for all array sizes. The results indicate that both acuity and difficulty of the search task influenced the span of the effective stimulus during visual search.
Investigating the role of visual and auditory search in reading and developmental dyslexia
Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane
2013-01-01
It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements is linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a “serial” search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d′) strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggests that some visual mechanisms engaged in “serial” search contribute to reading and orthographic knowledge via VA Span skills regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills. PMID:24093014
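The slopes and intercepts of the search functions discussed above come from fitting reaction time as a linear function of display size; a minimal sketch with invented data points:

```python
# Minimal sketch (invented data) of a search-function fit: reaction time
# modeled as RT = intercept + slope * set_size by ordinary least squares.
# Slopes index per-item search cost; intercepts index set-size-independent
# stages, which is where the dyslexic group differed in the study above.

def fit_search_function(set_sizes, rts_ms):
    """Return (intercept_ms, slope_ms_per_item) of the best-fit line."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts_ms) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts_ms))
    sxx = sum((x - mx) ** 2 for x in set_sizes)
    slope = sxy / sxx
    return my - slope * mx, slope

# Hypothetical display sizes of 4, 10, 16 (target plus 3, 9, or 15
# distracters, as in the study design); RTs are invented.
sizes = [4, 10, 16]
rts = [620.0, 920.0, 1220.0]
intercept, slope = fit_search_function(sizes, rts)
print(intercept, slope)  # 420.0 ms intercept, 50.0 ms/item slope
```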
Running the figure to the ground: figure-ground segmentation during visual search.
Ralph, Brandon C W; Seli, Paul; Cheng, Vivian O Y; Solman, Grayden J F; Smilek, Daniel
2014-04-01
We examined how figure-ground segmentation occurs across multiple regions of a visual array during a visual search task. Stimuli consisted of arrays of black-and-white figure-ground images in which roughly half of each image depicted a meaningful object, whereas the other half constituted a less meaningful shape. The colours of the meaningful regions of the targets and distractors were either the same (congruent) or different (incongruent). We found that incongruent targets took longer to locate than congruent targets (Experiments 1, 2, and 3) and that this segmentation-congruency effect decreased when the number of search items was reduced (Experiment 2). Furthermore, an analysis of eye movements revealed that participants spent more time scrutinising the target before confirming its identity on incongruent trials than on congruent trials (Experiment 3). These findings suggest that the distractor context influences target segmentation and detection during visual search. Copyright © 2014 Elsevier B.V. All rights reserved.
Location cue validity affects inhibition of return of visual processing.
Wright, R D; Richard, C M
2000-01-01
Inhibition-of-return is the process by which visual search for an object positioned among others is biased toward novel rather than previously inspected items. It is thought to occur automatically and to increase search efficiency. We examined this phenomenon by studying the facilitative and inhibitory effects of location cueing on target-detection response times in a search task. The results indicated that facilitation was a reflexive consequence of cueing whereas inhibition appeared to depend on cue informativeness. More specifically, the inhibition-of-return effect occurred only when the cue provided no information about the impending target's location. We suggest that the results are consistent with the notion of two levels of visual processing. The first involves rapid and reflexive operations that underlie the facilitative effects of location cueing on target detection. The second involves a rapid but goal-driven inhibition procedure that the perceiver can invoke if doing so will enhance visual search performance.
Rare, but obviously there: effects of target frequency and salience on visual search accuracy.
Biggs, Adam T; Adamo, Stephen H; Mitroff, Stephen R
2014-10-01
Accuracy can be extremely important for many visual search tasks. However, numerous factors work to undermine successful search. Several negative influences on search have been well studied, yet one potentially influential factor has gone almost entirely unexplored, namely: how is search performance affected by the likelihood that a specific target might appear? A recent study demonstrated that when specific targets appear infrequently (i.e., once in every thousand trials) they were, on average, not often found. Even so, some infrequently appearing targets were actually found quite often, suggesting that the targets' frequency is not the only factor at play. Here, we investigated whether salience (i.e., the extent to which an item stands out during search) could explain why some infrequent targets are easily found whereas others are almost never found. Using the mobile application Airport Scanner, we assessed how individual target frequency and salience interacted in a visual search task that included a wide array of targets and millions of trials. Target frequency and salience were both significant predictors of search accuracy, although target frequency explained more of the accuracy variance. Further, when examining only the rarest target items (those that appeared on less than 0.15% of all trials), there was a significant relationship between salience and accuracy such that less salient items were less likely to be found. Beyond implications for search theory, these data suggest significant vulnerability for real-world searches that involve targets that are both infrequent and hard-to-spot. Copyright © 2014 Elsevier B.V. All rights reserved.
Dilution: a theoretical burden or just load? A reply to Tsal and Benoni (2010).
Lavie, Nilli; Torralbo, Ana
2010-12-01
Load theory of attention proposes that distractor processing is reduced in tasks with high perceptual load that exhaust attentional capacity within task-relevant processing. In contrast, tasks of low perceptual load leave spare capacity that spills over, resulting in the perception of task-irrelevant, potentially distracting stimuli. Tsal and Benoni (2010) find that distractor response competition effects can be reduced under conditions with a high search set size but low perceptual load (due to a singleton color target). They claim that the usual effect of search set size on distractor processing is not due to attentional load but instead attribute this to lower level visual interference. Here, we propose an account for their findings within load theory. We argue that in tasks of low perceptual load but high set size, an irrelevant distractor competes with the search nontargets for remaining capacity. Thus, distractor processing is reduced under conditions in which the search nontargets receive the spillover of capacity instead of the irrelevant distractor. We report a new experiment testing this prediction. Our new results demonstrate that, when peripheral distractor processing is reduced, it is the search nontargets nearest to the target that are perceived instead. Our findings provide new evidence for the spare capacity spillover hypothesis made by load theory and rule out accounts in terms of lower level visual interference (or mere "dilution") for cases of reduced distractor processing under low load in displays of high set size. We also discuss additional evidence that discounts the viability of Tsal and Benoni's dilution account as an alternative to perceptual load.
The Effect of Animated Banner Advertisements on a Visual Search Task
2001-01-01
…The experimental result calls into question previous advertising tips suggested by WebWeek, cited in [17]. In 1996, the online magazine recommended that site… …prone in the presence of animated banners. Keywords: animation, visual search, banner advertisements, flashing.
Visual search for features and conjunctions following declines in the useful field of view.
Cosman, Joshua D; Lees, Monica N; Lee, John D; Rizzo, Matthew; Vecera, Shaun P
2012-01-01
Typical measures for assessing the useful field of view (UFOV) involve many components of attention. The objective of the current experiment was to examine differences in visual search efficiency for older individuals with and without UFOV impairment. The authors used a computerized screening instrument to assess the useful field of view and to characterize participants as having an impaired or normal UFOV. Participants also performed two visual search tasks, a feature search (e.g., search for a green target among red distractors) or a conjunction search (e.g., a green target with a gap on its left or right side among red distractors with gaps on the left or right and green distractors with gaps on the top or bottom). Visual search performance did not differ between UFOV impaired and unimpaired individuals when searching for a basic feature. However, search efficiency was lower for impaired individuals than unimpaired individuals when searching for a conjunction of features. The results suggest that UFOV decline in normal aging is associated with conjunction search. This finding suggests that the underlying cause of UFOV decline may arise from an overall decline in attentional efficiency. Because the useful field of view is a reliable predictor of driving safety, the results suggest that decline in the everyday visual behavior of older adults might arise from attentional declines.
2013-10-07
…from the SAGAT and designed to assess the perception and comprehension components of SA, asking questions of the participant after each trial… Hazards were no closer than 3° of visual angle from each other; this design ensured that targets and hazards could not co-occur in the same… The vehicle triggered the payload task, whereby the operator performed a visual search task to identify an object, such as a ship or a car, in the payload…
Automatic guidance of attention during real-world visual search.
Seidl-Rathkopf, Katharina N; Turk-Browne, Nicholas B; Kastner, Sabine
2015-08-01
Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, because the features, locations, and times of appearance of relevant objects often are not known in advance. Thus, a mechanism by which attention is automatically biased toward information that is potentially relevant may be helpful. We tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of nonmatching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty.
Sung, Kyongje
2008-12-01
Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.
Fixation and saliency during search of natural scenes: the case of visual agnosia.
Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey
2009-07-01
Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance.
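The "model-predicted saliency at fixation locations" analysis described in this abstract can be illustrated with a minimal sketch: build a simple center-surround (difference-of-Gaussians) saliency map and sample it at fixation coordinates. This is only a toy stand-in under stated assumptions; the image, fixation points, and filter scales below are hypothetical, and published saliency-map models (e.g., Itti-Koch) combine many more feature channels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(image):
    """Toy center-surround saliency: difference of Gaussians on luminance."""
    fine = gaussian_filter(image, sigma=2)
    coarse = gaussian_filter(image, sigma=8)
    sal = np.abs(fine - coarse)
    return sal / sal.max()  # normalize to [0, 1]

def saliency_at_fixations(sal, fixations):
    """Sample the saliency map at (row, col) fixation coordinates."""
    return np.array([sal[r, c] for r, c in fixations])

# Hypothetical scene: uniform background with one high-contrast patch.
img = np.zeros((100, 100))
img[40:60, 40:60] = 1.0
sal = saliency_map(img)

on_target = saliency_at_fixations(sal, [(50, 50), (45, 55)])   # inside patch
off_target = saliency_at_fixations(sal, [(10, 10), (90, 90)])  # background
print(on_target.mean() > off_target.mean())  # -> True
```

Comparing mean model saliency at observed fixations against control locations is the gist of the "saliency at fixation locations" comparison between the patient and controls.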
Why are there eccentricity effects in visual search? Visual and attentional hypotheses.
Wolfe, J M; O'Neill, P; Bennett, S C
1998-01-01
In standard visual search experiments, observers search for a target item among distracting items. The locations of target items are generally random within the display and ignored as a factor in data analysis. Previous work has shown that targets presented near fixation are, in fact, found more efficiently than are targets presented at more peripheral locations. This paper proposes that the primary cause of this "eccentricity effect" (Carrasco, Evert, Chang, & Katz, 1995) is an attentional bias that allocates attention preferentially to central items. The first four experiments dealt with the possibility that visual, and not attentional, factors underlie the eccentricity effect. They showed that the eccentricity effect cannot be accounted for by the peripheral reduction in visual sensitivity, peripheral crowding, or cortical magnification. Experiment 5 tested the attention allocation model and also showed that RT × set size effects can be independent of eccentricity effects. Experiment 6 showed that the effective set size in a search task depends, in part, on the eccentricity of the target because observers search from fixation outward.
Impact of Glaucoma and Dry Eye on Text-Based Searching.
Sun, Michelle J; Rubin, Gary S; Akpek, Esen K; Ramulu, Pradeep Y
2017-06-01
We determine if visual field loss from glaucoma and/or measures of dry eye severity are associated with difficulty searching, as judged by slower search times on a text-based search task. Glaucoma patients with bilateral visual field (VF) loss, patients with clinically significant dry eye, and normally-sighted controls were enrolled from the Wilmer Eye Institute clinics. Subjects searched three Yellow Pages excerpts for a specific phone number, and search time was recorded. A total of 50 glaucoma subjects, 40 dry eye subjects, and 45 controls completed study procedures. On average, glaucoma patients exhibited 57% longer search times compared to controls (95% confidence interval [CI], 26%-96%, P < 0.001), and longer search times were noted among subjects with greater VF loss (P < 0.001), worse contrast sensitivity (P < 0.001), and worse visual acuity (P = 0.026). Dry eye subjects demonstrated similar search times compared to controls, though worse Ocular Surface Disease Index (OSDI) vision-related subscores were associated with longer search times (P < 0.01). Search times showed no association with OSDI symptom subscores (P = 0.20) or objective measures of dry eye (P > 0.08 for Schirmer's testing without anesthesia, corneal fluorescein staining, and tear film breakup time). Text-based visual search is slower for glaucoma patients with greater levels of VF loss and dry eye patients with greater self-reported visual difficulty, and these difficulties may contribute to decreased quality of life in these groups. Visual search is impaired in glaucoma and dry eye groups compared to controls, highlighting the need for compensatory strategies and tools to assist individuals in overcoming their deficiencies.
Deployment of spatial attention towards locations in memory representations. An EEG study.
Leszczyński, Marcin; Wykowska, Agnieszka; Perez-Osorio, Jairo; Müller, Hermann J
2013-01-01
Recalling information from visual short-term memory (VSTM) involves the same neural mechanisms as attending to an actually perceived scene. In particular, retrieval from VSTM has been associated with orienting of visual attention towards a location within a spatially-organized memory representation. However, an open question concerns whether spatial attention is also recruited during VSTM retrieval even when performing the task does not require access to spatial coordinates of items in the memorized scene. The present study combined a visual search task with a modified, delayed central probe protocol, together with EEG analysis, to answer this question. We found a temporal contralateral negativity (TCN) elicited by a centrally presented go-signal which was spatially uninformative and featurally unrelated to the search target and informed participants only about a response key that they had to press to indicate a prepared target-present vs. -absent decision. This lateralization during VSTM retrieval (TCN) provides strong evidence of a shift of attention towards the target location in the memory representation, which occurred despite the fact that the present task required no spatial (or featural) information from the search to be encoded, maintained, and retrieved to produce the correct response and that the go-signal did not itself specify any information relating to the location and defining feature of the target.
HOW DO RADIOLOGISTS USE THE HUMAN SEARCH ENGINE?
Wolfe, Jeremy M; Evans, Karla K; Drew, Trafton; Aizenman, Avigael; Josephs, Emilie
2016-06-01
Radiologists perform many 'visual search tasks' in which they look for one or more instances of one or more types of target item in a medical image (e.g. cancer screening). To understand and improve how radiologists do such tasks, it must be understood how the human 'search engine' works. This article briefly reviews some of the relevant work on this aspect of medical image perception. Questions include: How are attention and the eyes guided in radiologic search? How is global (image-wide) information used in search? How might properties of human vision and human cognition lead to errors in radiologic search? © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
The Speed of Serial Attention Shifts in Visual Search: Evidence from the N2pc Component.
Grubert, Anna; Eimer, Martin
2016-02-01
Finding target objects among distractors in a visual search display is often assumed to be based on sequential movements of attention between different objects. However, the speed of such serial attention shifts is still under dispute. We employed a search task that encouraged the successive allocation of attention to two target objects in the same search display and measured N2pc components to determine how fast attention moved between these objects. Each display contained one digit in a known color (fixed-color target) and another digit whose color changed unpredictably across trials (variable-color target) together with two gray distractor digits. Participants' task was to find the fixed-color digit and compare its numerical value with that of the variable-color digit. N2pc components to fixed-color targets preceded N2pc components to variable-color digits, demonstrating that these two targets were indeed selected in a fixed serial order. The N2pc to variable-color digits emerged approximately 60 msec after the N2pc to fixed-color digits, which shows that attention can be reallocated very rapidly between different target objects in the visual field. When search display durations were increased, thereby relaxing the temporal demands on serial selection, the two N2pc components to fixed-color and variable-color targets were elicited within 90 msec of each other. Results demonstrate that sequential shifts of attention between different target locations can operate very rapidly at speeds that are in line with the assumptions of serial selection models of visual search.
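The N2pc measure used in this study is computed as a difference wave: ERP amplitude at posterior electrodes contralateral to the target minus amplitude at the corresponding ipsilateral electrodes, with onset latency read off the difference wave. The sketch below uses synthetic, illustrative waveforms and a deliberately simple fixed-threshold onset criterion (real analyses use jackknifing or fractional-area latency measures), so treat it as a didactic assumption, not the authors' pipeline.

```python
import numpy as np

def n2pc_wave(contra, ipsi):
    """N2pc difference wave: contralateral minus ipsilateral ERP (in µV)."""
    return contra - ipsi

def onset_latency(diff, times, threshold=-0.5):
    """First time point at which the difference wave is more negative than
    the threshold (the N2pc is a negativity), or None if never reached."""
    below = np.where(diff < threshold)[0]
    return times[below[0]] if below.size else None

# Synthetic ERPs sampled every 2 ms: a contralateral negativity emerging
# at 200 ms post-stimulus (amplitude and timing are illustrative only).
times = np.arange(0, 500, 2)
ipsi = np.zeros(times.shape)
contra = np.where((times >= 200) & (times <= 280), -1.5, 0.0)

diff = n2pc_wave(contra, ipsi)
print(onset_latency(diff, times))  # -> 200
```

Measuring the lag between two such onsets (one per target) is how the roughly 60 msec shift time between the fixed-color and variable-color targets was estimated.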
Coordinating Cognition: The Costs and Benefits of Shared Gaze during Collaborative Search
ERIC Educational Resources Information Center
Brennan, Susan E.; Chen, Xin; Dickinson, Christopher A.; Neider, Mark B.; Zelinsky, Gregory J.
2008-01-01
Collaboration has its benefits, but coordination has its costs. We explored the potential for remotely located pairs of people to collaborate during visual search, using shared gaze and speech. Pairs of searchers wearing eyetrackers jointly performed an O-in-Qs search task alone, or in one of three collaboration conditions: shared gaze (with one…
Huang, Liqiang; Mo, Lei; Li, Ying
2012-04-01
A large part of the empirical research in the field of visual attention has focused on various concrete paradigms. However, as yet, there has been no clear demonstration of whether or not these paradigms are indeed measuring the same underlying construct. We collected a very large data set (nearly 1.3 million trials) to address this question. We tested 257 participants on nine paradigms: conjunction search, configuration search, counting, tracking, feature access, spatial pattern, response selection, visual short-term memory, and change blindness. A fairly general attention factor was identified. Some of the participants were also tested on eight other paradigms. This general attention factor was found to be correlated with intelligence, visual marking, task switching, mental rotation, and the Stroop task. On the other hand, a few paradigms that are very important in the attention literature (attentional capture, consonance-driven orienting, and inhibition of return) were found to be dissociated from this general attention factor.
Visual search and spatial attention: ERPs in focussed and divided attention conditions.
Wijers, A A; Okita, T; Mulder, G; Mulder, L J; Lorist, M M; Poiesz, R; Scheffers, M K
1987-08-01
ERPs and performance were measured in divided and focussed attention visual search tasks. In focussed attention tasks, to-be-attended and to-be-ignored letters were presented simultaneously. We varied display load, mapping conditions and display size. RT, P3b-latency and negativity in the ERP associated with controlled search all increased with display load. Each of these measures showed selectivity of controlled search, in that they decreased with focussing of attention. An occipital N230, on the other hand, was not sensitive to focussing of attention, but was primarily affected by display load. ERPs to both attended and unattended targets in focussed attention conditions showed an N2 compared to nontargets, suggesting that both automatic and controlled letter classifications are possible. These effects were not affected by display size. Consistent mapping resulted in shorter RT and P3b-latency in divided attention conditions, compared to varied mapping conditions, but had no effect in focussed attention conditions.
Evidence for an attentional component of inhibition of return in visual search.
Pierce, Allison M; Crouse, Monique D; Green, Jessica J
2017-11-01
Inhibition of return (IOR) is typically described as an inhibitory bias against returning attention to a recently attended location as a means of promoting efficient visual search. Most studies examining IOR, however, either do not use visual search paradigms or do not effectively isolate attentional processes, making it difficult to conclusively link IOR to a bias in attention. Here, we recorded ERPs during a simple visual search task designed to isolate the attentional component of IOR to examine whether an inhibitory bias of attention is observed and, if so, how it influences visual search behavior. Across successive visual search displays, we found evidence of both a broad, hemisphere-wide inhibitory bias of attention along with a focal, target location-specific facilitation. When the target appeared in the same visual hemifield in successive searches, responses were slower and the N2pc component was reduced, reflecting a bias of attention away from the previously attended side of space. When the target occurred at the same location in successive searches, responses were facilitated and the P1 component was enhanced, likely reflecting spatial priming of the target. These two effects are combined in the response times, leading to a reduction in the IOR effect for repeated target locations. Using ERPs, however, these two opposing effects can be isolated in time, demonstrating that the inhibitory biasing of attention still occurs even when response-time slowing is ameliorated by spatial priming. © 2017 Society for Psychophysiological Research.
Learned face-voice pairings facilitate visual search.
Zweig, L Jacob; Suzuki, Satoru; Grabowecky, Marcia
2015-04-01
Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information, and they facilitate localization of the sought-after individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face-voice associations and verified learning using a two-alternative forced choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent learned voice speeded visual search for the associated face. This result suggests that voices facilitate the visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training.
Changing viewer perspectives reveals constraints to implicit visual statistical learning.
Jiang, Yuhong V; Swallow, Khena M
2014-10-07
Statistical learning, the learning of environmental regularities to guide behavior, likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations. © 2014 ARVO.
Gazing into Thin Air: The Dual-Task Costs of Movement Planning and Execution during Adaptive Gait
Ellmers, Toby J.; Cocks, Adam J.; Doumas, Michail; Williams, A. Mark; Young, William R.
2016-01-01
We examined the effect of increased cognitive load on visual search behavior and measures of gait performance during locomotion. Also, we investigated how personality traits, specifically the propensity to consciously control or monitor movements (trait movement ‘reinvestment’), impacted the ability to maintain effective gaze under conditions of cognitive load. Healthy young adults traversed a novel adaptive walking path while performing a secondary serial subtraction task. Performance was assessed using correct responses to the cognitive task, gaze behavior, stepping accuracy, and time to complete the walking task. When walking while simultaneously carrying out the secondary serial subtraction task, participants visually fixated on task-irrelevant areas ‘outside’ the walking path more often and for longer durations of time, and fixated on task-relevant areas ‘inside’ the walkway for shorter durations. These changes were most pronounced in high-trait-reinvesters. We speculate that reinvestment-related processes placed an additional cognitive demand upon working memory. These increased task-irrelevant ‘outside’ fixations were accompanied by slower completion rates on the walking task and greater gross stepping errors. Findings suggest that attention is important for the maintenance of effective gaze behaviors, supporting previous claims that the maladaptive changes in visual search observed in high-risk older adults may be a consequence of inefficiencies in attentional processing. Identifying the underlying attentional processes that disrupt effective gaze behavior during locomotion is an essential step in the development of rehabilitation, with this information allowing for the emergence of interventions that reduce the risk of falling. PMID:27824937
Task-dependent individual differences in prefrontal connectivity.
Biswal, Bharat B; Eldreth, Dana A; Motes, Michael A; Rypma, Bart
2010-09-01
Recent advances in neuroimaging have permitted testing of hypotheses regarding the neural bases of individual differences, but this burgeoning literature has been characterized by inconsistent results. To test the hypothesis that differences in task demands could contribute to between-study variability in brain-behavior relationships, we had participants perform 2 tasks that varied in the extent of cognitive involvement. We examined connectivity between brain regions during a low-demand vigilance task and a higher-demand digit-symbol visual search task using Granger causality analysis (GCA). Our results showed 1) significant differences in the number of frontoparietal connections between low- and high-demand tasks, 2) that GCA can detect activity changes that correspond with task-demand changes, and 3) that faster participants showed more vigilance-related activity than slower participants, but less visual-search activity. These results suggest that relatively low-demand cognitive performance depends on spontaneous bidirectionally fluctuating network activity, whereas high-demand performance depends on a limited, unidirectional network. The nature of brain-behavior relationships may vary depending on the extent of cognitive demand. High-demand network activity may reflect the extent to which individuals require top-down executive guidance of behavior for successful task performance. Low-demand network activity may reflect task- and performance monitoring that minimizes executive requirements for guidance of behavior.
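The core logic of the Granger causality analysis (GCA) used here is: signal x "Granger-causes" signal y if adding the past of x improves prediction of y beyond y's own past, typically assessed by comparing restricted and full autoregressive models. The numpy-only sketch below implements that comparison on simulated time series; the lag order, coupling strength, and SSE-ratio statistic are illustrative assumptions, not the fMRI-specific GCA pipeline the study used.

```python
import numpy as np

def granger_improvement(y, x, lags=2):
    """How much adding the past of x improves an autoregressive prediction
    of y, as the ratio SSE(restricted) / SSE(full). Values near 1 mean no
    improvement; large values suggest x Granger-causes y."""
    rows = range(lags, len(y))
    target = y[lags:]
    # Restricted model: y predicted from its own past only.
    Xr = np.array([y[t - lags:t][::-1] for t in rows])
    # Full model: also include the past of x as predictors.
    Xf = np.array([np.r_[y[t - lags:t][::-1], x[t - lags:t][::-1]]
                   for t in rows])
    def sse(X):
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ beta
        return resid @ resid
    return sse(Xr) / sse(Xf)

# Simulated signals in which x drives y at a one-sample delay.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
y[1:] = 0.8 * x[:-1] + 0.1 * rng.standard_normal(499)

forward = granger_improvement(y, x)   # x -> y: large improvement
reverse = granger_improvement(x, y)   # y -> x: essentially none
print(forward > reverse)  # -> True
```

Counting how many region pairs show such directed improvements, per task, is the spirit of the frontoparietal connection counts reported in result 1).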
Karatekin, C; Asarnow, R F
1998-10-01
This study tested the hypotheses that visual search impairments in schizophrenia are due to a delay in initiation of search or a slow rate of serial search. We determined the specificity of these impairments by comparing children with schizophrenia to children with attention-deficit hyperactivity disorder (ADHD) and age-matched normal children. The hypotheses were tested within the framework of feature integration theory by administering tasks tapping parallel and serial search. Search rate was estimated from the slope of the search functions, and the duration of the initial stages of search from the time to make the first saccade on each trial. As expected, manual response times were elevated in both clinical groups. Contrary to expectation, ADHD, but not schizophrenic, children were delayed in initiation of serial search. Finally, both groups showed a clear dissociation between intact parallel search rates and slowed serial search rates.
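Estimating search rate "from the slope of the search functions", as in this study and in the set-size analyses of several other abstracts above, is a linear fit of mean response time against display set size: the slope is ms per item (near zero for parallel search, steep for serial search) and the intercept estimates stages preceding item-by-item search. A minimal sketch with hypothetical RT values:

```python
import numpy as np

# Hypothetical mean RTs (ms) at each display set size for two tasks.
set_sizes = np.array([4, 8, 16, 32])
rt_parallel = np.array([520, 524, 527, 531])  # flat: well under 1 ms/item
rt_serial = np.array([560, 660, 860, 1260])   # steep: 25 ms/item

# Search rate = slope of the best-fitting line (ms per item); the
# intercept estimates processes that do not depend on set size.
slope_p, intercept_p = np.polyfit(set_sizes, rt_parallel, 1)
slope_s, intercept_s = np.polyfit(set_sizes, rt_serial, 1)
print(round(slope_p, 1), round(slope_s, 1))  # -> 0.4 25.0
```

Slopes below roughly 10 ms/item are conventionally read as parallel (or highly efficient) search, steeper slopes as serial or inefficient search, which is the dissociation the study reports between the two search types.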
Spatial working memory interferes with explicit, but not probabilistic cuing of spatial attention.
Won, Bo-Yeong; Jiang, Yuhong V
2015-05-01
Recent empirical and theoretical work has depicted a close relationship between visual attention and visual working memory. For example, rehearsal in spatial working memory depends on spatial attention, whereas adding a secondary spatial working memory task impairs attentional deployment in visual search. These findings have led to the proposal that working memory is attention directed toward internal representations. Here, we show that the close relationship between these 2 constructs is limited to some but not all forms of spatial attention. In 5 experiments, participants held color arrays, dot locations, or a sequence of dots in working memory. During the memory retention interval, they performed a T-among-L visual search task. Crucially, the probable target location was cued either implicitly through location probability learning or explicitly with a central arrow or verbal instruction. Our results showed that whereas imposing a visual working memory load diminished the effectiveness of explicit cuing, it did not interfere with probability cuing. We conclude that spatial working memory shares similar mechanisms with explicit, goal-driven attention but is dissociated from implicitly learned attention. © 2015 APA, all rights reserved.
Stimulus homogeneity enhances implicit learning: evidence from contextual cueing.
Feldmann-Wüstefeld, Tobias; Schubö, Anna
2014-04-01
Visual search for a target object is faster if the target is embedded in a repeatedly presented invariant configuration of distractors ('contextual cueing'). It has also been shown that the homogeneity of a context affects the efficiency of visual search: targets receive prioritized processing when presented in a homogeneous context compared to a heterogeneous context, presumably due to grouping processes at early stages of visual processing. The present study investigated in three experiments whether context homogeneity also affects contextual cueing. In Experiment 1, context homogeneity varied on three levels of the task-relevant dimension (orientation) and contextual cueing was most pronounced for context configurations with high orientation homogeneity. When context homogeneity varied on three levels of the task-irrelevant dimension (color) and orientation homogeneity was fixed, no modulation of contextual cueing was observed: high orientation homogeneity led to large contextual cueing effects (Experiment 2) and low orientation homogeneity led to low contextual cueing effects (Experiment 3), irrespective of color homogeneity. Enhanced contextual cueing for homogeneous context configurations suggests that grouping processes do not only affect visual search but also implicit learning. We conclude that memory representations of context configurations are more easily acquired when context configurations can be processed as larger, grouped perceptual units. However, this form of implicit perceptual learning is only improved by stimulus homogeneity when stimulus homogeneity facilitates grouping processes on a dimension that is currently relevant in the task. Copyright © 2014 Elsevier B.V. All rights reserved.
Neurophysiological correlates of relatively enhanced local visual search in autistic adolescents.
Manjaly, Zina M; Bruning, Nicole; Neufang, Susanne; Stephan, Klaas E; Brieber, Sarah; Marshall, John C; Kamp-Becker, Inge; Remschmidt, Helmut; Herpertz-Dahlmann, Beate; Konrad, Kerstin; Fink, Gereon R
2007-03-01
Previous studies found normal or even superior performance of autistic patients on visuospatial tasks requiring local search, like the Embedded Figures Task (EFT). A well-known interpretation of this is "weak central coherence", i.e. autistic patients may show a reduced general ability to process information in its context and may therefore have a tendency to favour local over global aspects of information processing. An alternative view is that the local processing advantage in the EFT may result from a relative amplification of early perceptual processes which boosts processing of local stimulus properties but does not affect processing of global context. This study used functional magnetic resonance imaging (fMRI) in 12 autistic adolescents (9 Asperger and 3 high-functioning autistic patients) and 12 matched controls to help distinguish, on neurophysiological grounds, between these two accounts of EFT performance in autistic patients. Behaviourally, we found autistic individuals to be unimpaired during the EFT while they were significantly worse at performing a closely matched control task with minimal local search requirements. The fMRI results showed that activations specific for the local search aspects of the EFT were left-lateralised in parietal and premotor areas for the control group (as previously demonstrated for adults), whereas for the patients these activations were found in right primary visual cortex and bilateral extrastriate areas. These results suggest that enhanced local processing in early visual areas, as opposed to impaired processing of global context, is characteristic for performance of the EFT by autistic patients.
NASA Astrophysics Data System (ADS)
Keane, Tommy P.; Cahill, Nathan D.; Tarduno, John A.; Jacobs, Robert A.; Pelz, Jeff B.
2014-02-01
Mobile eye-tracking provides a rare opportunity to record and elucidate cognition in action. In our research, we are searching for patterns in, and distinctions between, the visual-search performance of experts and novices in the geo-sciences. Traveling to regions shaped by various geological processes as part of an introductory field studies course in geology, we record the gaze patterns of experts and novices when they are asked to determine the modes of geological activity that have formed the scene-view presented to them. Recording eye video and scene video in natural settings generates complex imagery that requires advanced applications of computer vision research to generate registrations and mappings between the views of separate observers. By developing such mappings, we can place many observers into a single mathematical space where we can spatio-temporally analyze inter- and intra-subject fixations, saccades, and head motions. While working towards perfecting these mappings, we developed an updated experiment setup that allowed us to statistically analyze intra-subject eye-movement events without the need for a common domain. Through such analyses we are finding statistical differences between novices and experts in these visual-search tasks. In the course of this research we have developed a unified, open-source software framework for processing, visualizing, and interacting with mobile eye-tracking data and high-resolution panoramic imagery.
Belkaid, Marwen; Cuperlier, Nicolas; Gaussier, Philippe
2017-01-01
Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call 'Emotional Metacontrol'. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot performance and fosters the exploratory behavior to avoid deadlocks.
WORDGRAPH: Keyword-in-Context Visualization for NETSPEAK's Wildcard Search.
Riehmann, Patrick; Gruendl, Henning; Potthast, Martin; Trenkmann, Martin; Stein, Benno; Froehlich, Benno
2012-09-01
The WORDGRAPH helps writers visually choose phrases while writing a text. It checks for the commonness of phrases and allows for the retrieval of alternatives by means of wildcard queries. To support such queries, we implement a scalable retrieval engine, which returns high-quality results within milliseconds using a probabilistic retrieval strategy. The results are displayed as a WORDGRAPH visualization or as a textual list. The graphical interface provides an effective means for interactive exploration of search results using filter techniques, query expansion, and navigation. Our observations indicate that, of three investigated retrieval tasks, the textual interface is sufficient for the phrase verification task, whereas both interfaces support context-sensitive word choice, and the WORDGRAPH best supports the exploration of a phrase's context or the underlying corpus. Our user study confirms these observations and shows that WORDGRAPH is generally the preferred interface over the textual result list for queries containing multiple wildcards.
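As a rough illustration of how wildcard phrase queries of this kind can be matched, the sketch below compiles a query into a regular expression. The query syntax ("?" for exactly one word, "*" for one or more words) and all names here are simplifying assumptions for illustration, not NETSPEAK's actual engine or API.

```python
import re

# Hedged sketch of wildcard phrase matching in the spirit of the queries
# described above. Simplifications: '?' matches exactly one word, '*'
# matches one or more words, and frequency-based ranking is omitted.

def wildcard_to_regex(query: str) -> re.Pattern:
    """Compile a space-separated wildcard query into an anchored regex."""
    parts = []
    for tok in query.split():
        if tok == "*":
            parts.append(r"\w+(?: \w+)*")  # one or more words
        elif tok == "?":
            parts.append(r"\w+")           # exactly one word
        else:
            parts.append(re.escape(tok))   # literal word
    return re.compile("^" + " ".join(parts) + "$")

phrases = ["waiting for response", "waiting for a response",
           "waiting for the response", "wait for response"]
pat = wildcard_to_regex("waiting for * response")
print([p for p in phrases if pat.match(p)])
# ['waiting for a response', 'waiting for the response']
```

A production engine would index n-grams and return candidates ranked by corpus frequency rather than scanning a phrase list linearly.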
Brockmole, James R; Boot, Walter R
2009-06-01
Distinctive aspects of a scene can capture attention even when they are irrelevant to one's goals. The authors address whether visually unique, unexpected, but task-irrelevant features also tend to hold attention. Observers searched through displays in which the color of each item was irrelevant. At the start of search, all objects changed color. Critically, the foveated item changed to an unexpected color (it was novel), became a color singleton (it was unique), or both. Saccade latency revealed the time required to disengage overt attention from this object. Singletons resulted in longer latencies, but only if they were unexpected. Conversely, unexpected items only delayed disengagement if they were singletons. Thus, the time spent overtly attending to an object is determined, at least in part, by task-irrelevant stimulus properties, but this depends on the confluence of expectation and visual salience. (c) 2009 APA, all rights reserved.
Neglect assessment as an application of virtual reality.
Broeren, J; Samuelsson, H; Stibrant-Sunnerhagen, K; Blomstrand, C; Rydmark, M
2007-09-01
In this study a cancellation task in a virtual environment was applied to describe the pattern of search and the kinematics of hand movements in eight patients with right hemisphere stroke. Four of these patients had visual neglect and four had recovered clinically from initial symptoms of neglect. The performance of the patients was compared with that of a control group consisting of eight subjects with no history of neurological deficits. Patients with neglect as well as patients clinically recovered from neglect showed aberrant search performance in the virtual reality (VR) task, such as mixed search pattern, repeated target pressures and deviating hand movements. The results indicate that in patients with a right hemispheric stroke, this VR application can provide an additional tool for assessment that can identify small variations otherwise not detectable with standard paper-and-pencil tests. VR technology seems to be well suited for the assessment of visually guided manual exploration in space.
Homonymous Visual Field Loss and Its Impact on Visual Exploration: A Supermarket Study
Kasneci, Enkelejda; Sippel, Katrin; Heister, Martin; Aehling, Katrin; Rosenstiel, Wolfgang; Schiefer, Ulrich; Papageorgiou, Elena
2014-01-01
Purpose: Homonymous visual field defects (HVFDs) may critically interfere with quality of life. The aim of this study was to assess the impact of HVFDs on a supermarket search task and to investigate the influence of visual search on task performance. Methods: Ten patients with HVFDs (four with a right-sided [HR] and six with a left-sided defect [HL]) and 10 healthy-sighted, sex- and age-matched control subjects were asked to collect 20 products placed on two supermarket shelves as quickly as possible. Task performance was rated as “passed” or “failed” with regard to the time per correctly collected item (TC; failure threshold = 4.84 seconds, based on the performance of healthy subjects). Eye movements were analyzed with regard to horizontal gaze activity, glance frequency, and glance proportion for different VF areas. Results: Seven of 10 HVFD patients (three HR, four HL) passed the supermarket search task. Patients who passed needed significantly less time per correctly collected item and looked more frequently toward the VFD area than patients who failed. HL patients who passed the test showed a higher percentage of glances beyond the 60° VF (P < 0.05). Conclusion: A considerable number of HVFD patients performed successfully and could compensate for the HVFD by shifting their gaze toward the peripheral VF and the VFD area. Translational Relevance: These findings provide new insights into gaze adaptations in patients with HVFDs during activities of daily living and will enhance the design and development of realistic examination tools for use in the clinical setting to improve daily functioning. (http://www.clinicaltrials.gov, NCT01372319, NCT01372332) PMID:25374771
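The pass/fail criterion used in the study above (time per correctly collected item against a 4.84-second threshold) is simple to make concrete. The sketch below is an illustration only; the function and variable names are assumptions, not the study's code.

```python
# Illustrative sketch of the pass/fail criterion described above:
# time per correctly collected item compared against a fixed threshold.
# Names are assumptions, not from the study.

THRESHOLD_S = 4.84  # seconds per correctly collected item (from the abstract)

def task_outcome(total_time_s: float, items_correct: int) -> str:
    """Rate performance as 'passed' or 'failed' by time per correct item."""
    if items_correct == 0:
        return "failed"
    time_per_item = total_time_s / items_correct
    return "passed" if time_per_item <= THRESHOLD_S else "failed"

# Example: 20 products collected in 90 s is 4.5 s/item, under threshold.
print(task_outcome(90.0, 20))   # passed
print(task_outcome(120.0, 20))  # failed (6.0 s/item)
```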
Pasqualotti, Léa; Baccino, Thierry
2014-01-01
Most studies of online advertisements have indicated that they have a negative impact on users' cognitive processes, especially when they include colorful or animated banners and when they are close to the text to be read. In the present study we assessed the effects of two advertisement features, distance from the text and animation, on visual strategies during a word-search task and a reading-for-comprehension task using Web-like pages. We hypothesized that the closer the advertisement was to the target text, the more cognitive processing difficulties it would cause. We also hypothesized that (1) animated banners would be more disruptive than static advertisements and (2) banners would have more effect on word-search performance than on reading-for-comprehension performance. We used an automatic classifier to assess variations in the use of Scanning and Reading visual strategies during task performance. The results showed that the effect of dynamic and static advertisements on visual strategies varies according to the task. Fixation duration indicated that the closest advertisements slowed down information processing, but there was no difference between the intermediate (40 pixel) and far (80 pixel) distance conditions. Our findings suggest that advertisements have a negative impact on users' performance mostly when a lot of cognitive resources is required, as in reading-for-comprehension. PMID:24672501
Rosa, Pedro J; Gamito, Pedro; Oliveira, Jorge; Morais, Diogo; Pavlovic, Matthew; Smyth, Olivia; Maia, Inês; Gomes, Tiago
2017-03-23
An adequate behavioral response depends on attentional and mnesic processes. When these basic cognitive functions are impaired, the use of non-immersive Virtual Reality Applications (VRAs) can be a reliable technique for assessing the level of impairment. However, most non-immersive VRAs use indirect measures to make inferences about visual attention and mnesic processes (e.g., time to task completion, error rate). The aim of this study was to examine whether eye movement analysis through eye tracking (ET) can be a reliable method to probe more effectively where and how attention is deployed, and how it is linked with visual working memory, during comparative visual search tasks (CVSTs) in non-immersive VRAs. The eye movements of 50 healthy participants were continuously recorded while CVSTs, selected from a set of cognitive tasks in the Systemic Lisbon Battery (SLB), a VRA designed to assess cognitive impairments, were randomly presented. The total fixation duration, the number of visits to the areas of interest and to the interstimulus space, and the total execution time differed significantly as a function of Mini Mental State Examination (MMSE) scores. The present study demonstrates that CVSTs in the SLB, when combined with ET, can be a reliable and unobtrusive method for assessing cognitive abilities in healthy individuals, opening it to potential use in clinical samples.
Gomarus, H Karin; Althaus, Monika; Wijers, Albertus A; Minderaa, Ruud B
2006-04-01
Psychophysiological correlates of selective attention and working memory were investigated in a group of 18 healthy children using a visually presented selective memory search task. Subjects had to memorize one (load1) or 3 (load3) letters (memory set) and search for these among a recognition set consisting of 4 letters only if the letters appeared in the correct (relevant) color. Event-related potentials (ERPs) as well as alpha and theta event-related synchronization and desynchronization (ERD/ERS) were derived from the EEG that was recorded during the task. In the ERP to the memory set, a prolonged load-related positivity was found. In response to the recognition set, effects of relevance were manifested in an early frontal positivity and a later frontal negativity. Effects of load were found in a search-related negativity within the attended category and a suppression of the P3-amplitude. Theta ERS was most pronounced for the most difficult task condition during the recognition set, whereas alpha ERD showed a load-effect only during memorization. The manipulation of stimulus relevance and memory load affected both ERP components and ERD/ERS. The present paradigm may supply a useful method for studying processes of selective attention and working memory and can be used to examine group differences between healthy controls and children showing psychopathology.
Effects of age and eccentricity on visual target detection.
Gruber, Nicole; Müri, René M; Mosimann, Urs P; Bieri, Rahel; Aeschimann, Andrea; Zito, Giuseppe A; Urwyler, Prabitha; Nyffeler, Thomas; Nef, Tobias
2013-01-01
The aim of this study was to examine the effects of aging and target eccentricity on a visual search task comprising 30 images of everyday life projected into a hemisphere, realizing a ±90° visual field. The task, performed binocularly, allowed participants to freely move their eyes to scan images for an appearing target or distractor stimulus (presented at 10°, 30°, and 50° eccentricity). The distractor stimulus required no response, while the target stimulus required acknowledgment by pressing the response button. One hundred and seventeen healthy subjects (mean age = 49.63 years, SD = 17.40 years, age range 20-78 years) were studied. The results show that target detection performance decreases with age as well as with increasing eccentricity, especially for older subjects. Reaction time also increases with age and eccentricity, but in contrast to target detection, there is no interaction between age and eccentricity. Eye movement analysis showed that younger subjects exhibited a passive search strategy while older subjects exhibited an active search strategy, probably as a compensation for their reduced peripheral detection performance.
Visual skills in airport-security screening.
McCarley, Jason S; Kramer, Arthur F; Wickens, Christopher D; Vidoni, Eric D; Boot, Walter R
2004-05-01
An experiment examined visual performance in a simulated luggage-screening task. Observers participated in five sessions of a task requiring them to search for knives hidden in x-ray images of cluttered bags. Sensitivity and response times improved reliably as a result of practice. Eye movement data revealed that sensitivity increases were produced entirely by changes in observers' ability to recognize target objects, and not by changes in the effectiveness of visual scanning. Moreover, recognition skills were in part stimulus-specific, such that performance was degraded by the introduction of unfamiliar target objects. Implications for screener training are discussed.
Simulating the Role of Visual Selective Attention during the Development of Perceptual Completion
ERIC Educational Resources Information Center
Schlesinger, Matthew; Amso, Dima; Johnson, Scott P.
2012-01-01
We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of…
Attentional Allocation of Autism Spectrum Disorder Individuals: Searching for a Face-in-the-Crowd
ERIC Educational Resources Information Center
Moore, David J.; Reidy, John; Heavey, Lisa
2016-01-01
A study is reported which tests the proposition that faces capture the attention of those with autism spectrum disorders less than a typical population. A visual search task based on the Face-in-the-Crowd paradigm was used to examine the attentional allocation of autism spectrum disorder adults for faces. Participants were required to search for…
Sound effects: Multimodal input helps infants find displaced objects.
Shinskey, Jeanne L
2017-09-01
Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting auditory input is more salient in the absence of visual input. This article addresses how audiovisual input affects 10-month-olds' search for displaced objects. In AB tasks, infants who previously retrieved an object at A subsequently fail to find it after it is displaced to B, especially following a delay between hiding and retrieval. Experiment 1 manipulated auditory input by keeping the hidden object audible versus silent, and visual input by presenting the delay in the light versus dark. Infants succeeded more at B with audible than silent objects and, unexpectedly, more after delays in the light than dark. Experiment 2 presented both the delay and search phases in darkness. The unexpected light-dark difference disappeared. Across experiments, the presence of auditory input helped infants find displaced objects, whereas the absence of visual input did not. Sound might help by strengthening object representation, reducing memory load, or focusing attention. This work provides new evidence on when bimodal input aids object processing, corroborates claims that audiovisual processing improves over the first year of life, and contributes to multisensory approaches to studying cognition. Statement of contribution What is already known on this subject Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion. This suggests they find auditory input more salient in the absence of visual input in simple search tasks. After 9 months, infants' object processing appears more sensitive to multimodal (e.g., audiovisual) input. What does this study add? This study tested how audiovisual input affects 10-month-olds' search for an object displaced in an AB task. Sound helped infants find displaced objects in both the presence and absence of visual input. 
Object processing becomes more sensitive to bimodal input as multisensory functions develop across the first year. © 2016 The British Psychological Society.
Impact of Glaucoma and Dry Eye on Text-Based Searching
Sun, Michelle J.; Rubin, Gary S.; Akpek, Esen K.; Ramulu, Pradeep Y.
2017-01-01
Purpose: We determine if visual field loss from glaucoma and/or measures of dry eye severity are associated with difficulty searching, as judged by slower search times on a text-based search task. Methods: Glaucoma patients with bilateral visual field (VF) loss, patients with clinically significant dry eye, and normally-sighted controls were enrolled from the Wilmer Eye Institute clinics. Subjects searched three Yellow Pages excerpts for a specific phone number, and search time was recorded. Results: A total of 50 glaucoma subjects, 40 dry eye subjects, and 45 controls completed study procedures. On average, glaucoma patients exhibited 57% longer search times compared to controls (95% confidence interval [CI], 26%–96%, P < 0.001), and longer search times were noted among subjects with greater VF loss (P < 0.001), worse contrast sensitivity (P < 0.001), and worse visual acuity (P = 0.026). Dry eye subjects demonstrated similar search times compared to controls, though worse Ocular Surface Disease Index (OSDI) vision-related subscores were associated with longer search times (P < 0.01). Search times showed no association with OSDI symptom subscores (P = 0.20) or objective measures of dry eye (P > 0.08 for Schirmer's testing without anesthesia, corneal fluorescein staining, and tear film breakup time). Conclusions: Text-based visual search is slower for glaucoma patients with greater levels of VF loss and dry eye patients with greater self-reported visual difficulty, and these difficulties may contribute to decreased quality of life in these groups. Translational Relevance: Visual search is impaired in glaucoma and dry eye groups compared to controls, highlighting the need for compensatory strategies and tools to assist individuals in overcoming their deficiencies. PMID:28670502
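Estimates phrased as "57% longer search times (95% CI, 26%-96%)" typically come from a regression on log-transformed times, where a group coefficient beta back-transforms to a percentage as exp(beta) - 1. The sketch below reverse-engineers plausible coefficient values from the numbers in the abstract purely for illustration; it is not the study's actual model output.

```python
import math

# Hedged sketch: how a "57% longer (95% CI 26%-96%)" estimate can arise
# from a regression on log-transformed search times. The coefficient and
# standard error below are chosen to reproduce the abstract's numbers and
# are NOT taken from the study.

beta = math.log(1.57)  # group coefficient on log(search time)
se = (math.log(1.96) - math.log(1.26)) / (2 * 1.96)  # implied std. error

# Back-transform the point estimate and the 95% CI to percent increases.
pct_longer = (math.exp(beta) - 1) * 100
ci_low = (math.exp(beta - 1.96 * se) - 1) * 100
ci_high = (math.exp(beta + 1.96 * se) - 1) * 100
print(f"{pct_longer:.0f}% longer (95% CI {ci_low:.0f}%-{ci_high:.0f}%)")
# 57% longer (95% CI 26%-96%)
```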
Anodal tDCS applied during multitasking training leads to transferable performance gains.
Filmer, Hannah L; Lyons, Maxwell; Mattingley, Jason B; Dux, Paul E
2017-10-11
Cognitive training can lead to performance improvements that are specific to the tasks trained. Recent research has suggested that transcranial direct current stimulation (tDCS) applied during training of a simple response-selection paradigm can broaden performance benefits to an untrained task. Here we assessed the impact of combined tDCS and training on multitasking, stimulus-response mapping specificity, response-inhibition, and spatial attention performance in a cohort of healthy adults. Participants trained over four days with concurrent tDCS - anodal, cathodal, or sham - applied to the left prefrontal cortex. Immediately prior to, 1 day after, and 2 weeks after training, performance was assessed on the trained multitasking paradigm, an untrained multitasking paradigm, a go/no-go inhibition task, and a visual search task. Training combined with anodal tDCS, compared with training plus cathodal or sham stimulation, enhanced performance for the untrained multitasking paradigm and visual search tasks. By contrast, there were no training benefits for the go/no-go task. Our findings demonstrate that anodal tDCS combined with multitasking training can extend to untrained multitasking paradigms as well as spatial attention, but with no extension to the domain of response inhibition.
Gaspelin, Nicholas; Ruthruff, Eric; Lien, Mei-Ching
2016-08-01
Researchers are sharply divided regarding whether irrelevant abrupt onsets capture spatial attention. Numerous studies report that they do and a roughly equal number report that they do not. This puzzle has inspired numerous attempts at reconciliation, none gaining general acceptance. The authors propose that abrupt onsets routinely capture attention, but the size of observed capture effects depends critically on how long attention dwells on distractor items which, in turn, depends critically on search difficulty. In a series of spatial cuing experiments, the authors show that irrelevant abrupt onsets produce robust capture effects when visual search is difficult, but not when search is easy. Critically, this effect occurs even when search difficulty varies randomly across trials, preventing any strategic adjustments of the attentional set that could modulate probability of capture by the onset cue. The authors argue that easy visual search provides an insensitive test for stimulus-driven capture by abrupt onsets: even though onsets truly capture attention, the effects of capture can be latent. This observation helps to explain previous failures to find capture by onsets, nearly all of which used an easy visual search. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Reduced modulation of scanpaths in response to task demands in posterior cortical atrophy.
Shakespeare, Timothy J; Pertzov, Yoni; Yong, Keir X X; Nicholas, Jennifer; Crutch, Sebastian J
2015-02-01
A difficulty in perceiving visual scenes is one of the most striking impairments experienced by patients with the clinico-radiological syndrome posterior cortical atrophy (PCA). However, whilst a number of studies have investigated perception of relatively simple experimental stimuli in these individuals, little is known about multiple-object and complex scene perception and the role of eye movements in posterior cortical atrophy. We embrace the distinction between high-level (top-down) and low-level (bottom-up) influences upon scanning eye movements when looking at scenes. This distinction was inspired by Yarbus (1967), who demonstrated how the location of our fixations is affected by task instructions and not only by the stimulus's low-level properties. We therefore examined how scanning patterns are influenced by task instructions and low-level visual properties in 7 patients with posterior cortical atrophy, 8 patients with typical Alzheimer's disease, and 19 healthy age-matched controls. Each participant viewed 10 scenes under four task conditions (encoding, recognition, search and description) whilst eye movements were recorded. The results reveal significant differences between groups in the impact of test instructions upon scanpaths. Across tasks without a search component, posterior cortical atrophy patients were significantly less consistent than typical Alzheimer's disease patients and controls in where they were looking. By contrast, when comparing search and non-search tasks, it was controls who exhibited the lowest between-task similarity ratings, suggesting they were better able than posterior cortical atrophy or typical Alzheimer's disease patients to respond appropriately to high-level needs by looking at task-relevant regions of a scene. Posterior cortical atrophy patients had a significant tendency to fixate upon more low-level salient parts of the scenes than controls, irrespective of the viewing task. 
The study provides a detailed characterisation of scene perception abilities in posterior cortical atrophy and offers insights into the mechanisms by which high-level cognitive schemes interact with low-level perception. Copyright © 2015 Elsevier Ltd. All rights reserved.
Persistence of Value-Driven Attentional Capture
ERIC Educational Resources Information Center
Anderson, Brian A.; Yantis, Steven
2013-01-01
Stimuli that have previously been associated with the delivery of reward involuntarily capture attention when presented as unrewarded and task-irrelevant distractors in a subsequent visual search task. It is unknown how long such effects of reward learning on attention persist. One possibility is that value-driven attentional biases are plastic…
Automatic guidance of attention during real-world visual search
Seidl-Rathkopf, Katharina N.; Turk-Browne, Nicholas B.; Kastner, Sabine
2015-01-01
Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, as the features, locations, and times of appearance of relevant objects are often not known in advance. A mechanism by which attention is automatically biased toward information that is potentially relevant may thus be helpful. Here we tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of non-matching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty. PMID:25898897
Murthy, Aditya; Ray, Supriya; Shorter, Stephanie M; Schall, Jeffrey D; Thompson, Kirk G
2009-05-01
The dynamics of visual selection and saccade preparation by the frontal eye field were investigated in macaque monkeys performing a search-step task combining the classic double-step saccade task with visual search. Reward was earned for producing a saccade to a color singleton. On random trials the target and one distractor swapped locations before the saccade, and monkeys were rewarded for shifting gaze to the new singleton location. A race model accounts for the probabilities and latencies of saccades to the initial and final singleton locations and provides a measure of the duration of a covert compensation process: the target-step reaction time. When the target stepped out of a movement field, noncompensated saccades to the original location were produced when movement-related activity grew rapidly to a threshold. Compensated saccades to the final location were produced when the growth of the original movement-related activity was interrupted within the target-step reaction time and was replaced by activation of other neurons producing the compensated saccade. When the target stepped into a receptive field, visual neurons selected the new target location regardless of the monkeys' response. When the target stepped out of a receptive field, most visual neurons maintained the representation of the original target location, but a minority of visual neurons showed reduced activity. Chronometric analyses of the neural responses to the target step revealed that the modulation of visually responsive neurons and movement-related neurons occurred early enough to shift attention and saccade preparation from the old to the new target location. These findings indicate that visual activity in the frontal eye field signals the location of targets for orienting, whereas movement-related activity instantiates saccade preparation.
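The race-model account above (a saccade to the original target is executed unless its movement-related activity is interrupted within the target-step reaction time, TSRT) can be sketched as a small simulation. All parameter values below (mean finish time, its variability, the TSRT) are illustrative assumptions, not values fitted to the monkey data.

```python
import random

# Minimal race-model sketch in the spirit of the account above: a saccade
# to the original target is executed ("noncompensated") unless its
# accumulator can be interrupted within the target-step reaction time.
# All parameters are illustrative assumptions, not fitted values.

random.seed(1)

def trial(step_delay_ms: float, tsrt_ms: float) -> str:
    # Time for movement-related activity to reach threshold (noisy).
    finish = random.gauss(mu=200.0, sigma=40.0)
    # If the original saccade finishes before the step can be compensated
    # (step onset + TSRT), it is executed: a noncompensated saccade.
    if finish < step_delay_ms + tsrt_ms:
        return "noncompensated"
    return "compensated"

n = 10_000
p_noncomp = sum(trial(step_delay_ms=100.0, tsrt_ms=90.0) == "noncompensated"
                for _ in range(n)) / n
# Later target steps leave less time to compensate, so the probability of
# a noncompensated saccade grows with step delay (the compensation function).
print(round(p_noncomp, 2))
```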
Attention induced neural response trade-off in retinotopic cortex under load.
Torralbo, Ana; Kelley, Todd A; Rees, Geraint; Lavie, Nilli
2016-09-14
The effects of perceptual load on visual cortex response to distractors are well established, and various phenomena of 'inattentional blindness' associated with elimination of visual cortex response to unattended distractors have been documented in tasks of high load. Here we tested an account for these effects in terms of a load-induced trade-off between target and distractor processing in retinotopic visual cortex. Participants were scanned using fMRI while performing a visual-search task and ignoring distractor checkerboards in the periphery. Retinotopic responses to target and distractors were assessed as a function of search load (comparing search set-sizes two, three and five). We found that increased load not only increased activity in the frontoparietal network, but also had opposite effects on retinotopic responses to target and distractors. Target-related signals in areas V2-V3 linearly increased, while distractor response linearly decreased, with increased load. Critically, the slopes were equivalent for both load functions, thus demonstrating resource trade-off. Load effects were also found in displays with the same item number in the distractor hemisphere across different set sizes, thus ruling out local intrahemispheric interactions as the cause. Our findings provide new evidence for load theory proposals of attention resource sharing between target and distractor leading to inattentional blindness.
Implicit short- and long-term memory direct our gaze in visual search.
Kruijne, Wouter; Meeter, Martijn
2016-04-01
Visual attention is strongly affected by the past: both by recent experience and by long-term regularities in the environment that are encoded in and retrieved from memory. In visual search, intertrial repetition of targets causes speeded response times (short-term priming). Similarly, targets that are presented more often than others may facilitate search, even long after that frequency bias is no longer present (long-term priming). In this study, we investigate whether such short-term priming and long-term priming depend on dissociable mechanisms. By recording eye movements while participants searched for one of two conjunction targets, we explored at what stages of visual search different forms of priming manifest. We found both long- and short-term priming effects. Long-term priming persisted long after the bias had been removed, and was again found even in participants who were unaware of a color bias. Short- and long-term priming affected the same stage of the task; both biased eye movements towards targets with the primed color, already starting with the first eye movement. Neither form of priming affected the response phase of a trial, but response repetition did. The results strongly suggest that both long- and short-term memory can implicitly modulate feedforward visual processing.
Grubert, Anna; Eimer, Martin
2016-08-01
To study whether top-down attentional control processes can be set simultaneously for different visual features, we employed a spatial cueing procedure to measure behavioral and electrophysiological markers of task-set contingent attentional capture during search for targets defined by 1 or 2 possible colors (one-color and two-color tasks). Search arrays were preceded by spatially nonpredictive color singleton cues. Behavioral spatial cueing effects indicative of attentional capture were elicited only by target-matching but not by distractor-color cues. However, when search displays contained 1 target-color and 1 distractor-color object among gray nontargets, N2pc components were triggered not only by target-color but also by distractor-color cues both in the one-color and two-color task, demonstrating that task-set nonmatching items attracted attention. When search displays contained 6 items in 6 different colors, so that participants had to adopt a fully feature-specific task set, the N2pc to distractor-color cues was eliminated in both tasks, indicating that nonmatching items were now successfully excluded from attentional processing. These results demonstrate that when observers adopt a feature-specific search mode, attentional task sets can be configured flexibly for multiple features within the same dimension, resulting in the rapid allocation of attention to task-set matching objects only. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Murray, Nicholas P; Hunfalvay, Melissa
2017-02-01
Considerable research has documented that successful performance in interceptive tasks (such as return of serve in tennis) is based on the performers' capability to capture appropriate anticipatory information prior to the flight path of the approaching object. Athletes of higher skill tend to fixate on different locations in the playing environment prior to initiation of a skill than their lesser skilled counterparts. The purpose of this study was to examine visual search behaviour strategies of elite (world ranked) tennis players and non-ranked competitive tennis players (n = 43) utilising cluster analysis. The results of hierarchical (Ward's method) and nonhierarchical (k means) cluster analyses revealed three different clusters. The clustering method distinguished visual behaviour of high-, middle- and low-ranked players. Specifically, high-ranked players demonstrated longer mean fixation duration and lower variation of visual search than middle- and low-ranked players. In conclusion, the results demonstrated that cluster analysis is a useful tool for detecting and analysing the areas of interest for use in experimental analysis of expertise and to distinguish visual search variables among participants.
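The nonhierarchical (k-means) step of such an analysis can be sketched in a few lines. The code below is a minimal pure-Python Lloyd's algorithm over two hypothetical gaze features (mean fixation duration and scan-path variability); the data, feature scaling, and group labels are invented for illustration, not the study's measurements.

```python
import random

def kmeans(points, k=3, iters=50, seed=0):
    """Lloyd's algorithm on 2-D gaze features:
    (mean fixation duration in ms, scan-path variability)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centers[i][0]) ** 2
                                      + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        # recompute each center as the mean of its cluster (keep old if empty)
        centers = [(sum(p[0] for p in c) / len(c),
                    sum(p[1] for p in c) / len(c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical gaze profiles for three skill groups
rng = random.Random(7)
gaze = ([(rng.gauss(400, 15), rng.gauss(1.0, 0.1)) for _ in range(15)]    # "high-ranked"
      + [(rng.gauss(260, 15), rng.gauss(2.0, 0.1)) for _ in range(15)]    # "middle"
      + [(rng.gauss(150, 15), rng.gauss(3.0, 0.1)) for _ in range(15)])   # "low-ranked"
centers, clusters = kmeans(gaze, k=3)
```

In practice Ward's hierarchical clustering is often run first to choose k, with k-means then refining the partition, which matches the two-step procedure the abstract describes.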
Neuronal basis of covert spatial attention in the frontal eye field.
Thompson, Kirk G; Biscoe, Keri L; Sato, Takashi R
2005-10-12
The influential "premotor theory of attention" proposes that developing oculomotor commands mediate covert visual spatial attention. A likely source of this attentional bias is the frontal eye field (FEF), an area of the frontal cortex involved in converting visual information into saccade commands. We investigated the link between FEF activity and covert spatial attention by recording from FEF visual and saccade-related neurons in monkeys performing covert visual search tasks without eye movements. Here we show that the source of attention signals in the FEF is enhanced activity of visually responsive neurons. At the time attention is allocated to the visual search target, nonvisually responsive saccade-related movement neurons are inhibited. Therefore, in the FEF, spatial attention signals are independent of explicit saccade command signals. We propose that spatially selective activity in FEF visually responsive neurons corresponds to the mental spotlight of attention via modulation of ongoing visual processing.
The Role of Search Speed in the Contextual Cueing of Children's Attention.
Darby, Kevin; Burling, Joseph; Yoshida, Hanako
2014-01-01
The contextual cueing effect is a robust phenomenon in which repeated exposure to the same arrangement of random elements guides attention to relevant information by constraining search. The effect is measured using an object search task in which a target (e.g., the letter T) is located within repeated or nonrepeated visual contexts (e.g., configurations of the letter L). Decreasing response times for the repeated configurations indicates that contextual information has facilitated search. Although the effect is robust among adult participants, recent attempts to document the effect in children have yielded mixed results. We examined the effect of search speed on contextual cueing with school-aged children, comparing three types of stimuli that promote different search times in order to observe how speed modulates this effect. Reliable effects of search time were found, suggesting that visual search speed uniquely constrains the role of attention toward contextually cued information.
A new cue to figure-ground coding: top-bottom polarity.
Hulleman, Johan; Humphreys, Glyn W
2004-11-01
We present evidence for a new figure-ground cue: top-bottom polarity. In an explicit reporting task, participants were more likely to interpret stimuli with a wide base and a narrow top as a figure. A similar advantage for wide-based stimuli also occurred in a visual short-term memory task, where the stimuli had ambiguous figure-ground relations. Further support comes from a figural search task. Figural search is a discrimination task in which participants are set to search for a symmetric target in a display with ambiguous figure-ground organization. We show that figural search was easier when stimuli with a top-bottom polarity were placed in an orientation where they had a wide base and a narrow top, relative to when this orientation was inverted. This polarity effect was present when participants were set to use color to parse figure from ground, and it was magnified when the participants did not have any foreknowledge of the color of the symmetric target. Taken together the results suggest that top-bottom polarity influences figure-ground assignment, with wide base stimuli being preferred as a figure. In addition, the figural search task can serve as a useful procedure to examine figure-ground assignment.
Neural field model of memory-guided search.
Kilpatrick, Zachary P; Poll, Daniel B
2017-12-01
Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing redundancies in the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.
Search performance is better predicted by tileability than presence of a unique basic feature.
Chang, Honghua; Rosenholtz, Ruth
2016-08-01
Traditional models of visual search such as feature integration theory (FIT; Treisman & Gelade, 1980), have suggested that a key factor determining task difficulty consists of whether or not the search target contains a "basic feature" not found in the other display items (distractors). Here we discriminate between such traditional models and our recent texture tiling model (TTM) of search (Rosenholtz, Huang, Raj, Balas, & Ilie, 2012b), by designing new experiments that directly pit these models against each other. Doing so is nontrivial, for two reasons. First, the visual representation in TTM is fully specified, and makes clear testable predictions, but its complexity makes getting intuitions difficult. Here we elucidate a rule of thumb for TTM, which enables us to easily design new and interesting search experiments. FIT, on the other hand, is somewhat ill-defined and hard to pin down. To get around this, rather than designing totally new search experiments, we start with five classic experiments that FIT already claims to explain: T among Ls, 2 among 5s, Q among Os, O among Qs, and an orientation/luminance-contrast conjunction search. We find that fairly subtle changes in these search tasks lead to significant changes in performance, in a direction predicted by TTM, providing definitive evidence in favor of the texture tiling model as opposed to traditional views of search.
ERIC Educational Resources Information Center
Roberson, Debi; Pak, Hyensou; Hanley, J. Richard
2008-01-01
In this study we demonstrate that Korean (but not English) speakers show Categorical perception (CP) on a visual search task for a boundary between two Korean colour categories that is not marked in English. These effects were observed regardless of whether target items were presented to the left or right visual field. Because this boundary is…
Zhang, Mingsha; Wang, Xiaolan; Goldberg, Michael E.
2014-01-01
We recorded the activity of neurons in the lateral intraparietal area of two monkeys while they performed two similar visual search tasks, one difficult, one easy. Each task began with a period of fixation followed by an array consisting of a single capital T and a number of lowercase t’s. The monkey had to find the capital T and report its orientation, upright or inverted, with a hand movement. In the easy task the monkey could explore the array with saccades. In the difficult task the monkey had to continue fixating and find the capital T in the visual periphery. The baseline activity measured during the fixation period, at a time in which the monkey could not know if the impending task would be difficult or easy or where the target would appear, predicted the monkey’s probability of success or failure on the task. The baseline activity correlated inversely with the monkey's recent history of success and directly with the intensity of the response to the search array on the current trial. The baseline activity was unrelated to the monkey’s spatial locus of attention as determined by the location of the cue in a cued visual reaction time task. We suggest that rather than merely reflecting the noise in the system, the baseline signal reflects the cortical manifestation of modulatory state, motivational, or arousal pathways, which determine the efficiency of cortical sensorimotor processing and the quality of the monkey’s performance. PMID:24889623
Postural Change Effects on Infants' AB Task Performance: Visual, Postural, or Spatial?
ERIC Educational Resources Information Center
Lew, Adina R.; Hopkins, Brian; Owen, Laura H.; Green, Michael
2007-01-01
Smith and colleagues (Smith, L. B., Thelen, E., Titzer, R., & McLin, D. (1999). Knowing in the context of acting: The task dynamics of the A-not-B error. "Psychological Review, 106," 235-260) demonstrated that 10-month-olds succeed on a Piagetian AB search task if they are moved from a sitting position to a standing position between A and B…
Training eye movements for visual search in individuals with macular degeneration
Janssen, Christian P.; Verghese, Preeti
2016-01-01
We report a method to train individuals with central field loss due to macular degeneration to improve the efficiency of visual search. Our method requires participants to make a same/different judgment on two simple silhouettes. One silhouette is presented in an area that falls within the binocular scotoma while they are fixating the center of the screen with their preferred retinal locus (PRL); the other silhouette is presented diametrically opposite within the intact visual field. Over the course of 480 trials (approximately 6 hr), we gradually reduced the amount of time that participants have to make a saccade and judge the similarity of stimuli. This requires that they direct their PRL first toward the stimulus that is initially hidden behind the scotoma. Results from nine participants show that all participants could complete the task faster with training without sacrificing accuracy on the same/different judgment task. Although a majority of participants were able to direct their PRL toward the initially hidden stimulus, the ability to do so varied between participants. Specifically, six of nine participants made faster saccades with training. A smaller set (four of nine) made accurate saccades inside or close to the target area and retained this strategy 2 to 3 months after training. Subjective reports suggest that training increased awareness of the scotoma location for some individuals. However, training did not transfer to a different visual search task. Nevertheless, our study suggests that increasing scotoma awareness and training participants to look toward their scotoma may help them acquire missing information. PMID:28027382
Selection history alters attentional filter settings persistently and beyond top-down control.
Kadel, Hanna; Feldmann-Wüstefeld, Tobias; Schubö, Anna
2017-05-01
Visual selective attention is known to be guided by stimulus-based (bottom-up) and goal-oriented (top-down) control mechanisms. Recent work has pointed out that selection history (i.e., the bias to prioritize items that have been previously attended) can result in a learning experience that also has a substantial impact on subsequent attention guidance. The present study examined to what extent goal-oriented top-down control mechanisms interact with an observer's individual selection history in guiding attention. Selection history was manipulated in a categorization task in a between-subjects design, where participants learned that either color or shape was the response-relevant dimension. The impact of this experience was assessed in a compound visual search task with an additional color distractor. Top-down preparation for each search trial was enabled by a pretrial task cue (Experiment 1) or a fixed, predictable trial sequence (Experiment 2). Reaction times and ERPs served as indicators of attention deployment. Results showed that attention was captured by the color distractor when participants had learned that color predicted the correct response in the categorization learning task, suggesting that a bias for predictive stimulus features had developed. The possibility to prepare for the search task reduced the bias, but could not entirely overrule this selection history effect. In Experiment 3, both tasks were performed in separate sessions, and the bias still persisted. These results indicate that selection history considerably shapes selective attention and continues to do so persistently even when the task allowed for high top-down control. © 2017 Society for Psychophysiological Research.
McElree, Brian; Carrasco, Marisa
2012-01-01
Feature and conjunction searches have been argued to delineate parallel and serial operations in visual processing. The authors evaluated this claim by examining the temporal dynamics of the detection of features and conjunctions. The 1st experiment used a reaction time (RT) task to replicate standard mean RT patterns and to examine the shapes of the RT distributions. The 2nd experiment used the response-signal speed–accuracy trade-off (SAT) procedure to measure discrimination (asymptotic detection accuracy) and detection speed (processing dynamics). Set size affected discrimination in both feature and conjunction searches but affected detection speed only in the latter. Fits of models to the SAT data that included a serial component overpredicted the magnitude of the observed dynamics differences. The authors concluded that both features and conjunctions are detected in parallel. Implications for the role of attention in visual processing are discussed. PMID:10641310
Electrophysiological evidence for parallel and serial processing during visual search.
Luck, S J; Hillyard, S A
1990-12-01
Event-related potentials were recorded from young adults during a visual search task in order to evaluate parallel and serial models of visual processing in the context of Treisman's feature integration theory. Parallel and serial search strategies were produced by the use of feature-present and feature-absent targets, respectively. In the feature-absent condition, the slopes of the functions relating reaction time and latency of the P3 component to set size were essentially identical, indicating that the longer reaction times observed for larger set sizes can be accounted for solely by changes in stimulus identification and classification time, rather than changes in post-perceptual processing stages. In addition, the amplitude of the P3 wave on target-present trials in this condition increased with set size and was greater when the preceding trial contained a target, whereas P3 activity was minimal on target-absent trials. These effects are consistent with the serial self-terminating search model and appear to contradict parallel processing accounts of attention-demanding visual search performance, at least for a subset of search paradigms. Differences in ERP scalp distributions further suggested that different physiological processes are utilized for the detection of feature presence and absence.
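The key chronometric comparison in this study, the slope of RT versus set size against the slope of P3 latency versus set size, reduces to two least-squares fits. The helper below uses invented numbers (not the study's data) purely to show the computation.

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

set_sizes  = [4, 8, 12]        # hypothetical display set sizes
rt         = [520, 610, 700]   # mean RT per set size, ms (illustrative)
p3_latency = [430, 518, 612]   # mean P3 peak latency, ms (illustrative)

rt_slope = slope(set_sizes, rt)            # 22.5 ms per item
p3_slope = slope(set_sizes, p3_latency)    # 22.75 ms per item
```

Essentially identical slopes, as reported for the feature-absent condition, imply that the entire set-size cost arises at or before stimulus identification and classification rather than in post-perceptual stages.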
Interactive Tools for Measuring Visual Scanning Performance and Reaction Time
Seeanner, Julia; Hennessy, Sarah; Manganelli, Joseph; Crisler, Matthew; Rosopa, Patrick; Jenkins, Casey; Anderson, Michael; Drouin, Nathalie; Belle, Leah; Truesdail, Constance; Tanner, Stephanie
2017-01-01
Occupational therapists are constantly searching for engaging, high-technology interactive tasks that provide immediate feedback to evaluate and train clients with visual scanning deficits. This study examined the relationship between two tools: the VISION COACH™ interactive light board and the Functional Object Detection© (FOD) Advanced driving simulator scenario. Fifty-four healthy drivers, ages 21–66 yr, were divided into three age groups. Participants performed braking response and visual target (E) detection tasks of the FOD Advanced driving scenario, followed by two sets of three trials using the VISION COACH Full Field 60 task. Results showed no significant effect of age on FOD Advanced performance but a significant effect of age on VISION COACH performance. Correlations showed that participants’ performance on both braking and E detection tasks were significantly positively correlated with performance on the VISION COACH (.37 < r < .40, p < .01). These tools provide new options for therapists. PMID:28218598
Wilkinson, Krista M.; McIlvane, William J.
2013-01-01
Augmentative and alternative communication (AAC) systems often supplement oral communication of individuals with intellectual and communication disabilities. Research with nondisabled preschoolers has demonstrated that two visual perceptual factors influence speed and/or accuracy of finding a target - the internal color and spatial organization of symbols. Twelve participants with Down syndrome and 12 with ASD underwent two search tasks. In one, the symbols were clustered by internal color; in the other the identical symbols had no arrangement cue. Visual search was superior in participants with ASD compared to those with Down syndrome. In both groups, responses were significantly faster when the symbols were clustered by internal color. Construction of aided AAC displays may benefit from attention to their physical/perceptual features. PMID:24245729
Gaze Patterns of Gross Anatomy Students Change with Classroom Learning
ERIC Educational Resources Information Center
Zumwalt, Ann C.; Iyer, Arjun; Ghebremichael, Abenet; Frustace, Bruno S.; Flannery, Sean
2015-01-01
Numerous studies have documented that experts exhibit more efficient gaze patterns than those of less experienced individuals. In visual search tasks, experts use fewer, longer fixations to fixate for relatively longer on salient regions of the visual field while less experienced observers spend more time examining nonsalient regions. This study…
The Mechanisms Underlying the ASD Advantage in Visual Search
ERIC Educational Resources Information Center
Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S.; Blaser, Erik
2016-01-01
A number of studies have demonstrated that individuals with autism spectrum disorders (ASDs) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin and Frith in "Neuron" 48:497-507, 2005; Simmons et al. in "Vis Res" 49:2705-2739, 2009). This…
ERIC Educational Resources Information Center
Snow, Richard E.; And Others
This pilot study investigated some relationships between tested ability variables and processing parameters obtained from memory search and visual search tasks. The 25 undergraduates who participated had also participated in a previous investigation by Chiang and Atkinson. A battery of traditional ability tests and several film tests were…
Irrelevant Singletons in Pop-Out Search: Attentional Capture or Filtering Costs?
ERIC Educational Resources Information Center
Becker, Stefanie I.
2007-01-01
The aim of the present study was to investigate whether costs invoked by the presence of an irrelevant singleton distractor in a visual search task are due to attentional capture by the irrelevant singleton or spatially unrelated filtering costs. Measures of spatial effects were based on distance effects, compatibility effects, and differences…
ERIC Educational Resources Information Center
Betsch, Tilmann; Wünsche, Kirsten; Großkopf, Armin; Schröder, Klara; Stenmans, Rachel
2018-01-01
Prior evidence has suggested that preschoolers and elementary schoolers search information largely with no systematic plan when making decisions in probabilistic environments. However, this finding might be due to the insensitivity of standard classification methods that assume a lack of variance in decision strategies for tasks of the same kind.…
Kourkoulou, Anastasia; Kuhn, Gustav; Findlay, John M; Leekam, Susan R
2013-06-01
It is widely accepted that we use contextual information to guide our gaze when searching for an object. People with autism spectrum disorder (ASD) also utilise contextual information in this way; yet, their visual search in tasks of this kind is much slower compared with people without ASD. The aim of the current study was to explore the reason for this by measuring eye movements. Eye movement analyses revealed that the slowing of visual search was not caused by making a greater number of fixations. Instead, participants in the ASD group were slower to launch their first saccade, and the duration of their fixations was longer. These results indicate that slowed search in ASD in contextual learning tasks is not due to differences in the spatial allocation of attention but due to temporal delays in the initial-reflexive orienting of attention and subsequent-focused attention. These results have broader implications for understanding the unusual attention profile of individuals with ASD and how their attention may be shaped by learning. © 2013 International Society for Autism Research, Wiley Periodicals, Inc.
Brailsford, Richard; Catherwood, Di; Tyson, Philip J; Edgar, Graham
2014-01-01
Attentional biases in anxiety disorders have been assessed primarily using three types of experiment: the emotional Stroop task, the probe-detection task, and variations of the visual search task. It is proposed that the inattentional blindness procedure has the ability to overcome limitations of these paradigms in regard to identifying the components of attentional bias. Three experiments examined attentional responding to spider images in individuals with low and moderate to high spider fear. The results demonstrate that spider fear causes a bias in the engage component of visual attention and this is specific to stimuli presented in the left visual field (i.e., to the right hemisphere). The implications of the results are discussed and recommendations for future research are made.
Diversification of visual media retrieval results using saliency detection
NASA Astrophysics Data System (ADS)
Muratov, Oleg; Boato, Giulia; De Natale, Franesco G. B.
2013-03-01
Diversification of retrieval results allows for better and faster search. Recently, different methods have been proposed for diversifying image retrieval results, mainly utilizing text information and techniques imported from the natural language processing domain. However, images contain visual information that is impossible to describe in text, so the use of visual features is inevitable. Visual saliency captures information about the main object of an image that humans implicitly include when creating visual content. For this reason it is natural to exploit this information for the task of diversifying the content. In this work we study whether visual saliency can be used for the task of diversification and propose a method for re-ranking image retrieval results using saliency. The evaluation has shown that the use of saliency information results in higher diversity of retrieval results.
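A common way to realize such diversity-aware re-ranking is a greedy maximal-marginal-relevance loop; the sketch below is in that spirit, with the saliency descriptor, weighting, and scoring all invented for illustration (the paper's actual method may differ in its features and objective).

```python
def rerank_for_diversity(results, lam=0.7, k=5):
    """Greedy MMR-style re-ranking. `results` is a list of
    (id, relevance, saliency_vec) tuples, where saliency_vec is any
    fixed-length descriptor of the image's salient region (a hypothetical
    stand-in for features computed from a saliency map)."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    remaining = list(results)
    selected = []
    while remaining and len(selected) < k:
        def score(item):
            _, rel, vec = item
            if not selected:
                return rel
            novelty = min(dist(vec, s[2]) for s in selected)
            return lam * rel + (1 - lam) * novelty   # relevance vs. diversity
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return [i for i, _, _ in selected]

# Three near-duplicate top hits plus one visually distinct lower-ranked hit:
results = [("a", 1.00, (0.0, 0.0)),
           ("b", 0.98, (0.01, 0.0)),
           ("c", 0.95, (0.0, 0.01)),
           ("d", 0.60, (5.0, 5.0))]
order = rerank_for_diversity(results, lam=0.5, k=3)
```

With equal weighting, the distinct result "d" is promoted above the near-duplicates "b" and "c", which is exactly the diversification effect being evaluated.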
Visual search and autism symptoms: What young children search for and co-occurring ADHD matter.
Doherty, Brianna R; Charman, Tony; Johnson, Mark H; Scerif, Gaia; Gliga, Teodora
2018-05-03
Superior visual search is one of the most common findings in the autism spectrum disorder (ASD) literature. Here, we ascertain how generalizable these findings are across task and participant characteristics, in light of recent replication failures. We tested 106 3-year-old children at familial risk for ASD, a sample that presents high ASD and ADHD symptoms, and 25 control participants, in three multi-target search conditions: easy exemplar search (look for cats amongst artefacts), difficult exemplar search (look for dogs amongst chairs/tables perceptually similar to dogs), and categorical search (look for animals amongst artefacts). Performance was related to dimensional measures of ASD and ADHD, in agreement with current research domain criteria (RDoC). We found that ASD symptom severity did not associate with enhanced performance in search, but did associate with poorer categorical search in particular, consistent with literature describing impairments in categorical knowledge in ASD. Furthermore, ASD and ADHD symptoms were both associated with more disorganized search paths across all conditions. Thus, ASD traits do not always convey an advantage in visual search; on the contrary, ASD traits may be associated with difficulties in search depending upon the nature of the stimuli (e.g., exemplar vs. categorical search) and the presence of co-occurring symptoms. © 2018 John Wiley & Sons Ltd.
Visual attention shifting in autism spectrum disorders.
Richard, Annette E; Lajiness-O'Neill, Renee
2015-01-01
Abnormal visual attention has been frequently observed in autism spectrum disorders (ASD). Abnormal shifting of visual attention is related to abnormal development of social cognition and has been identified as a key neuropsychological finding in ASD. Better characterizing attention shifting in ASD and its relationship with social functioning may help to identify new targets for intervention and improving social communication in these disorders. Thus, the current study investigated deficits in attention shifting in ASD as well as relationships between attention shifting and social communication in ASD and neurotypicals (NT). To investigate deficits in visual attention shifting in ASD, 20 ASD and 20 age- and gender-matched NT completed visual search (VS) and Navon tasks with attention-shifting demands as well as a set-shifting task. VS was a feature search task with targets defined in one of two dimensions; Navon required identification of a target letter presented at the global or local level. Psychomotor and processing speed were entered as covariates. Relationships between visual attention shifting, set shifting, and social functioning were also examined. ASD and NT showed comparable costs of shifting attention. However, psychomotor and processing speed were slower in ASD than in NT, and psychomotor and processing speed were positively correlated with attention-shifting costs on Navon and VS, respectively, for both groups. Attention shifting on VS and Navon were correlated among NT, while attention shifting on Navon was correlated with set shifting among ASD. Attention-shifting costs on Navon were positively correlated with restricted and repetitive behaviors among ASD. Relationships between attention shifting and psychomotor and processing speed, as well as relationships between measures of different aspects of visual attention shifting, suggest inefficient top-down influences over preattentive visual processing in ASD. 
Inefficient attention shifting may be related to restricted and repetitive behaviors in these disorders.
Liao, Pin-Chao; Sun, Xinlu; Liu, Mei; Shih, Yu-Nien
2018-01-11
Navigated safety inspection based on task-specific checklists can increase the hazard detection rate, though theoretically subject to interference from scene complexity. Visual clutter, a proxy of scene complexity, can theoretically impair visual search performance, but its impact on safety inspection performance remains to be explored for the optimization of navigated inspection. This research aims to explore whether the relationship between working memory and hazard detection rate is moderated by visual clutter. Based on a perceptive model of hazard detection, we: (a) developed a mathematical influence model for construction hazard detection; (b) designed an experiment to observe hazard detection rates with adjusted working memory under different levels of visual clutter, while using an eye-tracking device to observe participants' visual search processes; and (c) utilized logistic regression to analyze the developed model under various levels of visual clutter. The effect of a strengthened working memory on the detection rate through increased search efficiency is more apparent under high visual clutter. This study confirms the role of visual clutter in construction-navigated inspections, thus serving as a foundation for the optimization of inspection planning.
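The abstract does not give the model specification. A minimal sketch, assuming a logistic model in which visual clutter moderates the working-memory effect via a WM × clutter interaction term (variable names, data, and fitting procedure are hypothetical, not the authors'):

```python
# Illustrative sketch only: logistic model of hazard detection with a
# working-memory (WM) x visual-clutter interaction (moderation) term,
# fitted by plain gradient ascent on the log-likelihood.
import math

def fit_logistic(rows, lr=0.1, epochs=2000):
    """rows: list of (wm, clutter, detected) with detected in {0, 1}.
    Fits P(detect) = sigmoid(b0 + b1*wm + b2*clutter + b3*wm*clutter)."""
    b = [0.0, 0.0, 0.0, 0.0]
    for _ in range(epochs):
        for wm, cl, y in rows:
            x = [1.0, wm, cl, wm * cl]
            z = sum(bi * xi for bi, xi in zip(b, x))
            p = 1.0 / (1.0 + math.exp(-z))
            for i in range(4):
                b[i] += lr * (y - p) * x[i]  # log-likelihood gradient step
    return b
```

A significant `b3` in such a model would indicate that the slope of the WM effect on detection probability changes with clutter level, i.e. moderation.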
Parallel Distractor Rejection as a Binding Mechanism in Search
Dent, Kevin; Allen, Harriet A.; Braithwaite, Jason J.; Humphreys, Glyn W.
2012-01-01
The relatively common experimental visual search task of finding a red X amongst red O’s and green X’s (conjunction search) presents the visual system with a binding problem. Illusory conjunctions (ICs) of features across objects must be avoided and only features present in the same object bound together. Correct binding into unique objects by the visual system may be promoted, and ICs minimized, by inhibiting the locations of distractors possessing non-target features (e.g., Treisman and Sato, 1990). Such parallel rejection of interfering distractors leaves the target as the only item competing for selection; thus solving the binding problem. In the present article we explore the theoretical and empirical basis of this process of active distractor inhibition in search. Specific experiments that provide strong evidence for a process of active distractor inhibition in search are highlighted. In the final part of the article we consider how distractor inhibition, as defined here, may be realized at a neurophysiological level (Treisman and Sato, 1990). PMID:22908002
Eye movements during information processing tasks: individual differences and cultural effects.
Rayner, Keith; Li, Xingshan; Williams, Carrick C; Cave, Kyle R; Well, Arnold D
2007-09-01
The eye movements of native English speakers, native Chinese speakers, and bilingual Chinese/English speakers who were either born in China (and moved to the US at an early age) or in the US were recorded during six tasks: (1) reading, (2) face processing, (3) scene perception, (4) visual search, (5) counting Chinese characters in a passage of text, and (6) visual search for Chinese characters. Across the different groups, there was a strong tendency for consistency in eye movement behavior; if fixation durations of a given viewer were long on one task, they tended to be long on other tasks (and the same tended to be true for saccade size). Some tasks, notably reading, did not conform to this pattern. Furthermore, experience with a given writing system had a large impact on fixation durations and saccade lengths. With respect to cultural differences, there was little evidence that Chinese participants spent more time looking at the background information (and, conversely less time looking at the foreground information) than the American participants. Also, Chinese participants' fixations were more numerous and of shorter duration than those of their American counterparts while viewing faces and scenes, and counting Chinese characters in text.
Comparing two types of engineering visualizations: task-related manipulations matter.
Cölln, Martin C; Kusch, Kerstin; Helmert, Jens R; Kohler, Petra; Velichkovsky, Boris M; Pannasch, Sebastian
2012-01-01
This study focuses on the comparison of traditional engineering drawings with a CAD (computer aided design) visualization in terms of user performance and eye movements in an applied context. Twenty-five students of mechanical engineering completed search tasks for measures in two distinct depictions of a car engine component (engineering drawing vs. CAD model). Besides spatial dimensionality, the display types most notably differed in terms of information layout, access and interaction options. The CAD visualization yielded better performance, if users directly manipulated the object, but was inferior, if employed in a conventional static manner, i.e. inspecting only predefined views. An additional eye movement analysis revealed longer fixation durations and a stronger increase of task-relevant fixations over time when interacting with the CAD visualization. This suggests a more focused extraction and filtering of information. We conclude that the three-dimensional CAD visualization can be advantageous if its ability to manipulate is used. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Modulation of spatial attention by goals, statistical learning, and monetary reward.
Jiang, Yuhong V; Sha, Li Z; Remington, Roger W
2015-10-01
This study documented the relative strength of task goals, visual statistical learning, and monetary reward in guiding spatial attention. Using a difficult T-among-L search task, we cued spatial attention to one visual quadrant by (i) instructing people to prioritize it (goal-driven attention), (ii) placing the target frequently there (location probability learning), or (iii) associating that quadrant with greater monetary gain (reward-based attention). Results showed that successful goal-driven attention exerted the strongest influence on search RT. Incidental location probability learning yielded a smaller though still robust effect. Incidental reward learning produced negligible guidance for spatial attention. The 95 % confidence intervals of the three effects were largely nonoverlapping. To understand these results, we simulated the role of location repetition priming in probability cuing and reward learning. Repetition priming underestimated the strength of location probability cuing, suggesting that probability cuing involved long-term statistical learning of how to shift attention. Repetition priming provided a reasonable account for the negligible effect of reward on spatial attention. We propose a multiple-systems view of spatial attention that includes task goals, search habit, and priming as primary drivers of top-down attention.
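The authors' simulation details are not given in the abstract. As a toy, seeded sketch in the same spirit, one can show that a priming-only searcher (faster only when the target quadrant immediately repeats) produces a much smaller rich-quadrant advantage than the full priming benefit; all parameter values below are invented for illustration:

```python
# Toy simulation: how much "rich quadrant" RT advantage can short-term
# repetition priming alone produce? Parameters are illustrative only.
import random

def simulate(n_trials=20000, rich_prob=0.5, base_rt=1000.0, priming=50.0):
    """Targets appear in the rich quadrant (index 0) with probability
    rich_prob and in each sparse quadrant with probability (1-rich_prob)/3.
    The searcher is faster by `priming` ms only when the target quadrant
    repeats the immediately preceding trial."""
    rng = random.Random(42)
    quadrants = [0, 1, 2, 3]
    weights = [rich_prob] + [(1 - rich_prob) / 3] * 3
    prev, rts = None, {q: [] for q in quadrants}
    for _ in range(n_trials):
        q = rng.choices(quadrants, weights)[0]
        rts[q].append(base_rt - (priming if q == prev else 0.0))
        prev = q
    mean = lambda xs: sum(xs) / len(xs)
    # Rich-quadrant advantage (ms) attributable to repetition priming alone.
    return mean(rts[1] + rts[2] + rts[3]) - mean(rts[0])
```

With these values the expected advantage is about 17 ms, well short of the 50 ms priming benefit itself, echoing the article's point that repetition priming underestimates probability cuing.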
Use of an augmented-vision device for visual search by patients with tunnel vision.
Luo, Gang; Peli, Eli
2006-09-01
To study the effect of an augmented-vision device that superimposes minified contour images over natural vision on the visual search performance of patients with tunnel vision. Twelve subjects with tunnel vision searched for targets presented outside their visual fields (VFs) on a blank background under three cue conditions (with contour cues provided by the device, with auditory cues, and without cues). Three subjects (VF 8°-11° wide) carried out the search over a 90° x 74° area, and nine subjects (VF 7°-16° wide) carried out the search over a 66° x 52° area. Eye and head movements were recorded for performance analyses that included directness of search path, search time, and gaze speed. Directness of the search path was greatly and significantly improved when the contour or auditory cues were provided in both the larger and the smaller area searches. When using the device, a significant reduction in search time (28%-74%) was demonstrated by all three subjects in the larger area search and by subjects with VFs wider than 10° in the smaller area search (average, 22%). Directness and gaze speed accounted for 90% of the variability of search time. Although performance improvement with the device for the larger search area was obvious, whether it was helpful for the smaller search area depended on VF and gaze speed. Because improvement in directness was demonstrated, increased gaze speed, which could result from further training and adaptation to the device, might enable patients with small VFs to benefit from the device for visual search tasks.
Investigating the role of the superior colliculus in active vision with the visual search paradigm.
Shen, Kelly; Valero, Jerome; Day, Gregory S; Paré, Martin
2011-06-01
We review here both the evidence that the functional visuomotor organization of the optic tectum is conserved in the primate superior colliculus (SC) and the evidence for the linking proposition that SC discriminating activity instantiates saccade target selection. We also present new data in response to questions that arose from recent SC visual search studies. First, we observed that SC discriminating activity predicts saccade initiation when monkeys perform an unconstrained search for a target defined by either a single visual feature or a conjunction of two features. Quantitative differences between the results in these two search tasks suggest, however, that SC discriminating activity does not only reflect saccade programming. This finding concurs with visual search studies conducted in posterior parietal cortex and the idea that, during natural active vision, visual attention is shifted concomitantly with saccade programming. Second, the analysis of a large neuronal sample recorded during feature search revealed that visual neurons in the superficial layers do possess discriminating activity. In addition, the hypotheses that there are distinct types of SC neurons in the deeper layers and that they are differently involved in saccade target selection were not substantiated. Third, we found that the discriminating quality of single-neuron activity substantially surpasses the ability of the monkeys to discriminate the target from distracters, raising the possibility that saccade target selection is a noisy process. We discuss these new findings in light of the visual search literature and the view that the SC is a visual salience map for orienting eye movements. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Cuperlier, Nicolas; Gaussier, Philippe
2017-01-01
Emotions play a significant role in internal regulatory processes. In this paper, we advocate four key ideas. First, novelty detection can be grounded in the sensorimotor experience and allow higher order appraisal. Second, cognitive processes, such as those involved in self-assessment, influence emotional states by eliciting affects like boredom and frustration. Third, emotional processes such as those triggered by self-assessment influence attentional processes. Last, close emotion-cognition interactions implement an efficient feedback loop for the purpose of top-down behavior regulation. The latter is what we call ‘Emotional Metacontrol’. We introduce a model based on artificial neural networks. This architecture is used to control a robotic system in a visual search task. The emotional metacontrol intervenes to bias the robot visual attention during active object recognition. Through a behavioral and statistical analysis, we show that this mechanism increases the robot performance and fosters the exploratory behavior to avoid deadlocks. PMID:28934291
Manelis, Anna; Reder, Lynne M
2012-10-16
Using a combination of eye tracking and fMRI in a contextual cueing task, we explored the mechanisms underlying the facilitation of visual search for repeated spatial configurations. When configurations of distractors were repeated, greater activation in the right hippocampus corresponded to greater reductions in the number of saccades to locate the target. A psychophysiological interactions analysis for repeated configurations revealed that a strong functional connectivity between this area in the right hippocampus and the left superior parietal lobule early in learning was significantly reduced toward the end of the task. Practice related changes (which we call "procedural learning") in activation in temporo-occipital and parietal brain regions depended on whether or not spatial context was repeated. We conclude that context repetition facilitates visual search through chunk formation that reduces the number of effective distractors that have to be processed during the search. Context repetition influences procedural learning in a way that allows for continuous and effective chunk updating.
Spatial context learning survives interference from working memory load
Vickery, Timothy J.; Sussman, Rachel S.; Jiang, Yuhong V.
2010-01-01
The human visual system is constantly confronted with an overwhelming amount of information, only a subset of which can be processed in complete detail. Attention and implicit learning are two important mechanisms that optimize vision. This study addresses the relationship between these two mechanisms. Specifically we ask: Is implicit learning of spatial context affected by the amount of working memory load devoted to an irrelevant task? We tested observers in visual search tasks where search displays occasionally repeated. Observers became faster searching repeated displays than unrepeated ones, showing contextual cueing. We found that the size of contextual cueing was unaffected by whether observers learned repeated displays under unitary attention or when their attention was divided using working memory manipulations. These results held when working memory was loaded by colors, dot patterns, individual dot locations, or multiple potential targets. We conclude that spatial context learning is robust to interference from manipulations that limit the availability of attention and working memory. PMID:20853996
Age Mediation of Frontoparietal Activation during Visual Feature Search
Madden, David J.; Parks, Emily L.; Davis, Simon W.; Diaz, Michele T.; Potter, Guy G.; Chou, Ying-hui; Chen, Nan-kuei; Cabeza, Roberto
2014-01-01
Activation of frontal and parietal brain regions is associated with attentional control during visual search. We used fMRI to characterize age-related differences in frontoparietal activation in a highly efficient feature search task, detection of a shape singleton. On half of the trials, a salient distractor (a color singleton) was present in the display. The hypothesis was that frontoparietal activation mediated the relation between age and attentional capture by the salient distractor. Participants were healthy, community-dwelling individuals, 21 younger adults (19 – 29 years of age) and 21 older adults (60 – 87 years of age). Top-down attention, in the form of target predictability, was associated with an improvement in search performance that was comparable for younger and older adults. The increase in search reaction time (RT) associated with the salient distractor (attentional capture), standardized to correct for generalized age-related slowing, was greater for older adults than for younger adults. On trials with a color singleton distractor, search RT increased as a function of increasing activation in frontal regions, for both age groups combined, suggesting increased task difficulty. Mediational analyses disconfirmed the hypothesized model, in which frontal activation mediated the age-related increase in attentional capture, but supported an alternative model in which age was a mediator of the relation between frontal activation and capture. PMID:25102420
Starke, Sandra D; Baber, Chris
2018-07-01
User interface (UI) design can affect the quality of decision making, where decisions based on digitally presented content are commonly informed by visually sampling information through eye movements. Analysis of the resulting scan patterns - the order in which people visually attend to different regions of interest (ROIs) - gives an insight into information foraging strategies. In this study, we quantified scan pattern characteristics for participants engaging with conceptually different user interface designs. Four interfaces were modified along two dimensions relating to effort in accessing information: data presentation (either alpha-numerical data or colour blocks), and information access time (all information sources readily available or sequential revealing of information required). The aim of the study was to investigate whether a) people develop repeatable scan patterns and b) different UI concepts affect information foraging and task performance. Thirty-two participants (eight for each UI concept) were given the task to correctly classify 100 credit card transactions as normal or fraudulent based on nine transaction attributes. Attributes varied in their usefulness of predicting the correct outcome. Conventional and more recent (network analysis- and bioinformatics-based) eye tracking metrics were used to quantify visual search. Empirical findings were evaluated in context of random data and possible accuracy for theoretical decision making strategies. Results showed short repeating sequence fragments within longer scan patterns across participants and conditions, comprising a systematic and a random search component. The UI design concept showing alpha-numerical data in full view resulted in most complete data foraging, while the design concept showing colour blocks in full view resulted in the fastest task completion time. Decision accuracy was not significantly affected by UI design. 
Theoretical calculations showed that the difference in achievable accuracy between very complex and simple decision making strategies was small. We conclude that goal-directed search of familiar information results in repeatable scan pattern fragments (often corresponding to information sources considered particularly important), but not in a repeatable complete scan pattern. The underlying concept of the UI affects how visual search is performed and how a decision making strategy develops. This should be taken into consideration when designing for applied domains. Copyright © 2018 Elsevier Ltd. All rights reserved.
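The study used network- and bioinformatics-based scan pattern metrics; as a simpler, hedged illustration of the core idea, short repeating fragments inside longer scan patterns can be found by counting n-gram subsequences of ROI labels across trials (ROI labels and the threshold below are illustrative assumptions):

```python
# Hedged sketch: find short repeating fragments within ROI scan patterns
# by counting n-gram subsequences across trials. Much simpler than the
# study's actual sequence-analysis metrics.
from collections import Counter

def repeated_fragments(scanpaths, n=3, min_count=2):
    """scanpaths: list of ROI-label sequences, one per trial.
    Returns the n-gram fragments occurring at least min_count times."""
    counts = Counter()
    for path in scanpaths:
        for i in range(len(path) - n + 1):
            counts[tuple(path[i:i + n])] += 1
    return {frag: c for frag, c in counts.items() if c >= min_count}
```

Applied to two trials visiting ROIs "A B C A D" and "A B C E", only the fragment A→B→C recurs, which is the kind of short repeating sequence fragment the abstract describes.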
1992-01-01
...under low-ambient lighting conditions, visual search inside the cockpit on a CRT monitor mounted in the instrument panel is not disrupted by laser
Visual short-term memory load strengthens selective attention.
Roper, Zachary J J; Vecera, Shaun P
2014-04-01
Perceptual load theory accounts for many attentional phenomena; however, its mechanism remains elusive because it invokes underspecified attentional resources. Recent dual-task evidence has revealed that a concurrent visual short-term memory (VSTM) load slows visual search and reduces contrast sensitivity, but it is unknown whether a VSTM load also constricts attention in a canonical perceptual load task. If attentional selection draws upon VSTM resources, then distraction effects-which measure attentional "spill-over"-will be reduced as competition for resources increases. Observers performed a low perceptual load flanker task during the delay period of a VSTM change detection task. We observed a reduction of the flanker effect in the perceptual load task as a function of increasing concurrent VSTM load. These findings were not due to perceptual-level interactions between the physical displays of the two tasks. Our findings suggest that perceptual representations of distractor stimuli compete with the maintenance of visual representations held in memory. We conclude that access to VSTM determines the degree of attentional selectivity; when VSTM is not completely taxed, it is more likely for task-irrelevant items to be consolidated and, consequently, affect responses. The "resources" hypothesized by load theory are at least partly mnemonic in nature, due to the strong correspondence they share with VSTM capacity.
Common neural substrates for visual working memory and attention.
Mayer, Jutta S; Bittner, Robert A; Nikolić, Danko; Bledowski, Christoph; Goebel, Rainer; Linden, David E J
2007-06-01
Humans are severely limited in their ability to memorize visual information over short periods of time. Selective attention has been implicated as a limiting factor. Here we used functional magnetic resonance imaging to test the hypothesis that this limitation is due to common neural resources shared by visual working memory (WM) and selective attention. We combined visual search and delayed discrimination of complex objects and independently modulated the demands on selective attention and WM encoding. Participants were presented with a search array and performed easy or difficult visual search in order to encode one or three complex objects into visual WM. Overlapping activation for attention-demanding visual search and WM encoding was observed in distributed posterior and frontal regions. In the right prefrontal cortex and bilateral insula blood oxygen-level-dependent activation additively increased with increased WM load and attentional demand. Conversely, several visual, parietal and premotor areas showed overlapping activation for the two task components and were severely reduced in their WM load response under the condition with high attentional demand. Regions in the left prefrontal cortex were selectively responsive to WM load. Areas selectively responsive to high attentional demand were found within the right prefrontal and bilateral occipital cortex. These results indicate that encoding into visual WM and visual selective attention require to a high degree access to common neural resources. We propose that competition for resources shared by visual attention and WM encoding can limit processing capabilities in distributed posterior brain regions.
The effects of the dopamine agonist rotigotine on hemispatial neglect following stroke.
Gorgoraptis, Nikos; Mah, Yee-Haur; Machner, Bjoern; Singh-Curry, Victoria; Malhotra, Paresh; Hadji-Michael, Maria; Cohen, David; Simister, Robert; Nair, Ajoy; Kulinskaya, Elena; Ward, Nick; Greenwood, Richard; Husain, Masud
2012-08-01
Hemispatial neglect following right-hemisphere stroke is a common and disabling disorder, for which there is currently no effective pharmacological treatment. Dopamine agonists have been shown to play a role in selective attention and working memory, two core cognitive components of neglect. Here, we investigated whether the dopamine agonist rotigotine would have a beneficial effect on hemispatial neglect in stroke patients. A double-blind, randomized, placebo-controlled ABA design was used, in which each patient was assessed for 20 testing sessions, in three phases: pretreatment (Phase A1), on transdermal rotigotine for 7-11 days (Phase B) and post-treatment (Phase A2), with the exact duration of each phase randomized within limits. Outcome measures included performance on cancellation (visual search), line bisection, visual working memory, selective attention and sustained attention tasks, as well as measures of motor control. Sixteen right-hemisphere stroke patients were recruited, all of whom completed the trial. Performance on the Mesulam shape cancellation task improved significantly while on rotigotine, with the number of targets found on the left side increasing by 12.8% (P = 0.012) on treatment and spatial bias reducing by 8.1% (P = 0.016). This improvement in visual search was associated with an enhancement in selective attention but not on our measures of working memory or sustained attention. The positive effect of rotigotine on visual search was not associated with the degree of preservation of prefrontal cortex and occurred even in patients with significant prefrontal involvement. Rotigotine was not associated with any significant improvement in motor performance. This proof-of-concept study suggests a beneficial role of dopaminergic modulation on visual search and selective attention in patients with hemispatial neglect following stroke.
Ball, Keira; Lane, Alison R; Smith, Daniel T; Ellison, Amanda
2013-11-01
The right posterior parietal cortex (rPPC) and the right frontal eye field (rFEF) form part of a network of brain areas involved in orienting spatial attention. Previous studies using transcranial magnetic stimulation (TMS) have demonstrated that both areas are critically involved in the processing of conjunction visual search tasks, since stimulation of these sites disrupts performance. This study investigated the effects of long term neuronal modulation to rPPC and rFEF using transcranial direct current stimulation (tDCS) with the aim of uncovering sharing of these resources in the processing of conjunction visual search tasks. Participants completed four blocks of conjunction search trials over the course of 45 min. Following the first block they received 15 min of either cathodal or anodal stimulation to rPPC or rFEF, or sham stimulation. A significant interaction between block and stimulation condition was found, indicating that tDCS caused different effects according to the site (rPPC or rFEF) and type of stimulation (cathodal, anodal, or sham). Practice resulted in a significant reduction in reaction time across the four blocks in all conditions except when cathodal tDCS was applied to rPPC. The effects of cathodal tDCS over rPPC are subtler than those seen with TMS, and no effect of tDCS was evident at rFEF. This suggests that rFEF has a more transient role than rPPC in the processing of conjunction visual search and is robust to longer-term methods of neuro-disruption. Our results may be explained within the framework of functional connectivity between these, and other, areas. Copyright © 2013 Elsevier Inc. All rights reserved.
Hemispheric differences in visual search of simple line arrays.
Polich, J; DeFrancesco, D P; Garon, J F; Cohen, W
1990-01-01
The effects of perceptual organization on hemispheric visual-information processing were assessed with stimulus arrays composed of short lines arranged in columns. A visual-search task was employed in which subjects judged whether all the lines were vertical (same) or whether a single horizontal line was present (different). Stimulus-display organization was manipulated in two experiments by variation of line density, linear organization, and array size. In general, left-visual-field/right-hemisphere presentations demonstrated more rapid and accurate responses when the display was perceived as a whole. Right-visual-field/left-hemisphere superiorities were observed when the display organization coerced assessment of individual array elements because the physical qualities of the stimulus did not effect a gestalt whole. Response times increased somewhat with increases in array size, although these effects interacted with other stimulus variables. Error rates tended to follow the reaction-time patterns. The results suggest that laterality differences in visual search are governed by stimulus properties which contribute to, or inhibit, the perception of a display as a gestalt. The implications of these findings for theoretical interpretations of hemispheric specialization are discussed.
An integrative, experience-based theory of attentional control.
Wilder, Matthew H; Mozer, Michael C; Wickens, Christopher D
2011-02-09
Although diverse, theories of visual attention generally share the notion that attention is controlled by some combination of three distinct strategies: (1) exogenous cuing from locally contrasting primitive visual features, such as abrupt onsets or color singletons (e.g., L. Itti, C. Koch, & E. Niebur, 1998), (2) endogenous gain modulation of exogenous activations, used to guide attention to task-relevant features (e.g., V. Navalpakkam & L. Itti, 2007; J. Wolfe, 1994, 2007), and (3) endogenous prediction of likely locations of interest, based on task and scene gist (e.g., A. Torralba, A. Oliva, M. Castelhano, & J. Henderson, 2006). However, little work has been done to synthesize these disparate theories. In this work, we propose a unifying conceptualization in which attention is controlled along two dimensions: the degree of task focus and the contextual scale of operation. Previously proposed strategies, and their combinations, can be viewed as instances of this one mechanism. Thus, this theory serves not as a replacement for existing models but as a means of bringing them into a coherent framework. We present an implementation of this theory and demonstrate its applicability to a wide range of attentional phenomena. The model accounts for key results in visual search with synthetic images and makes reasonable predictions for human eye movements in search tasks involving real-world images. In addition, the theory offers an unusual perspective on attention that places a fundamental emphasis on the role of experience and task-related knowledge.
Towards A Complete Model Of Photopic Visual Threshold Performance
NASA Astrophysics Data System (ADS)
Overington, I.
1982-02-01
Based on a wide variety of fragmentary evidence taken from psycho-physics, neurophysiology and electron microscopy, it has been possible to put together a very widely applicable conceptual model of photopic visual threshold performance. Such a model is so complex that a single comprehensive mathematical version is excessively cumbersome. It is, however, possible to set up a suite of related mathematical models, each of limited application but strictly known envelope of usage. Such models may be used for assessment of a variety of facets of visual performance when using display imagery, including effects and interactions of image quality, random and discrete display noise, viewing distance, image motion, etc., both for foveal interrogation tasks and for visual search tasks. The specific model may be selected from the suite according to the assessment task in hand. The paper discusses in some depth the major facets of preperceptual visual processing and their interaction with instrumental image quality and noise. It then highlights the statistical nature of visual performance before going on to consider a number of specific mathematical models of partial visual function. Where appropriate, these are compared with widely popular empirical models of visual function.
Object-based attention underlies the rehearsal of feature binding in visual working memory.
Shen, Mowei; Huang, Xiang; Gao, Zaifeng
2015-04-01
Feature binding is a core concept in many research fields, including the study of working memory (WM). Over the past decade, it has been debated whether keeping the feature binding in visual WM consumes more visual attention than the constituent single features. Previous studies have only explored the contribution of domain-general attention or space-based attention in the binding process; no study so far has explored the role of object-based attention in retaining binding in visual WM. We hypothesized that object-based attention underlay the mechanism of rehearsing feature binding in visual WM. Therefore, during the maintenance phase of a visual WM task, we inserted a secondary mental rotation (Experiments 1-3), transparent motion (Experiment 4), or an object-based feature report task (Experiment 5) to consume the object-based attention available for binding. In line with the prediction of the object-based attention hypothesis, Experiments 1-5 revealed a more significant impairment for binding than for constituent single features. However, this selective binding impairment was not observed when inserting a space-based visual search task (Experiment 6). We conclude that object-based attention underlies the rehearsal of binding representation in visual WM. (c) 2015 APA, all rights reserved.
fMRI of Parents of Children with Asperger Syndrome: A Pilot Study
ERIC Educational Resources Information Center
Baron-Cohen, Simon; Ring, Howard; Chitnis, Xavier; Wheelwright, Sally; Gregory, Lloyd; Williams, Steve; Brammer, Mick; Bullmore, Ed
2006-01-01
Background: People with autism or Asperger Syndrome (AS) show altered patterns of brain activity during visual search and emotion recognition tasks. Autism and AS are genetic conditions and parents may show the "broader autism phenotype." Aims: (1) To test if parents of children with AS show atypical brain activity during a visual search…
Vigilance, visual search and attention in an agricultural task.
Hartley, L R; Arnold, P K; Kobryn, H; Macleod, C
1989-03-01
In a fragile agricultural environment, such as Western Australia (WA), introduced exotic plant species present a serious environmental and economic threat. Skeleton weed, Chondrilla juncea, a Mediterranean daisy, was accidentally introduced into WA in 1963. It competes with cash crops such as wheat. When the weed is observed in the fields, farms are quarantined and mechanised teams search for the infestations in order to destroy them. Since the search process requires attention, visual search and vigilance, the present investigators conducted a number of controlled field trials to identify the importance of these factors in detection of the weed. The paper describes the basic hit rate, vigilance decrement, effect of search party size, effect of target size, and some data on the effect of solar illumination of the target. Several recommendations have been made and incorporated in the search programme and some laboratory studies undertaken to answer questions arising.
Al-Abood, Saleh A; Bennett, Simon J; Hernandez, Francisco Moreno; Ashford, Derek; Davids, Keith
2002-03-01
We assessed the effects on basketball free throw performance of two types of verbal directions with an external attentional focus. Novices (n = 16) were pre-tested on free throw performance and assigned to two groups of similar ability (n = 8 in each). Both groups received verbal instructions with an external focus on either movement dynamics (movement form) or movement effects (e.g. ball trajectory relative to basket). The participants also observed a skilled model performing the task on either a small or large screen monitor, to ascertain the effects of visual presentation mode on task performance. After observation of six videotaped trials, all participants were given a post-test. Visual search patterns were monitored during observation and cross-referenced with performance on the pre- and post-test. Group effects were noted for verbal instructions and image size on visual search strategies and free throw performance. The 'movement effects' group saw a significant improvement in outcome scores between the pre-test and post-test. These results supported evidence that this group spent more viewing time on information outside the body than the 'movement dynamics' group. Image size affected both groups equally with more fixations of shorter duration when viewing the small screen. The results support the benefits of instructions when observing a model with an external focus on movement effects, not dynamics.
User-assisted visual search and tracking across distributed multi-camera networks
NASA Astrophysics Data System (ADS)
Raja, Yogesh; Gong, Shaogang; Xiang, Tao
2011-11-01
Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.
NASA Technical Reports Server (NTRS)
Phillips, Rachel; Madhavan, Poornima
2010-01-01
The purpose of this research was to examine the impact of environmental distractions on human trust and utilization of automation during the process of visual search. Participants performed a computer-simulated airline luggage screening task with the assistance of a 70% reliable automated decision aid (called DETECTOR) both with and without environmental distractions. The distraction was implemented as a secondary task in either a competing modality (visual) or non-competing modality (auditory). The secondary task processing code either competed with the luggage screening task (spatial code) or with the automation's textual directives (verbal code). We measured participants' system trust, perceived reliability of the system (when a target weapon was present and absent), compliance, reliance, and confidence when agreeing and disagreeing with the system under both distracted and undistracted conditions. Results revealed that system trust was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Perceived reliability of the system (when the target was present) was significantly higher when the secondary task was visual rather than auditory. Compliance with the aid increased in all conditions except for the auditory-verbal condition, where it decreased. Similar to the pattern for trust, reliance on the automation was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Confidence when agreeing with the system decreased with the addition of any kind of distraction; however, confidence when disagreeing increased with the addition of an auditory secondary task but decreased with the addition of a visual task. A model was developed to represent the research findings and demonstrate the relationship between secondary task modality, processing code, and automation use. 
Results suggest that the nature of environmental distractions influence interaction with automation via significant effects on trust and system utilization. These findings have implications for both automation design and operator training.
Mavritsaki, Eirini; Heinke, Dietmar; Humphreys, Glyn W; Deco, Gustavo
2006-01-01
In the real world, visual information is selected over time as well as space, when we prioritise new stimuli for attention. Watson and Humphreys [Watson, D., Humphreys, G.W., 1997. Visual marking: prioritizing selection for new objects by top-down attentional inhibition of old objects. Psychological Review 104, 90-122] presented evidence that new information in search tasks is prioritised by (amongst other processes) active ignoring of old items - a process they termed visual marking. In this paper we present, for the first time, an explicit computational model of visual marking using biologically plausible activation functions. The "spiking search over time and space" model (sSoTS) incorporates different synaptic components (NMDA, AMPA, GABA) and a frequency adaptation mechanism based on [Ca(2+)] sensitive K(+) current. This frequency adaptation current can act as a mechanism that suppresses the previously attended items. We show that, when coupled with a process of active inhibition applied to old items, frequency adaptation leads to old items being de-prioritised (and new items prioritised) across time in search. Furthermore, the time course of these processes mimics the time course of the preview effect in human search. The results indicate that the sSoTS model can provide a biologically plausible account of human search over time as well as space.
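The suppression mechanism described in this abstract can be illustrated with a minimal rate-model sketch. This is not the authors' sSoTS implementation (which uses spiking units with NMDA, AMPA and GABA synapses); the single rate unit, its parameters and time constants below are illustrative assumptions. A calcium-like trace integrates the unit's own activity and feeds an after-hyperpolarising current, so a persistently driven ("old") item is progressively de-prioritised:

```python
def simulate_adaptation(steps=200, dt=1.0, drive_in=1.0,
                        tau_r=10.0, tau_ca=80.0, g_ahp=0.8):
    """Rate unit with a [Ca2+]-dependent after-hyperpolarising (AHP) current.

    The calcium trace `ca` integrates the unit's own rate `r`; the AHP
    term g_ahp * ca is subtracted from the input drive. All parameter
    values are illustrative, not fitted values from the sSoTS model.
    """
    r, ca = 0.0, 0.0
    rates = []
    for _ in range(steps):
        drive = drive_in - g_ahp * ca        # adaptation opposes the input
        r += (dt / tau_r) * (-r + max(drive, 0.0))
        ca += (dt / tau_ca) * (-ca + r)      # calcium tracks the firing rate
        rates.append(r)
    return rates

rates = simulate_adaptation()
# The response rises, peaks, then settles below its peak as the
# adaptation current builds up -- suppressing the old item over time.
```

The overshoot-then-decay time course is the point of the sketch: it is this slow self-suppression that, coupled with active inhibition, lets new items win the competition in preview search.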
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Rabenhorst, David A.; Gerth, John A.; Kalin, Edward B.
1996-04-01
This paper describes a set of visual techniques, based on principles of human perception and cognition, which can help users analyze and develop intuitions about tabular data. Collections of tabular data are widely available, including, for example, multivariate time series data, customer satisfaction data, stock market performance data, multivariate profiles of companies and individuals, and scientific measurements. In our approach, we show how visual cues can help users perform a number of data mining tasks, including identifying correlations and interaction effects, finding clusters and understanding the semantics of cluster membership, identifying anomalies and outliers, and discovering multivariate relationships among variables. These cues are derived from psychological studies on perceptual organization, visual search, perceptual scaling, and color perception. These visual techniques are presented as a complement to the statistical and algorithmic methods more commonly associated with these tasks, and provide an interactive interface for the human analyst.
Asking better questions: How presentation formats influence information search.
Wu, Charley M; Meder, Björn; Filimon, Flavia; Nelson, Jonathan D
2017-08-01
While the influence of presentation formats have been widely studied in Bayesian reasoning tasks, we present the first systematic investigation of how presentation formats influence information search decisions. Four experiments were conducted across different probabilistic environments, where subjects (N = 2,858) chose between 2 possible search queries, each with binary probabilistic outcomes, with the goal of maximizing classification accuracy. We studied 14 different numerical and visual formats for presenting information about the search environment, constructed across 6 design features that have been prominently related to improvements in Bayesian reasoning accuracy (natural frequencies, posteriors, complement, spatial extent, countability, and part-to-whole information). The posterior variants of the icon array and bar graph formats led to the highest proportion of correct responses, and were substantially better than the standard probability format. Results suggest that presenting information in terms of posterior probabilities and visualizing natural frequencies using spatial extent (a perceptual feature) were especially helpful in guiding search decisions, although environments with a mixture of probabilistic and certain outcomes were challenging across all formats. Subjects who made more accurate probability judgments did not perform better on the search task, suggesting that simple decision heuristics may be used to make search decisions without explicitly applying Bayesian inference to compute probabilities. We propose a new take-the-difference (TTD) heuristic that identifies the accuracy-maximizing query without explicit computation of posterior probabilities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
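The query-selection task in this study has a simple formal core: for a binary query, expected classification accuracy is the sum over outcomes of the column-wise maximum of the joint distribution, since for each outcome the searcher guesses the more probable category. The sketch below is a generic illustration of that computation, not the authors' stimuli or their TTD heuristic; the joint probabilities are made up:

```python
def expected_accuracy(joint):
    """Expected classification accuracy of a query.

    joint[c][o] = P(category c AND query outcome o). For each outcome
    the searcher picks the more probable category, so expected accuracy
    is the sum over outcomes of the column-wise maximum.
    """
    n_outcomes = len(joint[0])
    return sum(max(row[o] for row in joint) for o in range(n_outcomes))

# Illustrative (made-up) joint distributions for two candidate queries
query_a = [[0.6, 0.1],   # category 1: P(c1, o1), P(c1, o2)
           [0.1, 0.2]]   # category 2
query_b = [[0.4, 0.3],
           [0.2, 0.1]]

best = max([("A", query_a), ("B", query_b)],
           key=lambda q: expected_accuracy(q[1]))[0]
# Query A yields 0.6 + 0.2 = 0.8; query B yields 0.4 + 0.3 = 0.7
```

Note that this direct computation requires the joint probabilities but never the posteriors themselves, which is consistent with the paper's suggestion that searchers can identify the better query through simple heuristics rather than explicit Bayesian inference.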
Enhancing cognition with video games: a multiple game training study.
Oei, Adam C; Patterson, Michael D
2013-01-01
Previous evidence points to a causal link between playing action video games and enhanced cognition and perception. However, benefits of playing other video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared transfer effects of an action and other non-action types that required different cognitive demands. We instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day/five days a week over four weeks (20 hours). Games included action, spatial memory, match-3, hidden-object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess for transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess for multiple object tracking and cognitive control, as well as a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden-object games improved visual search performance while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training. Cognitive improvements were not limited to action game training alone and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play. Thus, many video game training related improvements to cognition may be attributed to near-transfer effects.
Persistence in eye movement during visual search
NASA Astrophysics Data System (ADS)
Amor, Tatiana A.; Reis, Saulo D. S.; Campos, Daniel; Herrmann, Hans J.; Andrade, José S.
2016-02-01
Like any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent greater than 1/2, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search.
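The monofractal part of this analysis lends itself to a compact sketch. Below is a generic first-order detrended fluctuation analysis (DFA) estimator, an illustrative toy rather than the authors' MF-DFA code; the window scales and synthetic test series are assumptions. A scaling exponent near 0.5 indicates an uncorrelated series, while values above 0.5 indicate persistent long-range correlations:

```python
import math
import random

def dfa_alpha(x, scales=(8, 16, 32, 64, 128)):
    """First-order DFA scaling exponent of a series x (toy estimator)."""
    n = len(x)
    mean = sum(x) / n
    profile, s = [], 0.0
    for v in x:                          # integrated, mean-centred profile
        s += v - mean
        profile.append(s)
    log_n, log_f = [], []
    for w in scales:
        msq, windows = 0.0, 0
        for start in range(0, n - w + 1, w):
            seg = profile[start:start + w]
            # least-squares linear detrend within the window
            t_mean = (w - 1) / 2.0
            s_mean = sum(seg) / w
            num = sum((t - t_mean) * (seg[t] - s_mean) for t in range(w))
            den = sum((t - t_mean) ** 2 for t in range(w))
            slope = num / den
            msq += sum((seg[t] - s_mean - slope * (t - t_mean)) ** 2
                       for t in range(w)) / w
            windows += 1
        log_n.append(math.log(w))
        log_f.append(0.5 * math.log(msq / windows))  # log RMS fluctuation
    # alpha = slope of log F(n) versus log n
    lx = sum(log_n) / len(log_n)
    ly = sum(log_f) / len(log_f)
    return (sum((a - lx) * (b - ly) for a, b in zip(log_n, log_f))
            / sum((a - lx) ** 2 for a in log_n))

random.seed(7)
noise = [random.gauss(0.0, 1.0) for _ in range(2048)]
walk, s = [], 0.0
for v in noise:
    s += v
    walk.append(s)
alpha_noise = dfa_alpha(noise)  # near 0.5: no long-range correlation
alpha_walk = dfa_alpha(walk)    # well above 1: strongly persistent
```

The same machinery, generalised to q-th order moments of the fluctuation function, yields the multifractal spectrum used in the MF-DFA analysis of the full saccade-plus-fixation series.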
Looren de Jong, H; Kok, A; Woestenburg, J C; Logman, C J; Van Rooy, J C
1988-06-01
The present investigation explores the way young and elderly subjects use regularities in target location in a visual display to guide search for targets. Although both young and old subjects show efficient use of search strategies, slight but reliable differences in reaction times suggest decreased ability in the elderly to use complex cues. Event-related potentials were very different for the young and the old. In the young, P3 amplitudes were larger on trials where the rule that governed the location of the target became evident; this was interpreted as an effect of memory updating. Enhanced positive Slow Wave amplitude indicated uncertainty in random search conditions. Elderly subjects' P3 and SW, however, seemed unrelated to behavioral performance, and they showed a large negative Slow Wave at central and parietal sites to randomly located targets. The latter finding was tentatively interpreted as a sign of increased effort in the elderly to allocate attention in visual space. This pattern of behavioral and ERP results suggests that age-related differences in search tasks can be understood in terms of changes in the strategy of allocating visual attention.
Task modulates functional connectivity networks in free viewing behavior.
Seidkhani, Hossein; Nikolaev, Andrey R; Meghanathan, Radha Nila; Pezeshk, Hamid; Masoudi-Nejad, Ali; van Leeuwen, Cees
2017-10-01
In free visual exploration, eye-movement is immediately followed by dynamic reconfiguration of brain functional connectivity. We studied the task-dependency of this process in a combined visual search-change detection experiment. Participants viewed two (nearly) same displays in succession. First time they had to find and remember multiple targets among distractors, so the ongoing task involved memory encoding. Second time they had to determine if a target had changed in orientation, so the ongoing task involved memory retrieval. From multichannel EEG recorded during 200 ms intervals time-locked to fixation onsets, we estimated the functional connectivity using a weighted phase lag index at the frequencies of theta, alpha, and beta bands, and derived global and local measures of the functional connectivity graphs. We found differences between both memory task conditions for several network measures, such as mean path length, radius, diameter, closeness and eccentricity, mainly in the alpha band. Both the local and the global measures indicated that encoding involved a more segregated mode of operation than retrieval. These differences arose immediately after fixation onset and persisted for the entire duration of the lambda complex, an evoked potential commonly associated with early visual perception. We concluded that encoding and retrieval differentially shape network configurations involved in early visual perception, affecting the way the visual input is processed at each fixation. These findings demonstrate that task requirements dynamically control the functional connectivity networks involved in early visual perception. Copyright © 2017 Elsevier Inc. All rights reserved.
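The connectivity measure named in this abstract can be sketched in a few lines. The weighted phase lag index at one frequency is |E[Im S_xy]| / E[|Im S_xy|], where S_xy is the per-epoch cross-spectrum of two channels; it approaches 1 when the imaginary cross-spectrum keeps a consistent sign across epochs and is 0 for zero-lag coupling. This is a minimal single-frequency sketch with a pure-Python DFT, not the authors' EEG pipeline; the synthetic epochs are illustrative:

```python
import cmath
import math

def dft_coeff(x, k):
    """k-th DFT coefficient of a real signal x."""
    n = len(x)
    return sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))

def wpli(epochs_a, epochs_b, k):
    """Weighted phase lag index between two channels at frequency bin k."""
    imags = [(dft_coeff(a, k) * dft_coeff(b, k).conjugate()).imag
             for a, b in zip(epochs_a, epochs_b)]
    denom = sum(abs(v) for v in imags)
    return abs(sum(imags)) / denom if denom else 0.0

# Synthetic epochs: channel b lags channel a by a quarter cycle at bin 4
n, k = 64, 4
a = [math.sin(2 * math.pi * k * t / n) for t in range(n)]
b = [math.sin(2 * math.pi * k * t / n + math.pi / 2) for t in range(n)]
lagged = wpli([a] * 5, [b] * 5, k)    # consistent phase lag -> wPLI near 1
zero_lag = wpli([a] * 5, [a] * 5, k)  # zero-lag coupling -> wPLI of 0
```

Because zero-lag contributions have no imaginary part, wPLI discounts them, which is why it is popular for EEG work where volume conduction produces spurious instantaneous coupling between electrodes. Edge weights like these, computed per fixation-locked interval and frequency band, are what the graph measures (path length, eccentricity, etc.) are then derived from.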
Horn, R R; Williams, A M; Scott, M A; Hodges, N J
2005-07-01
The authors examined the observational learning of 24 participants whom they constrained to use the model by removing intrinsic visual knowledge of results (KR). Matched participants assigned to video (VID), point-light (PL), and no-model (CON) groups performed a soccer-chipping task in which vision was occluded at ball contact. Pre- and posttests were interspersed with alternating periods of demonstration and acquisition. The authors assessed delayed retention 2-3 days later. In support of the visual perception perspective, the participants who observed the models showed immediate and enduring changes to more closely imitate the model's relative motion. While observing the demonstration, the PL group participants were more selective in their visual search than were the VID group participants but did not perform more accurately or learn more.
An active visual search interface for Medline.
Xuan, Weijian; Dai, Manhong; Mirel, Barbara; Wilson, Justin; Athey, Brian; Watson, Stanley J; Meng, Fan
2007-01-01
Searching the Medline database is almost a daily necessity for many biomedical researchers. However, available Medline search solutions are mainly designed for the quick retrieval of a small set of most relevant documents. Because of this search model, they are not suitable for the large-scale exploration of literature and the underlying biomedical conceptual relationships, which are common tasks in the age of high throughput experimental data analysis and cross-discipline research. We try to develop a new Medline exploration approach by incorporating interactive visualization together with powerful grouping, summary, sorting and active external content retrieval functions. Our solution, PubViz, is based on the FLEX platform designed for interactive web applications and its prototype is publicly available at: http://brainarray.mbni.med.umich.edu/Brainarray/DataMining/PubViz.
Temporal and peripheral extraction of contextual cues from scenes during visual search.
Koehler, Kathryn; Eckstein, Miguel P
2017-02-01
Scene context is known to facilitate object recognition and guide visual search, but little work has focused on isolating image-based cues and evaluating their contributions to eye movement guidance and search performance. Here, we explore three types of contextual cues (a co-occurring object, the configuration of other objects, and the superordinate category of background elements) and assess their joint contributions to search performance in the framework of cue-combination and the temporal unfolding of their extraction. We also assess whether observers' ability to extract each contextual cue in the visual periphery is a bottleneck that determines the utilization and contribution of each cue to search guidance and decision accuracy. We find that during the first four fixations of a visual search task observers first utilize the configuration of objects for coarse eye movement guidance and later use co-occurring object information for finer guidance. In the absence of contextual cues, observers were suboptimally biased to report the target object as being absent. The presence of the co-occurring object was the only contextual cue that had a significant effect in reducing decision bias. The early influence of object-based cues on eye movements is corroborated by a clear demonstration of observers' ability to extract object cues up to 16° into the visual periphery. The joint contributions of the cues to decision search accuracy approximates that expected from the combination of statistically independent cues and optimal cue combination. Finally, the lack of utilization and contribution of the background-based contextual cue to search guidance cannot be explained by the availability of the contextual cue in the visual periphery; instead it is related to background cues providing the least inherent information about the precise location of the target in the scene.
Use of an augmented-vision device for visual search by patients with tunnel vision
Luo, Gang; Peli, Eli
2006-01-01
Purpose To study the effect of an augmented-vision device that superimposes minified contour images over natural vision on visual search performance of patients with tunnel vision. Methods Twelve subjects with tunnel vision searched for targets presented outside their visual fields (VF) on a blank background under three cue conditions (with contour cues provided by the device, with auditory cues, and without cues). Three subjects (VF: 8° to 11° wide) carried out the search over a 90°×74° area, and nine subjects (VF: 7° to 16° wide) over a 66°×52° area. Eye and head movements were recorded for performance analyses that included directness of search path, search time, and gaze speed. Results Directness of the search path was greatly and significantly improved when the contour or auditory cues were provided in both the larger and smaller area search. When using the device, a significant reduction in search time (28%-74%) was demonstrated by all 3 subjects in the larger area search and by subjects with VF wider than 10° in the smaller area search (average 22%). Directness and the gaze speed accounted for 90% of the variability of search time. Conclusions While performance improvement with the device for the larger search area was obvious, whether it was helpful for the smaller search area depended on VF and gaze speed. As improvement in directness was demonstrated, increased gaze speed, which could result from further training and adaptation to the device, might enable patients with small VFs to benefit from the device for visual search tasks. PMID:16936136
Predicting Airport Screening Officers' Visual Search Competency With a Rapid Assessment.
Mitroff, Stephen R; Ericson, Justin M; Sharpe, Benjamin
2018-03-01
Objective The study's objective was to assess a new personnel selection and assessment tool for aviation security screeners. A mobile app was modified to create a tool, and the question was whether it could predict professional screeners' on-job performance. Background A variety of professions (airport security, radiology, the military, etc.) rely on visual search performance: being able to detect targets. Given the importance of such professions, it is necessary to maximize performance, and one means to do so is to select individuals who excel at visual search. A critical question is whether it is possible to predict search competency within a professional search environment. Method Professional searchers from the U.S. Transportation Security Administration (TSA) completed a rapid assessment on a tablet-based X-ray simulator (XRAY Screener, derived from the mobile technology app Airport Scanner; Kedlin Company). The assessment contained 72 trials that were simulated X-ray images of bags. Participants searched for prohibited items and tapped on them with their finger. Results Performance on the assessment significantly related to on-job performance measures for the TSA officers such that those who were better XRAY Screener performers were both more accurate and faster at the actual airport checkpoint. Conclusion XRAY Screener successfully predicted on-job performance for professional aviation security officers. While questions remain about the underlying cognitive mechanisms, this quick assessment was found to significantly predict on-job success for a task that relies on visual search performance. Application It may be possible to quickly assess an individual's visual search competency, which could help organizations select new hires and assess their current workforce.
Reward history but not search history explains value-driven attentional capture.
Marchner, Janina R; Preuschhof, Claudia
2018-04-19
In recent years, extensive research has focused on how past experiences guide future attention. Humans automatically attend to stimuli previously associated with reward and stimuli that have been experienced during visual search, even when this is disadvantageous in the present situation. Recently, the relationship between "reward history" and "search history" has been discussed critically. We review results from research on value-driven attentional capture (VDAC) with a focus on these two experience-based attentional selection processes and their distinction. To clarify inconsistencies, we examined VDAC within a design that allows a direct comparison with other mechanisms of attentional selection. Eighty-four healthy adults were trained to incidentally associate colors with reward (10 cents, 2 cents) or with no reward. In a subsequent visual search task, distraction by reward-associated and unrewarded stimuli was contrasted. In the training phase, reward signals facilitated performance. When these value-signaling stimuli appeared as distractors in the test phase, they continuously shaped attentional selection, despite their task irrelevance. Our findings clearly cannot be attributed to a history of target search. We conclude that once an association is established, value signals guide attention automatically in new situations, which can be beneficial or not, depending on the congruency with current goals.
Mapping the Color Space of Saccadic Selectivity in Visual Search
ERIC Educational Resources Information Center
Xu, Yun; Higgins, Emily C.; Xiao, Mei; Pomplun, Marc
2007-01-01
Color coding is used to guide attention in computer displays for such critical tasks as baggage screening or air traffic control. It has been shown that a display object attracts more attention if its color is more similar to the color for which one is searching. However, what does "similar" precisely mean? Can we predict the amount of attention…
The scope and control of attention as separate aspects of working memory.
Shipstead, Zach; Redick, Thomas S; Hicks, Kenny L; Engle, Randall W
2012-01-01
The present study examines two varieties of working memory (WM) capacity task: visual arrays (i.e., a measure of the amount of information that can be maintained in working memory) and complex span (i.e., a task that taps WM-related attentional control). Using previously collected data sets, we employ confirmatory factor analysis to demonstrate that visual arrays and complex span tasks load on separate, but correlated, factors. A subsequent series of structural equation models and regression analyses demonstrates that these factors contribute both common and unique variance to the prediction of general fluid intelligence (Gf). However, while visual arrays does contribute uniquely to higher cognition, its overall correlation with Gf is largely mediated by variance associated with the complex span factor. Thus, we argue that visual arrays performance is not strictly driven by a limited-capacity storage system (e.g., the focus of attention; Cowan, 2001), but may also rely on control processes such as selective attention and controlled memory search.
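The mediation claim in this abstract can be illustrated with a toy regression sketch; the synthetic data, effect sizes, and variable names below are illustrative assumptions, not the study's data or model. If the visual-arrays measure loads heavily on the complex-span (control) factor, most of its total effect on Gf vanishes once the mediator is partialled out:

```python
# Toy mediation sketch: visual arrays -> Gf, mediated by complex span.
# All loadings and effect sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
span = rng.normal(size=n)                            # complex span factor (mediator)
arrays = 0.7 * span + 0.3 * rng.normal(size=n)       # visual arrays loads on span
gf = 0.6 * span + 0.1 * arrays + rng.normal(size=n)  # Gf driven mostly via span

def slope(x, y):
    """OLS slope of y regressed on x alone."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

total = slope(arrays, gf)  # total effect of arrays on Gf (mediator ignored)

# Direct effect: regress Gf on arrays while controlling for the mediator.
X = np.column_stack([np.ones(n), arrays, span])
direct = np.linalg.lstsq(X, gf, rcond=None)[0][1]

print(round(total, 2), round(direct, 2))
```

With these assumed loadings the total effect comes out several times larger than the direct effect, mirroring the paper's conclusion that the arrays-Gf relation is largely carried by variance shared with complex span.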
Looking for ideas: Eye behavior during goal-directed internally focused cognition
Walcher, Sonja; Körner, Christof; Benedek, Mathias
2017-01-01
Humans have a highly developed visual system, yet we spend a high proportion of our waking time ignoring the visual world and attending to our own thoughts. The present study examined eye movement characteristics of goal-directed internally focused cognition. Deliberate internally focused cognition was induced by an idea generation task; a letter-by-letter reading task served as the external task. Idea generation (vs. reading) was associated with more and longer blinks and fewer microsaccades, indicating an attenuation of visual input. Idea generation was further associated with more and shorter fixations, more saccades and saccades with higher amplitudes, as well as heightened stimulus-independent variation of eye vergence. The latter results suggest a coupling of eye behavior to internally generated information and associated cognitive processes, i.e., searching for ideas. Our results support eye behavior patterns as indicators of goal-directed internally focused cognition through mechanisms of attenuation of visual input and coupling of eye behavior to internally generated information. PMID:28689088
Interactive Tools for Measuring Visual Scanning Performance and Reaction Time.
Brooks, Johnell; Seeanner, Julia; Hennessy, Sarah; Manganelli, Joseph; Crisler, Matthew; Rosopa, Patrick; Jenkins, Casey; Anderson, Michael; Drouin, Nathalie; Belle, Leah; Truesdail, Constance; Tanner, Stephanie
Occupational therapists are constantly searching for engaging, high-technology interactive tasks that provide immediate feedback to evaluate and train clients with visual scanning deficits. This study examined the relationship between two tools: the VISION COACH™ interactive light board and the Functional Object Detection© (FOD) Advanced driving simulator scenario. Fifty-four healthy drivers, ages 21-66 yr, were divided into three age groups. Participants performed braking response and visual target (E) detection tasks of the FOD Advanced driving scenario, followed by two sets of three trials using the VISION COACH Full Field 60 task. Results showed no significant effect of age on FOD Advanced performance but a significant effect of age on VISION COACH performance. Correlations showed that participants' performance on both braking and E detection tasks was significantly positively correlated with performance on the VISION COACH (.37 < r < .40, p < .01). These tools provide new options for therapists. Copyright © 2017 by the American Occupational Therapy Association, Inc.
Exploring biased attention towards body-related stimuli and its relationship with body awareness.
Salvato, Gerardo; De Maio, Gabriele; Bottini, Gabriella
2017-12-08
Stimuli of great social relevance exogenously capture attention. Here we explored the impact of body-related stimuli on endogenous attention. Additionally, we investigated the influence of internal states on biased attention towards this class of stimuli. Participants were presented with a body, face, or chair cue to hold in memory (Memory task) or to merely attend (Priming task) and, subsequently, they were asked to find a circle in an unrelated visual search task. In the valid condition, the circle was flanked by the cue. In the invalid condition, the pre-cued picture re-appeared flanking the distractor. In the neutral condition, the cue item did not re-appear in the search display. We found that although bodies and faces benefited from generally faster visual processing compared to chairs, holding them in memory did not produce any additional advantage for attention compared to when they were merely attended. Furthermore, face cues generated a larger orienting effect compared to body and chair cues in both the Memory and Priming tasks. Importantly, results showed that individual sensitivity to internal bodily responses predicted the magnitude of the memory-based orienting of attention to bodies, shedding new light on the relationship between body awareness and visuo-spatial attention.
Giesbrecht, Barry; Sy, Jocelyn L.; Guerin, Scott A.
2012-01-01
Environmental context learned without awareness can facilitate visual processing of goal-relevant information. According to one view, the benefit of implicitly learned context relies on the neural systems involved in spatial attention and hippocampus-mediated memory. While this view has received empirical support, it contradicts traditional models of hippocampal function. The purpose of the present work was to clarify the influence of spatial context on visual search performance and on brain structures involved in memory and attention. Event-related functional magnetic resonance imaging revealed that activity in the hippocampus as well as in visual and parietal cortex was modulated by learned visual context even though participants' subjective reports and performance on a post-experiment recognition task indicated no explicit knowledge of the learned context. Moreover, the magnitude of the initial selective hippocampus response predicted the magnitude of the behavioral benefit due to context observed at the end of the experiment. The results suggest that implicit contextual learning is mediated by attention and memory and that these systems interact to support search of our environment. PMID:23099047
Visual search and emotion: how children with autism spectrum disorders scan emotional scenes.
Maccari, Lisa; Pasini, Augusto; Caroli, Emanuela; Rosa, Caterina; Marotta, Andrea; Martella, Diana; Fuentes, Luis J; Casagrande, Maria
2014-11-01
This study assessed visual search abilities, tested through the flicker task, in children diagnosed with autism spectrum disorders (ASDs). Twenty-two children diagnosed with ASD and 22 matched typically developing (TD) children were told to detect changes in objects of central interest or objects of marginal interest (MI) embedded in either emotion-laden (positive or negative) or neutral real-world pictures. The results showed that emotion-laden pictures equally interfered with performance of both ASD and TD children, slowing down reaction times compared with neutral pictures. Children with ASD were faster than TD children, particularly in detecting changes in MI objects, the most difficult condition. However, their performance was less accurate than performance of TD children just when the pictures were negative. These findings suggest that children with ASD have better visual search abilities than TD children only when the search is particularly difficult and requires strong serial search strategies. The emotional-social impairment that is usually considered as a typical feature of ASD seems to be limited to processing of negative emotional information.
Sobel, Kenith V; Puri, Amrita M; Faulkenberry, Thomas J; Dague, Taylor D
2017-03-01
The size congruity effect refers to the interaction between numerical magnitude and physical digit size in a symbolic comparison task. Though this effect is well established in the typical 2-item scenario, the mechanisms at the root of the interference remain unclear. Two competing explanations have emerged in the literature: an early interaction model and a late interaction model. In the present study, we used visual conjunction search to test competing predictions from these 2 models. Participants searched for targets that were defined by a conjunction of physical and numerical size. Some distractors shared the target's physical size, and the remaining distractors shared the target's numerical size. We held the total number of search items fixed and manipulated the ratio of the 2 distractor set sizes. The results from 3 experiments converge on the conclusion that numerical magnitude is not a guiding feature for visual search, and that physical and numerical magnitude are processed independently, which supports a late interaction model of the size congruity effect. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Extrafoveal preview benefit during free-viewing visual search in the monkey
Krishna, B. Suresh; Ipata, Anna E.; Bisley, James W.; Gottlieb, Jacqueline; Goldberg, Michael E.
2014-01-01
Previous studies have shown that subjects require less time to process a stimulus at the fovea after a saccade if they have viewed the same stimulus in the periphery immediately prior to the saccade. This extrafoveal preview benefit indicates that information about the visual form of an extrafoveally viewed stimulus can be transferred across a saccade. Here, we extend these findings by demonstrating and characterizing a similar extrafoveal preview benefit in monkeys during a free-viewing visual search task. We trained two monkeys to report the orientation of a target among distractors by releasing one of two bars with their hand; monkeys were free to move their eyes during the task. Both monkeys took less time to indicate the orientation of the target after foveating it, when the target lay closer to the fovea during the previous fixation. An extrafoveal preview benefit emerged even if there was more than one intervening saccade between the preview and the target fixation, indicating that information about target identity could be transferred across more than one saccade and could be obtained even if the search target was not the goal of the next saccade. An extrafoveal preview benefit was also found for distractor stimuli. These results aid future physiological investigations of the extrafoveal preview benefit. PMID:24403392
Bourgeois, Alexia; Neveu, Rémi; Vuilleumier, Patrik
2016-01-01
In order to behave adaptively, attention can be directed in space either voluntarily (i.e., endogenously) according to strategic goals, or involuntarily (i.e., exogenously) through reflexive capture by salient or novel events. The emotional or motivational value of stimuli can also strongly influence attentional orienting. However, little is known about how reward-related effects compete or interact with endogenous and exogenous attention mechanisms, particularly outside of awareness. Here we developed a visual search paradigm to study subliminal value-based attentional orienting. We systematically manipulated goal-directed or stimulus-driven attentional orienting and examined whether an irrelevant, but previously rewarded, stimulus could compete with both types of spatial attention during search. Critically, reward was learned without conscious awareness in a preceding phase where one among several visual symbols was consistently paired with a subliminal monetary reinforcement cue. Our results demonstrated that symbols previously associated with a monetary reward received higher attentional priority in the subsequent visual search task, even though these stimuli and reward were no longer task-relevant, and despite reward being unconsciously acquired. Thus, motivational processes operating independently of conscious awareness may provide powerful influences on mechanisms of attentional selection, which could mitigate both stimulus-driven and goal-directed shifts of attention. PMID:27483371
Attention Dysfunction Subtypes of Developmental Dyslexia
Lewandowska, Monika; Milner, Rafał; Ganc, Małgorzata; Włodarczyk, Elżbieta; Skarżyński, Henryk
2014-01-01
Background Previous studies indicate that many different aspects of attention are impaired in children diagnosed with developmental dyslexia (DD). The objective of the present study was to identify cognitive profiles of DD on the basis of attentional test performance. Material/Methods 78 children with DD (30 girls, 48 boys, mean age of 12 years ±8 months) and 32 age- and sex-matched non-dyslexic children (14 girls, 18 boys) were examined using a battery of standardized tests of reading, phonological and attentional processes (alertness, covert shift of attention, divided attention, inhibition, flexibility, vigilance, and visual search). Cluster analysis was used to identify subtypes of DD. Results Dyslexic children showed deficits in alertness, covert shift of attention, divided attention, flexibility, and visual search. Three different subtypes of DD were identified, each characterized by poorer performance on the reading, phonological awareness, and visual search tasks. Additionally, children in cluster no. 1 displayed deficits in flexibility and divided attention. In contrast to non-dyslexic children, cluster no. 2 performed poorer in tasks involving alertness, covert shift of attention, divided attention, and vigilance. Cluster no. 3 showed impaired covert shift of attention. Conclusions These results indicate different patterns of attentional impairments in dyslexic children. Remediation programs should address the individual child’s deficit profile. PMID:25387479
Pankok, Carl; Kaber, David B
2018-05-01
Existing measures of display clutter in the literature generally exhibit weak correlations with task performance, which limits their utility in safety-critical domains. A literature review led to formulation of an integrated display data- and user knowledge-driven measure of display clutter. A driving simulation experiment was conducted in which participants were asked to search 'high' and 'low' clutter displays for navigation information. Data-driven measures and subjective perceptions of clutter were collected along with patterns of visual attention allocation and driving performance responses during time periods in which participants searched the navigation display for information. The new integrated measure was more strongly correlated with driving performance than other, previously developed measures of clutter, particularly in the case of low-clutter displays. Integrating display data and user knowledge factors with patterns of visual attention allocation shows promise for measuring display clutter and correlation with task performance, particularly for low-clutter displays. Practitioner Summary: A novel measure of display clutter was formulated, accounting for display data content, user knowledge states and patterns of visual attention allocation. The measure was evaluated in terms of correlations with driver performance in a safety-critical driving simulation study. The measure exhibited stronger correlations with task performance than previously defined measures.
Reading and visual processing in Greek dyslexic children: an eye-movement study.
Hatzidaki, Anna; Gianneli, Maria; Petrakis, Eftichis; Makaronas, Nikolaos; Aslanides, Ioannis M
2011-02-01
We examined the impact of the effects of dyslexia on various processing and cognitive components (e.g., reading speed and accuracy) in a language with high phonological and orthographic consistency. Greek dyslexic children were compared with a chronological age-matched group on tasks that tested participants' phonological and orthographic awareness during reading and spelling, as well as their efficiency to detect a specific target-letter during a sequential visual search task. Dyslexic children showed impaired reading and spelling that was reflected in slow reading speed and error-prone performance, especially for non-words. Eye movement measures of text reading also provided supporting evidence for a reading deficit, with dyslexic participants producing more fixations and longer fixation duration as opposed to non-dyslexic participants. The results of the visual search task showed similar performance between the two groups, but when they were compared with the results of text reading, dyslexic participants were found to be able to process fewer stimuli (i.e., letters) at each fixation than non-dyslexics. Our findings further suggest that, although Greek dyslexics have the advantage of a consistent orthographic system which facilitates acquisition of reading and phonological awareness, they demonstrate more impaired access to orthographic forms than dyslexics of other transparent orthographies. Copyright © 2010 John Wiley & Sons, Ltd.
Long-term effects of cannabis on oculomotor function in humans.
Huestegge, L; Radach, R; Kunert, H J
2009-08-01
Cannabis is known to affect human cognitive and visuomotor skills directly after consumption. Some studies even point to rather long-lasting effects, especially after chronic tetrahydrocannabinol (THC) abuse. However, it is still unknown whether long-term effects on basic visual and oculomotor processing may exist. In the present study, the performance of 20 healthy long-term cannabis users without acute THC intoxication and 20 control subjects were examined in four basic visuomotor paradigms to search for specific long-term impairments. Subjects were asked to perform: 1) reflexive saccades to visual targets (prosaccades), including gap and overlap conditions, 2) voluntary antisaccades, 3) memory-guided saccades and 4) double-step saccades. Spatial and temporal parameters of the saccades were subsequently analysed. THC subjects exhibited a significant increase of latency in the prosaccade and antisaccade tasks, as well as prolonged saccade amplitudes in the antisaccade and memory-guided task, compared with the control subjects. The results point to substantial and specific long-term deficits in basic temporal processing of saccades and impaired visuo-spatial working memory. We suggest that these impairments are a major contributor to degraded performance of chronic users in a vital everyday task like visual search, and they might potentially also affect spatial navigation and reading.
Naturalistic distraction and driving safety in older drivers.
Aksan, Nazan; Dawson, Jeffrey D; Emerson, Jamie L; Yu, Lixi; Uc, Ergun Y; Anderson, Steven W; Rizzo, Matthew
2013-08-01
In this study, we aimed to quantify and compare performance of middle-aged and older drivers during a naturalistic distraction paradigm (visual search for roadside targets) and to predict older drivers' performance given functioning in visual, motor, and cognitive domains. Distracted driving can imperil healthy adults and may disproportionately affect the safety of older drivers with visual, motor, and cognitive decline. A total of 203 drivers, 120 healthy older (61 men and 59 women, ages 65 years and older) and 83 middle-aged drivers (38 men and 45 women, ages 40 to 64 years), participated in an on-road test in an instrumented vehicle. Outcome measures included performance in roadside target identification (traffic signs and restaurants) and concurrent driver safety. Differences in visual, motor, and cognitive functioning served as predictors. Older drivers identified fewer landmarks and drove slower but committed more safety errors than did middle-aged drivers. Greater familiarity with local roads benefited performance of middle-aged but not older drivers. Visual cognition predicted both traffic sign identification and safety errors, and executive function predicted traffic sign identification over and above vision. Older adults are susceptible to driving safety errors while distracted by common secondary visual search tasks that are inherent to driving. The findings underscore that age-related cognitive decline affects older drivers' management of driving tasks at multiple levels and can help inform the design of on-road tests and interventions for older drivers.
Visual Search Efficiency is Greater for Human Faces Compared to Animal Faces
Simpson, Elizabeth A.; Mertins, Haley L.; Yee, Krysten; Fullerton, Alison; Jakobsen, Krisztina V.
2015-01-01
The Animate Monitoring Hypothesis proposes that humans and animals were the most important categories of visual stimuli for ancestral humans to monitor, as they presented important challenges and opportunities for survival and reproduction; however, it remains unknown whether animal faces are located as efficiently as human faces. We tested this hypothesis by examining whether human, primate, and mammal faces elicit similarly efficient searches, or whether human faces are privileged. In the first three experiments, participants located a target (human, primate, or mammal face) among distractors (non-face objects). We found fixations on human faces were faster and more accurate than primate faces, even when controlling for search category specificity. A final experiment revealed that, even when task-irrelevant, human faces slowed searches for non-faces, suggesting some bottom-up processing may be responsible for the human face search efficiency advantage. PMID:24962122
To hear or not to hear: Voice processing under visual load.
Zäske, Romi; Perlich, Marie-Christin; Schweinberger, Stefan R
2016-07-01
Adaptation to female voices causes subsequent voices to be perceived as more male, and vice versa. This contrastive aftereffect disappears under spatial inattention to adaptors, suggesting that voices are not encoded automatically. According to Lavie, Hirst, de Fockert, and Viding (2004), the processing of task-irrelevant stimuli during selective attention depends on perceptual resources and working memory. Possibly due to their social significance, faces may be an exceptional domain: That is, task-irrelevant faces can escape perceptual load effects. Here we tested voice processing, to study whether voice gender aftereffects (VGAEs) depend on low or high perceptual (Exp. 1) or working memory (Exp. 2) load in a relevant visual task. Participants adapted to irrelevant voices while either searching digit displays for a target (Exp. 1) or recognizing studied digits (Exp. 2). We found that the VGAE was unaffected by perceptual load, indicating that task-irrelevant voices, like faces, can also escape perceptual-load effects. Intriguingly, the VGAE was increased under high memory load. Therefore, visual working memory load, but not general perceptual load, determines the processing of task-irrelevant voices.
Age mediation of frontoparietal activation during visual feature search.
Madden, David J; Parks, Emily L; Davis, Simon W; Diaz, Michele T; Potter, Guy G; Chou, Ying-hui; Chen, Nan-kuei; Cabeza, Roberto
2014-11-15
Activation of frontal and parietal brain regions is associated with attentional control during visual search. We used fMRI to characterize age-related differences in frontoparietal activation in a highly efficient feature search task, detection of a shape singleton. On half of the trials, a salient distractor (a color singleton) was present in the display. The hypothesis was that frontoparietal activation mediated the relation between age and attentional capture by the salient distractor. Participants were healthy, community-dwelling individuals, 21 younger adults (19-29 years of age) and 21 older adults (60-87 years of age). Top-down attention, in the form of target predictability, was associated with an improvement in search performance that was comparable for younger and older adults. The increase in search reaction time (RT) associated with the salient distractor (attentional capture), standardized to correct for generalized age-related slowing, was greater for older adults than for younger adults. On trials with a color singleton distractor, search RT increased as a function of increasing activation in frontal regions, for both age groups combined, suggesting increased task difficulty. Mediational analyses disconfirmed the hypothesized model, in which frontal activation mediated the age-related increase in attentional capture, but supported an alternative model in which age was a mediator of the relation between frontal activation and capture. Copyright © 2014 Elsevier Inc. All rights reserved.
The psychological four-color mapping problem.
Francis, Gregory; Bias, Keri; Shive, Joshua
2010-06-01
Mathematicians have proven that four colors are sufficient to color 2-D maps so that no neighboring regions share the same color. Here we consider the psychological 4-color problem: identifying which 4 colors should be used to make a map easy to use. We build a model of visual search for this design task and demonstrate how to apply it to the task of identifying the optimal colors for a map. We parameterized the model with a set of 7 colors using a visual search experiment in which human participants found a target region on a small map. We then used the model to predict search times for new maps and identified the color assignments that minimize or maximize average search time. The differences between these maps were predicted to be substantial. The model was then tested with a larger set of 31 colors on a map of English counties under conditions in which participants might memorize some aspects of the map. Empirical tests of the model showed that an optimally colored version of this map is searched 15% faster than the correspondingly worst-colored map. Thus, the color assignment seems to affect search times in a way predicted by the model, and this effect persists even when participants might use other sources of knowledge about target location. PsycINFO Database Record (c) 2010 APA, all rights reserved.
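The optimization loop described in this abstract can be sketched as a brute-force search over color assignments; the palette, the region adjacency, and the similarity-based time model below are all illustrative assumptions, not the authors' fitted search model:

```python
# Brute-force search for best/worst map colorings under a toy model in which
# search is assumed to be slower when adjacent regions carry similar colors.
from itertools import permutations

colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)]  # assumed RGB palette
adjacent = [(0, 1), (1, 2), (2, 3), (0, 2)]  # assumed 4-region map layout

def similarity(c1, c2):
    """Inverse Euclidean distance in RGB space (illustrative metric)."""
    d = sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5
    return 1.0 / (1.0 + d)

def predicted_search_time(assignment):
    """Toy cost: summed similarity of colors placed on adjacent regions."""
    return sum(similarity(assignment[i], assignment[j]) for i, j in adjacent)

best = min(permutations(colors), key=predicted_search_time)
worst = max(permutations(colors), key=predicted_search_time)
```

Even the paper's 7-color case is only 7! = 5040 assignments per map, so exhaustive enumeration remains trivial; a realistic version would replace the RGB metric with a fitted, perceptually grounded color-similarity function.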
Action Intentions Modulate Allocation of Visual Attention: Electrophysiological Evidence
Wykowska, Agnieszka; Schubö, Anna
2012-01-01
In line with the Theory of Event Coding (Hommel et al., 2001), action planning has been shown to affect perceptual processing – an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Hommel, 2010). This paper investigates the electrophysiological correlates of action-related modulations of selection mechanisms in visual perception. A paradigm combining a visual search task for size and luminance targets with a movement task (grasping or pointing) was introduced, and the EEG was recorded while participants were performing the tasks. The results showed that the behavioral congruency effects, i.e., better performance in congruent (relative to incongruent) action-perception trials, were reflected by a modulation of the P1 component as well as the N2pc (an ERP marker of spatial attention). These results support the argument that action planning modulates early perceptual processing and attention mechanisms. PMID:23060841
Perceptual learning effect on decision and confidence thresholds.
Solovey, Guillermo; Shalom, Diego; Pérez-Schuster, Verónica; Sigman, Mariano
2016-10-01
Practice can enhance perceptual sensitivity, a well-known phenomenon called perceptual learning. However, the effect of practice on subjective perception has received little attention. We approach this problem from a visual psychophysics and computational modeling perspective. In a sequence of visual search experiments, subjects significantly increased their ability to detect a "trained target". Before and after training, subjects performed two psychophysical protocols that parametrically vary the visibility of the "trained target": an attentional blink task and a visual masking task. We found that confidence increased after learning only in the attentional blink task. Despite large differences in some observables and task settings, we identify common mechanisms for decision-making and confidence. Specifically, our behavioral results and computational model suggest that perceptual ability is independent of processing time, indicating that changes in early cortical representations are effective, and that learning changes decision criteria to convey choice and confidence. Copyright © 2016 Elsevier Inc. All rights reserved.
Foulsham, Tom; Alan, Rana; Kingstone, Alan
2011-10-01
Previous research has demonstrated that search and memory for items within natural scenes can be disrupted by "scrambling" the images. In the present study, we asked how disrupting the structure of a scene through scrambling might affect the control of eye fixations in either a search task (Experiment 1) or a memory task (Experiment 2). We found that the search decrement in scrambled scenes was associated with poorer guidance of the eyes to the target. Across both tasks, scrambling led to shorter fixations and longer saccades, and more distributed, less selective overt attention, perhaps corresponding to an ambient mode of processing. These results confirm that scene structure has widespread effects on the guidance of eye movements in scenes. Furthermore, the results demonstrate the trade-off between scene structure and visual saliency, with saliency having more of an effect on eye guidance in scrambled scenes.
Sex-related differences in attention and memory.
Solianik, Rima; Brazaitis, Marius; Skurvydas, Albertas
2016-01-01
Sex differences and similarities in cognitive abilities are a continuing topic of major interest. Moreover, the influence of trends over time and the possible effects of sex steroids and assessment time on cognition have increased the need to re-evaluate differences between men and women. Therefore, the aim of this study was to compare cognitive performance between men and women in a strongly controlled experiment. In total, 28 men and 25 women were investigated. Body temperature and heart rate were assessed. A cognitive test battery was used to assess attention (visual search, unpredictable task switching, and complex visual search and predictable task switching tests) and memory (forced visual memory, forward digit span and free recall tests). The differences in heart rate and body temperature between men and women were not significant. There were no differences in the mean values of attention and memory abilities between men and women. Coefficients of variation of unpredictable task switching response and forward digit span were lower (P<0.05) in men. Coefficients of variation correlated positively (P<0.05) with incorrect responses in the attention task and negatively (P<0.05) with correct answers in the memory task. The current study showed no sex differences in the mean values of cognition, whereas higher intra-individual variability of short-term memory and attention switching was identified in women, indicating lower performance on these cognitive abilities. Copyright © 2016 The Lithuanian University of Health Sciences. Production and hosting by Elsevier Urban & Partner Sp. z o.o. All rights reserved.
Exposure to Organic Solvents Used in Dry Cleaning Reduces Low and High Level Visual Function
Jiménez Barbosa, Ingrid Astrid
2015-01-01
Purpose To investigate whether exposure to occupational levels of organic solvents in the dry cleaning industry is associated with neurotoxic symptoms and visual deficits in the perception of basic visual features such as luminance contrast and colour, higher level processing of global motion and form (Experiment 1), and cognitive function as measured in a visual search task (Experiment 2). Methods The Q16 neurotoxic questionnaire, a commonly used measure of neurotoxicity (by the World Health Organization), was administered to assess the neurotoxic status of a group of 33 dry cleaners exposed to occupational levels of organic solvents (OS) and 35 age-matched non dry-cleaners who had never worked in the dry cleaning industry. In Experiment 1, to assess visual function, contrast sensitivity, colour/hue discrimination (Munsell Hue 100 test), global motion and form thresholds were assessed using computerised psychophysical tests. Sensitivity to global motion or form structure was quantified by varying the pattern coherence of global dot motion (GDM) and Glass pattern (oriented dot pairs) respectively (i.e., the percentage of dots/dot pairs that contribute to the perception of global structure). In Experiment 2, a letter visual-search task was used to measure reaction times (as a function of the number of elements: 4, 8, 16, 32, 64 and 100) in both parallel and serial search conditions. Results Dry cleaners exposed to organic solvents had significantly higher scores on the Q16 compared to non dry-cleaners indicating that dry cleaners experienced more neurotoxic symptoms on average. The contrast sensitivity function for dry cleaners was significantly lower at all spatial frequencies relative to non dry-cleaners, which is consistent with previous studies. Poorer colour discrimination performance was also noted in dry cleaners than non dry-cleaners, particularly along the blue/yellow axis. 
In a new finding, we report that global form and motion thresholds for dry cleaners were also significantly higher, almost double those obtained from non dry-cleaners. However, reaction time performance on both parallel and serial visual search did not differ between dry cleaners and non dry-cleaners. Conclusions Exposure to occupational levels of organic solvents is associated with neurotoxicity, which is in turn associated with both low level visual deficits (such as the perception of contrast and discrimination of colour) and high level visual deficits (such as the perception of global form and motion), but not with visual search performance. The latter finding indicates that the deficits in visual function are unlikely to be due to changes in general cognitive performance. PMID:25933026
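The parallel/serial contrast measured in Experiment 2 is conventionally summarized by the slope of reaction time against set size: near-flat for parallel ("pop-out") search and roughly linear for serial search. A minimal sketch of that analysis follows; the RT values are idealized illustrations (assuming a ~25 ms/item serial cost), not the study's data.

```python
def rt_slope(set_sizes, rts):
    """Least-squares slope of reaction time (ms) vs. display set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

set_sizes = [4, 8, 16, 32, 64, 100]           # element counts used in Experiment 2
parallel_rt = [520, 522, 525, 528, 531, 534]  # idealized: nearly flat
serial_rt = [540, 640, 840, 1240, 2040, 2940] # idealized: 25 ms added per item

# A serial slope far above the near-zero parallel slope is the classic
# signature distinguishing the two search modes; comparing the slopes of
# two groups (e.g., dry cleaners vs. controls) tests for a search deficit.
```

Comparing group slopes rather than raw RTs separates the per-item search cost from overall response speed, which is why a null slope difference supports the conclusion that general cognitive performance was unaffected.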
Inhibition of return in the covert deployment of attention: evidence from human electrophysiology.
McDonald, John J; Hickey, Clayton; Green, Jessica J; Whitman, Jennifer C
2009-04-01
People are slow to react to objects that appear at recently attended locations. This delay, known as inhibition of return (IOR), is believed to aid search of the visual environment by discouraging inspection of recently inspected objects. However, after two decades of research, there is no evidence that IOR reflects an inhibition in the covert deployment of attention. Here, observers participated in a modified visual-search task that enabled us to measure IOR and an ERP component called the posterior contralateral N2 (N2pc) that reflects the covert deployment of attention. The N2pc was smaller when a target appeared at a recently attended location than when it appeared at a recently unattended location. This reduction was due to modulation of neural processing in the visual cortex and the right parietal lobe. Importantly, there was no evidence for a delay in the N2pc. We conclude that in our task, the inhibitory processes underlying IOR reduce the probability of shifting attention to recently attended locations but do not delay the covert deployment of attention itself.
[Eye movement study in multiple object search process].
Xu, Zhaofang; Liu, Zhongqi; Wang, Xingwei; Zhang, Xin
2017-04-01
The aim of this study was to investigate the regulation of search time and the characteristics of eye movement behavior in multi-target visual search. The experimental task was implemented with computer programming and presented characters on a 24-inch computer display. The subjects were asked to search for three targets among the characters. The three target characters within a group were highly similar to one another, while the degree of similarity between target characters and distraction characters differed across groups. We recorded search time and eye movement data throughout the experiment. The eye movement data showed that the number of fixation points was large when the target characters and distraction characters were similar. The subjects exhibited three visual search patterns: parallel search, serial search, and parallel-serial search. The last pattern yielded the best search performance of the three, that is, subjects who used the parallel-serial search pattern took less time to find the targets. The order in which the targets were presented significantly affected search performance, and the degree of similarity between target characters and distraction characters also affected search performance.
Contrasting vertical and horizontal representations of affect in emotional visual search.
Damjanovic, Ljubica; Santiago, Julio
2016-02-01
Independent lines of evidence suggest that the representation of emotional evaluation recruits both vertical and horizontal spatial mappings. These two spatial mappings differ in their experiential origins and their productivity, and available data suggest that they differ in their saliency. Yet, no study has so far compared their relative strength in an attentional orienting reaction time task that affords the simultaneous manifestation of both types of mapping. Here, we investigated this question using a visual search task with emotional faces. We presented angry and happy face targets and neutral distracter faces in top, bottom, left, and right locations on the computer screen. Conceptual congruency effects were observed along the vertical dimension supporting the 'up = good' metaphor, but not along the horizontal dimension. This asymmetrical processing pattern was observed when faces were presented in a cropped (Experiment 1) and whole (Experiment 2) format. These findings suggest that the 'up = good' metaphor is more salient and readily activated than the 'right = good' metaphor, and that the former outcompetes the latter when the task context affords the simultaneous activation of both mappings.
How Attention Affects Spatial Resolution
Carrasco, Marisa; Barbot, Antoine
2015-01-01
We summarize and discuss a series of psychophysical studies on the effects of spatial covert attention on spatial resolution, our ability to discriminate fine patterns. Heightened resolution is beneficial in most, but not all, visual tasks. We show how endogenous attention (voluntary, goal driven) and exogenous attention (involuntary, stimulus driven) affect performance on a variety of tasks mediated by spatial resolution, such as visual search, crowding, acuity, and texture segmentation. Exogenous attention is an automatic mechanism that increases resolution regardless of whether it helps or hinders performance. In contrast, endogenous attention flexibly adjusts resolution to optimize performance according to task demands. We illustrate how psychophysical studies can reveal the underlying mechanisms of these effects and allow us to draw linking hypotheses with known neurophysiological effects of attention. PMID:25948640
Woodman, Geoffrey F.; Luck, Steven J.
2007-01-01
In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by requiring observers to perform a visual search task while concurrently maintaining object representations in visual working memory. The hypothesis that working memory activation produces a simple but uncontrollable bias signal leads to the prediction that items matching the contents of working memory will automatically capture attention. However, no evidence for automatic attentional capture was obtained; instead, the participants avoided attending to these items. Thus, the contents of working memory can be used in a flexible manner for facilitation or inhibition of processing. PMID:17469973