Sample records for visual search patterns

  1. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.

    PubMed

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J

    2017-08-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. Databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, and did not find a significant effect on perceptual performance. Eye tracking literature in radiology indicates several search patterns are related to high levels of expertise, but teaching novices to search as an expert may not be effective. Experimental research is needed to find out which search strategies can improve image perception in learners.

  2. Emotional Devaluation of Distracting Patterns and Faces: A Consequence of Attentional Inhibition during Visual Search?

    ERIC Educational Resources Information Center

    Raymond, Jane E.; Fenske, Mark J.; Westoby, Nikki

    2005-01-01

    Visual search has been studied extensively, yet little is known about how its constituent processes affect subsequent emotional evaluation of searched-for and searched-through items. In 3 experiments, the authors asked observers to locate a colored pattern or tinted face in an array of other patterns or faces. Shortly thereafter, either the target…

  3. Temporal stability of visual search-driven biometrics

    NASA Astrophysics Data System (ADS)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2015-03-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, temporally stable personalized fingerprint of perceptual organization.
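
    To make the method above concrete, here is a minimal sketch, assuming fixation coordinates as the HMM observations and using the hmmlearn library (the paper does not name its HMM implementation or observation model): one Gaussian HMM is fit per participant, and a test scanpath is attributed to whichever participant's model scores it highest.

    ```python
    # Hypothetical sketch of HMM-based identification from gaze data; the choice of
    # hmmlearn, a Gaussian observation model, and 3 hidden states are assumptions
    # made only for illustration.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def fit_subject_model(scanpaths, n_states=3):
        """Fit one HMM per subject on that subject's fixation sequences."""
        X = np.vstack(scanpaths)                  # all fixations, shape (N, 2)
        lengths = [len(s) for s in scanpaths]     # trial boundaries
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
        model.fit(X, lengths)
        return model

    def identify(models, scanpath):
        """Return the subject whose model assigns the test scanpath the highest likelihood."""
        scores = {subj: m.score(scanpath) for subj, m in models.items()}
        return max(scores, key=scores.get)

    # Usage with synthetic data standing in for real eye-tracking recordings.
    rng = np.random.default_rng(0)
    train = {s: [rng.normal(s, 1.0, size=(40, 2)) for _ in range(10)] for s in range(4)}
    models = {s: fit_subject_model(paths) for s, paths in train.items()}
    test = rng.normal(2, 1.0, size=(40, 2))
    print("identified as subject", identify(models, test))
    ```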

  4. Visual search among items of different salience: removal of visual attention mimics a lesion in extrastriate area V4.

    PubMed

    Braun, J

    1994-02-01

    In more than one respect, visual search for the most salient or the least salient item in a display are different kinds of visual tasks. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in search for the least salient item than in search for the most salient item. As a result, the two types of visual search presented comparable perceptual difficulty, as judged by psychophysical measures of performance, effective stimulus contrast, and stability of decision criterion. To investigate the role of attention in the two types of search, observers attempted to carry out a letter discrimination and a search task concurrently. To discriminate the letters, observers had to direct visual attention at the center of the display and, thus, leave unattended the periphery, which contained target and distractors of the search task. In this situation, visual search for the least salient item was severely impaired while visual search for the most salient item was only moderately affected, demonstrating a fundamental difference with respect to visual attention. A qualitatively identical pattern of results was encountered by Schiller and Lee (1991), who used similar visual search tasks to assess the effect of a lesion in extrastriate area V4 of the macaque.

  5. Visual search for facial expressions of emotions: a comparison of dynamic and static faces.

    PubMed

    Horstmann, Gernot; Ansorge, Ulrich

    2009-02-01

    A number of past studies have used the visual search paradigm to examine whether certain aspects of emotional faces are processed preattentively and can thus be used to guide attention. All these studies presented static depictions of facial prototypes. Emotional expressions conveyed by the movement patterns of the face have never been examined for their preattentive effect. The present study presented for the first time dynamic facial expressions in a visual search paradigm. Experiment 1 revealed efficient search for a dynamic angry face among dynamic friendly faces, but inefficient search in a control condition with static faces. Experiments 2 to 4 suggested that this pattern of results is due to a stronger movement signal in the angry than in the friendly face: No (strong) advantage of dynamic over static faces is revealed when the degree of movement is controlled. These results show that dynamic information can be efficiently utilized in visual search for facial expressions. However, these results do not generally support the hypothesis that emotion-specific movement patterns are always preattentively discriminated. (c) 2009 APA, all rights reserved

  6. Visual Search Performance in Patients with Vision Impairment: A Systematic Review.

    PubMed

    Senger, Cassia; Margarido, Maria Rita Rodrigues Alves; De Moraes, Carlos Gustavo; De Fendi, Ligia Issa; Messias, André; Paula, Jayter Silva

    2017-11-01

    Patients with visual impairment constantly face challenges in achieving an independent and productive life, which depends upon both good visual discrimination and search capacities. Given that visual search is a critical skill for several daily tasks and could be used as an index of the overall visual function, we investigated the relationship between vision impairment and visual search performance. A comprehensive search was undertaken using the electronic PubMed, EMBASE, LILACS, and Cochrane databases from January 1980 to December 2016, applying the following terms: "visual search", "visual search performance", "visual impairment", "visual exploration", "visual field", "hemianopia", "search time", "vision lost", "visual loss", and "low vision". Two hundred seventy-six studies from 12,059 electronic database files were selected, and 40 of them were included in this review. Studies included participants of all ages, both sexes, and the sample sizes ranged from 5 to 199 participants. Visual impairment was associated with worse visual search performance in several ophthalmologic conditions, which were either artificially induced or related to specific eye and neurological diseases. This systematic review details all the described circumstances interfering with visual search tasks, highlights the need for developing technical standards, and outlines patterns for diagnosis and therapy using visual search capabilities.

  7. Idiosyncratic characteristics of saccadic eye movements when viewing different visual environments.

    PubMed

    Andrews, T J; Coppola, D M

    1999-08-01

    Eye position was recorded in different viewing conditions to assess whether the temporal and spatial characteristics of saccadic eye movements in different individuals are idiosyncratic. Our aim was to determine the degree to which oculomotor control is based on endogenous factors. A total of 15 naive subjects viewed five visual environments: (1) The absence of visual stimulation (i.e. a dark room); (2) a repetitive visual environment (i.e. simple textured patterns); (3) a complex natural scene; (4) a visual search task; and (5) reading text. Although differences in visual environment had significant effects on eye movements, idiosyncrasies were also apparent. For example, the mean fixation duration and size of an individual's saccadic eye movements when passively viewing a complex natural scene covaried significantly with those same parameters in the absence of visual stimulation and in a repetitive visual environment. In contrast, an individual's spatio-temporal characteristics of eye movements during active tasks such as reading text or visual search covaried together, but did not correlate with the pattern of eye movements detected when viewing a natural scene, simple patterns or in the dark. These idiosyncratic patterns of eye movements in normal viewing reveal an endogenous influence on oculomotor control. The independent covariance of eye movements during different visual tasks shows that saccadic eye movements during active tasks like reading or visual search differ from those engaged during the passive inspection of visual scenes.

  8. Contextual cueing impairment in patients with age-related macular degeneration.

    PubMed

    Geringswald, Franziska; Herbik, Anne; Hoffmann, Michael B; Pollmann, Stefan

    2013-09-12

    Visual attention can be guided by past experience of regularities in our visual environment. In the contextual cueing paradigm, incidental learning of repeated distractor configurations speeds up search times compared to random search arrays. Concomitantly, fewer fixations and more direct scan paths indicate more efficient visual exploration in repeated search arrays. In previous work, we found that simulating a central scotoma in healthy observers eliminated this search facilitation. Here, we investigated contextual cueing in patients with age-related macular degeneration (AMD) who suffer from impaired foveal vision. AMD patients performed visual search using only their more severely impaired eye (n = 13) as well as under binocular viewing (n = 16). Normal-sighted controls developed a significant contextual cueing effect. In comparison, patients showed only a small nonsignificant advantage for repeated displays when searching with their worse eye. When searching binocularly, they profited from contextual cues, but still less than controls. Number of fixations and scan pattern ratios showed a comparable pattern as search times. Moreover, contextual cueing was significantly correlated with acuity in monocular search. Thus, foveal vision loss may lead to impaired guidance of attention by contextual memory cues.

  9. Temporal stability of visual search-driven biometrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Hong-Jun; Carmichael, Tandy; Tourassi, Georgia

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, fairly stable personalized fingerprint of perceptual organization.

  10. [Eye movement study in multiple object search process].

    PubMed

    Xu, Zhaofang; Liu, Zhongqi; Wang, Xingwei; Zhang, Xin

    2017-04-01

    The aim of this study was to investigate search times for multiple targets and the characteristics of eye movement behavior during multi-objective visual search. The experimental task was implemented in software and presented characters on a 24-inch computer display. The subjects were asked to search for three targets among the characters. The three target characters within a group were highly similar to one another, while the degree of similarity between target characters and distraction characters differed across groups. We recorded search times and eye movement data throughout the experiment. The eye movement data showed that the number of fixation points was large when the target characters and distraction characters were similar. The subjects exhibited three kinds of visual search patterns: parallel search, serial search, and parallel-serial search. The last pattern yielded the best search performance of the three; that is, subjects who used the parallel-serial search pattern took less time to find the targets. The order in which the targets were presented significantly affected search performance, and the degree of similarity between target characters and distraction characters also affected search performance.

  11. Age differences in visual search for compound patterns: long- versus short-range grouping.

    PubMed

    Burack, J A; Enns, J T; Iarocci, G; Randolph, B

    2000-11-01

    Visual search for compound patterns was examined in observers aged 6, 8, 10, and 22 years. The main question was whether age-related improvement in search rate (response time slope over number of items) was different for patterns defined by short- versus long-range spatial relations. Perceptual access to each type of relation was varied by using elements of same contrast (easy to access) or mixed contrast (hard to access). The results showed large improvements with age in search rate for long-range targets; search rate for short-range targets was fairly constant across age. This pattern held regardless of whether perceptual access to a target was easy or hard, supporting the hypothesis that different processes are involved in perceptual grouping at these two levels. The results also point to important links between ontogenic and microgenic change in perception (H. Werner, 1948, 1957).

  12. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  13. Modeling the role of parallel processing in visual search.

    PubMed

    Cave, K R; Wolfe, J M

    1990-04-01

    Treisman's Feature Integration Theory and Julesz's Texton Theory explain many aspects of visual search. However, these theories require that parallel processing mechanisms not be used in many visual searches for which they would be useful, and they imply that visual processing should be much slower than it is. Most importantly, they cannot account for recent data showing that some subjects can perform some conjunction searches very efficiently. Feature Integration Theory can be modified so that it accounts for these data and helps to answer these questions. In this new theory, which we call Guided Search, the parallel stage guides the serial stage as it chooses display elements to process. A computer simulation of Guided Search produces the same general patterns as human subjects in a number of different types of visual search.
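
    As an illustration of the Guided Search idea summarized above, here is a toy sketch (not Cave and Wolfe's actual simulation; all parameter names and values are assumptions): the parallel stage assigns each item a noisy activation reflecting how well it matches the target, and the serial stage inspects items in decreasing order of activation until the target is reached.

    ```python
    # Toy Guided Search sketch: a parallel stage produces activations, a serial
    # stage examines items from most to least active. Parameters are illustrative.
    import random

    def guided_search_trial(n_items, target_match=1.0, distractor_match=0.4, noise=0.3):
        # Parallel stage: one activation per item; item 0 is the target.
        activations = []
        for i in range(n_items):
            top_down = target_match if i == 0 else distractor_match
            activations.append(top_down + random.gauss(0, noise))
        # Serial stage: inspect items from highest to lowest activation.
        order = sorted(range(n_items), key=lambda i: activations[i], reverse=True)
        return order.index(0) + 1   # items inspected before the target is found

    # With strong guidance, the average number of inspections grows only weakly
    # with set size, mimicking efficient conjunction search.
    for set_size in (4, 8, 16, 32):
        trials = [guided_search_trial(set_size) for _ in range(2000)]
        print(set_size, round(sum(trials) / len(trials), 2))
    ```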

  14. How Visual Search Relates to Visual Diagnostic Performance: A Narrative Systematic Review of Eye-Tracking Research in Radiology

    ERIC Educational Resources Information Center

    van der Gijp, A.; Ravesloot, C. J.; Jarodzka, H.; van der Schaaf, M. F.; van der Schaaf, I. C.; van Schaik, J. P.; ten Cate, Th. J.

    2017-01-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology…

  15. Modulation of neuronal responses during covert search for visual feature conjunctions

    PubMed Central

    Buracas, Giedrius T.; Albright, Thomas D.

    2009-01-01

    While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for motion but also for color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions. PMID:19805385

  16. Modulation of neuronal responses during covert search for visual feature conjunctions.

    PubMed

    Buracas, Giedrius T; Albright, Thomas D

    2009-09-29

    While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for motion but also for color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions.

  17. Evidence of different underlying processes in pattern recall and decision-making.

    PubMed

    Gorman, Adam D; Abernethy, Bruce; Farrow, Damian

    2015-01-01

    The visual search characteristics of expert and novice basketball players were recorded during pattern recall and decision-making tasks to determine whether the two tasks shared common visual-perceptual processing strategies. The order in which participants entered the pattern elements in the recall task was also analysed to further examine the nature of the visual-perceptual strategies and the relative emphasis placed upon particular pattern features. The experts demonstrated superior performance across the recall and decision-making tasks [see also Gorman, A. D., Abernethy, B., & Farrow, D. (2012). Classical pattern recall tests and the prospective nature of expert performance. The Quarterly Journal of Experimental Psychology, 65, 1151-1160; Gorman, A. D., Abernethy, B., & Farrow, D. (2013a). Is the relationship between pattern recall and decision-making influenced by anticipatory recall? The Quarterly Journal of Experimental Psychology, 66, 2219-2236], but a number of significant differences in the visual search data highlighted disparities in the processing strategies, suggesting that recall skill may utilize different underlying visual-perceptual processes than those required for accurate decision-making performance in the natural setting. Performance on the recall task was characterized by a proximal-to-distal order of entry of the pattern elements with participants tending to enter the players located closest to the ball carrier earlier than those located more distal to the ball carrier. The results provide further evidence of the underlying perceptual processes employed by experts when extracting visual information from complex and dynamic patterns.

  18. Horizontal visual search in a large field by patients with unilateral spatial neglect.

    PubMed

    Nakatani, Ken; Notoya, Masako; Sunahara, Nobuyuki; Takahashi, Shusuke; Inoue, Katsumi

    2013-06-01

    In this study, we investigated the horizontal visual search ability and pattern of horizontal visual search in a large space performed by patients with unilateral spatial neglect (USN). Subjects included nine patients with right hemisphere damage caused by cerebrovascular disease showing left USN, nine patients with right hemisphere damage but no USN, and six healthy individuals with no history of brain damage who were age-matched to the groups with right hemisphere brain damage. The number of visual search tasks accomplished was recorded in the first experiment. Neck rotation angle was continuously measured during the task and quantitative data of the measurements were collected. There was a strong correlation between the number of visual search tasks accomplished and the total Behavioral Inattention Test Conventional Subtest (BITC) score in subjects with right hemisphere damage. In both USN and control groups, the head position during the visual search task showed a balanced bell-shaped distribution from the central point on the field to the left and right sides. Our results indicate that compensatory strategies, including cervical rotation, may improve visual search capability and achieve balance on the neglected side. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. A novel computational model to probe visual search deficits during motor performance

    PubMed Central

    Singh, Tarkeshwar; Fridriksson, Julius; Perry, Christopher M.; Tryon, Sarah C.; Ross, Angela; Fritz, Stacy

    2016-01-01

    Successful execution of many motor skills relies on well-organized visual search (voluntary eye movements that actively scan the environment for task-relevant information). Although impairments of visual search that result from brain injuries are linked to diminished motor performance, the neural processes that guide visual search within this context remain largely unknown. The first objective of this study was to examine how visual search in healthy adults and stroke survivors is used to guide hand movements during the Trail Making Test (TMT), a neuropsychological task that is a strong predictor of visuomotor and cognitive deficits. Our second objective was to develop a novel computational model to investigate combinatorial interactions between three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing). We predicted that stroke survivors would exhibit deficits in integrating the three underlying processes, resulting in deteriorated overall task performance. We found that normal TMT performance is associated with patterns of visual search that primarily rely on spatial planning and/or working memory (but not peripheral visual processing). Our computational model suggested that abnormal TMT performance following stroke is associated with impairments of visual search that are characterized by deficits integrating spatial planning and working memory. This innovative methodology provides a novel framework for studying how the neural processes underlying visual search interact combinatorially to guide motor performance. NEW & NOTEWORTHY Visual search has traditionally been studied in cognitive and perceptual paradigms, but little is known about how it contributes to visuomotor performance. We have developed a novel computational model to examine how three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing) contribute to visual search during a visuomotor task. We show that deficits integrating spatial planning and working memory underlie abnormal performance in stroke survivors with frontoparietal damage. PMID:27733596
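
    The following toy sketch (not the authors' model; the weights, the exponential visibility fall-off, and the revisit penalty are assumptions made only for illustration) shows one way the three component processes named above could be combined to choose the next item in a TMT-like array.

    ```python
    # Toy combination of spatial planning, working memory, and peripheral visual
    # processing for selecting the next item; all constants are hypothetical.
    import math, random

    def next_item(gaze, items, visited, w_plan=1.0, w_mem=1.0, w_periph=0.5):
        """Score items and return the index of the best next target."""
        best, best_score = None, -math.inf
        for idx, (x, y) in enumerate(items):
            dist = math.hypot(x - gaze[0], y - gaze[1])
            planning = -dist                                  # prefer nearby items
            memory = -10.0 if idx in visited else 0.0         # suppress revisits
            visibility = math.exp(-dist / 10.0)               # decays with eccentricity
            score = w_plan * planning + w_mem * memory + w_periph * visibility
            score += random.gauss(0, 0.1)                     # decision noise
            if score > best_score:
                best, best_score = idx, score
        return best

    # Usage: walk through a random array of 10 items as a crude stand-in for TMT.
    random.seed(1)
    items = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(10)]
    gaze, visited, path = (25.0, 25.0), set(), []
    for _ in range(10):
        i = next_item(gaze, items, visited)
        visited.add(i); path.append(i); gaze = items[i]
    print("visit order:", path)
    ```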

  20. Chess players' eye movements reveal rapid recognition of complex visual patterns: Evidence from a chess-related visual search task.

    PubMed

    Sheridan, Heather; Reingold, Eyal M

    2017-03-01

    To explore the perceptual component of chess expertise, we monitored the eye movements of expert and novice chess players during a chess-related visual search task that tested anecdotal reports that a key differentiator of chess skill is the ability to visualize the complex moves of the knight piece. Specifically, chess players viewed an array of four minimized chessboards, and they rapidly searched for the target board that allowed a knight piece to reach a target square in three moves. On each trial, there was only one target board (i.e., the "Yes" board), and for the remaining "lure" boards, the knight's path was blocked on either the first move (the "Easy No" board) or the second move (i.e., "the Difficult No" board). As evidence that chess experts can rapidly differentiate complex chess-related visual patterns, the experts (but not the novices) showed longer first-fixation durations on the "Yes" board relative to the "Difficult No" board. Moreover, as hypothesized, the task strongly differentiated chess skill: Reaction times were more than four times faster for the experts relative to novices, and reaction times were correlated with within-group measures of expertise (i.e., official chess ratings, number of hours of practice). These results indicate that a key component of chess expertise is the ability to rapidly recognize complex visual patterns.

  1. Words, shape, visual search and visual working memory in 3-year-old children.

    PubMed

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.

  2. Gaze Patterns of Gross Anatomy Students Change with Classroom Learning

    ERIC Educational Resources Information Center

    Zumwalt, Ann C.; Iyer, Arjun; Ghebremichael, Abenet; Frustace, Bruno S.; Flannery, Sean

    2015-01-01

    Numerous studies have documented that experts exhibit more efficient gaze patterns than those of less experienced individuals. In visual search tasks, experts use fewer, longer fixations to fixate for relatively longer on salient regions of the visual field while less experienced observers spend more time examining nonsalient regions. This study…

  3. Global Image Dissimilarity in Macaque Inferotemporal Cortex Predicts Human Visual Search Efficiency

    PubMed Central

    Sripati, Arun P.; Olson, Carl R.

    2010-01-01

    Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity – the degree of overlap between the coarse footprints of a pair of images – largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054

  4. Visual search for object categories is predicted by the representational architecture of high-level visual cortex

    PubMed Central

    Alvarez, George A.; Nakayama, Ken; Konkle, Talia

    2016-01-01

    Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system. NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing. PMID:27832600
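
    A hedged sketch of the representational-similarity logic described above, using simulated stand-in data: pairwise neural-pattern dissimilarities between categories are correlated with pairwise search times. The array shapes and the choice of Spearman correlation are assumptions for illustration, not the authors' pipeline.

    ```python
    # Correlate a neural representational dissimilarity matrix with search RTs.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_categories, n_voxels = 8, 100

    # Stand-in neural responses: one pattern per object category.
    neural_patterns = rng.normal(size=(n_categories, n_voxels))

    # Neural dissimilarity: 1 - Pearson r for each category pair (28 pairs).
    neural_rdm = pdist(neural_patterns, metric="correlation")

    # Stand-in behavior: mean search RT per target/distractor pair, simulated here
    # to be partly driven by neural dissimilarity (more dissimilar -> faster search).
    search_rt = 1.5 - 0.5 * neural_rdm + rng.normal(0, 0.05, size=neural_rdm.shape)

    rho, p = spearmanr(neural_rdm, search_rt)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
    ```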

  5. Speakers of Different Languages Process the Visual World Differently

    PubMed Central

    Chabal, Sarah; Marian, Viorica

    2015-01-01

    Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct linguistic input, showing that language is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. PMID:26030171

  6. C-State: an interactive web app for simultaneous multi-gene visualization and comparative epigenetic pattern search.

    PubMed

    Sowpati, Divya Tej; Srivastava, Surabhi; Dhawan, Jyotsna; Mishra, Rakesh K

    2017-09-13

    Comparative epigenomic analysis across multiple genes presents a bottleneck for bench biologists working with NGS data. Despite the development of standardized peak analysis algorithms, the identification of novel epigenetic patterns and their visualization across gene subsets remains a challenge. We developed a fast and interactive web app, C-State (Chromatin-State), to query and plot chromatin landscapes across multiple loci and cell types. C-State has an interactive, JavaScript-based graphical user interface and runs locally in modern web browsers that are pre-installed on all computers, thus eliminating the need for cumbersome data transfer, pre-processing and prior programming knowledge. C-State is unique in its ability to extract and analyze multi-gene epigenetic information. It allows for powerful GUI-based pattern searching and visualization. We include a case study to demonstrate its potential for identifying user-defined epigenetic trends in the context of gene expression profiles.

  7. Use of Cognitive and Metacognitive Strategies in Online Search: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Zhou, Mingming; Ren, Jing

    2016-01-01

    This study used eye-tracking technology to track students' eye movements while searching information on the web. The research question guiding this study was "Do students with different search performance levels have different visual attention distributions while searching information online? If yes, what are the patterns for high and low…

  8. Configural learning in contextual cuing of visual search.

    PubMed

    Beesley, Tom; Vadillo, Miguel A; Pearson, Daniel; Shanks, David R

    2016-08-01

    Two experiments were conducted to explore the role of configural representations in contextual cuing of visual search. Repeating patterns of distractors (contexts) were trained incidentally as predictive of the target location. Training participants with repeating contexts of consistent configurations led to stronger contextual cuing than when participants were trained with contexts of inconsistent configurations. Computational simulations with an elemental associative learning model of contextual cuing demonstrated that purely elemental representations could not account for the results. However, a configural model of associative learning was able to simulate the ordinal pattern of data. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
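
    The ordinal pattern reported above can be illustrated with a toy associative-learning sketch (not the authors' simulations; the learning rate and trial counts are arbitrary): an elemental learner associates each half of a context with the target location, a configural learner associates the whole pairing, and only the configural learner predicts weaker cuing for recombined (inconsistent) configurations.

    ```python
    # Elemental vs. configural Rescorla-Wagner-style learning of context cues.
    from collections import defaultdict

    def train(trials, configural, alpha=0.1):
        w = defaultdict(float)
        for ctx in trials:
            cues = [ctx] if configural else list(ctx)   # whole pattern vs. its halves
            v = sum(w[c] for c in cues)                 # current prediction strength
            for c in cues:
                w[c] += alpha * (1.0 - v)               # learn toward a fixed target signal
        return w

    def cuing(w, ctx, configural):
        cues = [ctx] if configural else list(ctx)
        return sum(w[c] for c in cues)

    # Matched trial counts; in the inconsistent condition each half is paired
    # equally often with both halves of the other side.
    consistent   = [("A1", "B1"), ("A2", "B2")] * 20
    inconsistent = [("A1", "B1"), ("A1", "B2"), ("A2", "B1"), ("A2", "B2")] * 10

    for label, configural in (("elemental", False), ("configural", True)):
        wc, wi = train(consistent, configural), train(inconsistent, configural)
        print(f"{label:10s} consistent: {cuing(wc, ('A1', 'B1'), configural):.2f}  "
              f"inconsistent: {cuing(wi, ('A1', 'B1'), configural):.2f}")
    ```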

  9. Orthographic versus semantic matching in visual search for words within lists.

    PubMed

    Léger, Laure; Rouet, Jean-François; Ros, Christine; Vibert, Nicolas

    2012-03-01

    An eye-tracking experiment was performed to assess the influence of orthographic and semantic distractor words on visual search for words within lists. The target word (e.g., "raven") was either shown to participants before the search (literal search) or defined by its semantic category (e.g., "bird", categorical search). In both cases, the type of words included in the list affected visual search times and eye movement patterns. In the literal condition, the presence of orthographic distractors sharing initial and final letters with the target word strongly increased search times. Indeed, the orthographic distractors attracted participants' gaze and were fixated for longer times than other words in the list. The presence of semantic distractors related to the target word also increased search times, which suggests that significant automatic semantic processing of nontarget words took place. In the categorical condition, semantic distractors were expected to have a greater impact on the search task. As expected, the presence in the list of semantic associates of the target word led to target selection errors. However, semantic distractors did not significantly increase search times any more, whereas orthographic distractors still did. Hence, the visual characteristics of nontarget words can be strong predictors of the efficiency of visual search even when the exact target word is unknown. The respective impacts of orthographic and semantic distractors depended more on the characteristics of lists than on the nature of the search task.

  10. Effects of Peripheral Visual Field Loss on Eye Movements During Visual Search

    PubMed Central

    Wiecek, Emily; Pasquale, Louis R.; Fiser, Jozsef; Dakin, Steven; Bex, Peter J.

    2012-01-01

    Natural vision involves sequential eye movements that bring the fovea to locations selected by peripheral vision. How peripheral visual field loss (PVFL) affects this process is not well understood. We examine how the location and extent of PVFL affects eye movement behavior in a naturalistic visual search task. Ten patients with PVFL and 13 normally sighted subjects with full visual fields (FVF) completed 30 visual searches monocularly. Subjects located a 4° × 4° target, pseudo-randomly selected within a 26° × 11° natural image. Eye positions were recorded at 50 Hz. Search duration, fixation duration, saccade size, and number of saccades per trial were not significantly different between PVFL and FVF groups (p > 0.1). A χ2 test showed that the distributions of saccade directions for PVFL and FVF subjects were significantly different in 8 out of 10 cases (p < 0.01). Humphrey Visual Field pattern deviations for each subject were compared with the spatial distribution of eye movement directions. There were no significant correlations between saccade directional bias and visual field sensitivity across the 10 patients. Visual search performance was not significantly affected by PVFL. An analysis of eye movement directions revealed that patients with PVFL show a biased directional distribution that was not directly related to the locus of vision loss, challenging feed-forward models of eye movement control. Consequently, many patients do not optimally compensate for visual field loss during visual search. PMID:23162511
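
    As a sketch of the direction-distribution comparison mentioned above, saccade vectors can be binned into angular sectors and the two groups' counts compared with a chi-square test. The bin count, the simulated data, and the use of scipy.stats.chi2_contingency are illustrative assumptions, not the authors' analysis code.

    ```python
    # Compare two observers' saccade-direction histograms with a chi-square test.
    import numpy as np
    from scipy.stats import chi2_contingency

    def direction_histogram(dx, dy, n_bins=8):
        """Bin saccade vectors (dx, dy) into n_bins equal angular sectors."""
        angles = np.mod(np.arctan2(dy, dx), 2 * np.pi)
        counts, _ = np.histogram(angles, bins=n_bins, range=(0, 2 * np.pi))
        return counts

    rng = np.random.default_rng(0)
    # Control-like observer: roughly uniform saccade directions.
    dx_a, dy_a = rng.normal(size=500), rng.normal(size=500)
    # PVFL-like observer: directions biased toward the right (positive x).
    dx_b, dy_b = rng.normal(1.0, 1.0, 500), rng.normal(size=500)

    table = np.vstack([direction_histogram(dx_a, dy_a), direction_histogram(dx_b, dy_b)])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
    ```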

  11. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task

    PubMed Central

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-01-01

    When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
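
    A minimal numerical sketch of the priority-map computation described above, assuming toy feature maps and illustrative constants (this is not the authors' model code): feature responses are weighted by a target-specific gain, divisively normalized by locally pooled activity, and the maximum of the resulting map is taken as the next locus of attention.

    ```python
    # Priority map = target-weighted feature drive / locally pooled activity.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def priority_map(feature_maps, target_weights, pool_size=5, eps=1e-3):
        """feature_maps: (n_features, H, W); target_weights: (n_features,)."""
        # Top-down modulation: weight each feature map by its match to the target.
        drive = np.tensordot(target_weights, feature_maps, axes=1)       # (H, W)
        # Divisive normalization by local pooled activity across all features.
        pool = uniform_filter(feature_maps.sum(axis=0), size=pool_size)
        return drive / (pool + eps)

    rng = np.random.default_rng(0)
    n_features, H, W = 4, 64, 64
    features = rng.random((n_features, H, W))
    features[2, 40, 20] += 3.0                   # a "target-like" burst in feature 2
    weights = np.array([0.0, 0.0, 1.0, 0.0])     # the target is defined by feature 2

    pmap = priority_map(features, weights)
    y, x = np.unravel_index(np.argmax(pmap), pmap.shape)
    print("next fixation at", (y, x))
    ```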

  12. Hybrid foraging search: Searching for multiple instances of multiple types of target.

    PubMed

    Wolfe, Jeremy M; Aizenman, Avigael M; Boettcher, Sage E P; Cain, Matthew S

    2016-02-01

    This paper introduces the "hybrid foraging" paradigm. In typical visual search tasks, observers search for one instance of one target among distractors. In hybrid search, observers search through visual displays for one instance of any of several types of target held in memory. In foraging search, observers collect multiple instances of a single target type from visual displays. Combining these paradigms, in hybrid foraging tasks observers search visual displays for multiple instances of any of several types of target (as might be the case in searching the kitchen for dinner ingredients or an X-ray for different pathologies). In the present experiment, observers held 8-64 target objects in memory. They viewed displays of 60-105 randomly moving photographs of objects and used the computer mouse to collect multiple targets before choosing to move to the next display. Rather than selecting at random among available targets, observers tended to collect items in runs of one target type. Reaction time (RT) data indicate searching again for the same item is more efficient than searching for any other targets, held in memory. Observers were trying to maximize collection rate. As a result, and consistent with optimal foraging theory, they tended to leave 25-33% of targets uncollected when moving to the next screen/patch. The pattern of RTs shows that while observers were collecting a target item, they had already begun searching memory and the visual display for additional targets, making the hybrid foraging task a useful way to investigate the interaction of visual and memory search. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Hybrid foraging search: Searching for multiple instances of multiple types of target

    PubMed Central

    Wolfe, Jeremy M.; Aizenman, Avigael M.; Boettcher, Sage E.P.; Cain, Matthew S.

    2016-01-01

    This paper introduces the “hybrid foraging” paradigm. In typical visual search tasks, observers search for one instance of one target among distractors. In hybrid search, observers search through visual displays for one instance of any of several types of target held in memory. In foraging search, observers collect multiple instances of a single target type from visual displays. Combining these paradigms, in hybrid foraging tasks observers search visual displays for multiple instances of any of several types of target (as might be the case in searching the kitchen for dinner ingredients or an X-ray for different pathologies). In the present experiment, observers held 8–64 target objects in memory. They viewed displays of 60–105 randomly moving photographs of objects and used the computer mouse to collect multiple targets before choosing to move to the next display. Rather than selecting at random among available targets, observers tended to collect items in runs of one target type. Reaction time (RT) data indicate searching again for the same item is more efficient than searching for any other targets, held in memory. Observers were trying to maximize collection rate. As a result, and consistent with optimal foraging theory, they tended to leave 25–33% of targets uncollected when moving to the next screen/patch. The pattern of RTs shows that while observers were collecting a target item, they had already begun searching memory and the visual display for additional targets, making the hybrid foraging task a useful way to investigate the interaction of visual and memory search. PMID:26731644

  14. Speakers of different languages process the visual world differently.

    PubMed

    Chabal, Sarah; Marian, Viorica

    2015-06-01

    Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct language input, showing that linguistic information is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. (c) 2015 APA, all rights reserved.

  15. Does the Australian desert ant Melophorus bagoti approximate a Lévy search by an intrinsic bi-modal walk?

    PubMed

    Reynolds, Andy M; Schultheiss, Patrick; Cheng, Ken

    2014-01-07

    We suggest that the Australian desert ant Melophorus bagoti approximates a Lévy search pattern by using an intrinsic bi-exponential walk and does so when a Lévy search pattern is advantageous. When attempting to locate its nest, M. bagoti adopt a stereotypical search pattern. These searches begin at the location where the ant expects to find the nest, and comprise loops that start and end at this location, and are directed in different azimuthal directions. Loop lengths are exponentially distributed when searches are in visually familiar surroundings and are well described by a mixture of two exponentials when searches are in unfamiliar landscapes. The latter approximates a power-law distribution, the hallmark of a Lévy search. With the aid of a simple analytically tractable theory, we show that an exponential loop-length distribution is advantageous when the distance to the nest can be estimated with some certainty and that a bi-exponential distribution is advantageous when there is considerable uncertainty regarding the nest location. The best bi-exponential search patterns are shown to be those that come closest to approximating advantageous Lévy looping searches. The bi-exponential search patterns of M. bagoti are found to approximate advantageous Lévy search patterns. Copyright © 2013. Published by Elsevier Ltd.
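
    The central claim, that a two-exponential mixture can mimic a heavy-tailed (Lévy-like) loop-length distribution over a limited range, can be checked numerically. The mixture weights and scales below are arbitrary choices for illustration, not parameters estimated from M. bagoti data.

    ```python
    # Fit a power-law tail to a bi-exponential survival function over a limited
    # range and report how closely the two agree there.
    import numpy as np

    def biexp_sf(x, p=0.9, a=2.0, b=25.0):
        """Survival function P(L > x) of a two-exponential mixture of loop lengths."""
        return p * np.exp(-x / a) + (1 - p) * np.exp(-x / b)

    x = np.linspace(2, 40, 200)
    logx, logS = np.log(x), np.log(biexp_sf(x))

    # Least-squares fit of log S vs. log x, i.e. S(x) ~ c * x**(-k).
    k, logc = np.polyfit(logx, logS, 1)
    fit = np.exp(logc) * x ** k
    max_rel_err = np.max(np.abs(fit - biexp_sf(x)) / biexp_sf(x))
    print(f"fitted exponent {-k:.2f}, max relative deviation {max_rel_err:.1%} over x in [2, 40]")
    ```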

  16. Magnostics: Image-Based Search of Interesting Matrix Views for Guided Network Exploration.

    PubMed

    Behrisch, Michael; Bach, Benjamin; Hund, Michael; Delz, Michael; Von Ruden, Laura; Fekete, Jean-Daniel; Schreck, Tobias

    2017-01-01

    In this work we address the problem of retrieving potentially interesting matrix views to support the exploration of networks. We introduce Matrix Diagnostics (or Magnostics), following in spirit related approaches for rating and ranking other visualization techniques, such as Scagnostics for scatter plots. Our approach ranks matrix views according to the appearance of specific visual patterns, such as blocks and lines, indicating the existence of topological motifs in the data, such as clusters, bi-graphs, or central nodes. Magnostics can be used to analyze, query, or search for visually similar matrices in large collections, or to assess the quality of matrix reordering algorithms. While many feature descriptors for image analysis exist, there is no evidence of how they perform for detecting patterns in matrices. In order to make an informed choice of feature descriptors for matrix diagnostics, we evaluate 30 feature descriptors (27 existing ones and three new descriptors that we designed specifically for Magnostics) with respect to four criteria: pattern response, pattern variability, pattern sensibility, and pattern discrimination. We conclude with an informed set of six descriptors as most appropriate for Magnostics and demonstrate their application in two scenarios: exploring a large collection of matrices and analyzing temporal networks.

  17. Drivers’ Visual Search Patterns during Overtaking Maneuvers on Freeway

    PubMed Central

    Zhang, Wenhui; Dai, Jing; Pei, Yulong; Li, Penghui; Yan, Ying; Chen, Xinqiang

    2016-01-01

    Drivers gather traffic information primarily by means of their vision. Especially during complicated maneuvers, such as overtaking, they need to perceive a variety of characteristics, including the lateral and longitudinal distances to other vehicles, the speed of other vehicles, lane occupancy, and so on, to avoid crashes. The primary objective of this study was to examine appropriate visual search patterns during overtaking maneuvers on freeways. We designed a series of driving simulation experiments in which the type and speed of the leading vehicle were considered as two influential factors. One hundred and forty participants took part in the study. The participants overtook the leading vehicles just as they usually would, and their eye movements were collected using the Eye Tracker. The results show that participants’ gaze durations and saccade durations followed normal distribution patterns and that saccade angles followed a log-normal distribution pattern. The type of leading vehicle significantly affected the drivers’ gaze duration and gaze frequency. As the speed of the leading vehicle increased, subjects’ saccade durations became longer and saccade angles became larger. In addition, the initial and destination lanes were found to be key areas with the highest proportion of visual allocation, accounting for more than 65% of total visual allocation. Subjects tended to shift their viewpoints frequently between the initial lane and destination lane in order to search for crucial traffic information. However, they seldom shifted their viewpoints directly between the two wing mirrors. PMID:27869764

  18. LoyalTracker: Visualizing Loyalty Dynamics in Search Engines.

    PubMed

    Shi, Conglei; Wu, Yingcai; Liu, Shixia; Zhou, Hong; Qu, Huamin

    2014-12-01

    The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, this also poses a great challenge to analyze the behavior and glean insights into the complex, large data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from the vast amount of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the dynamics of user loyalty of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and the interview with domain experts are conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines.

  19. The effect of four user interface concepts on visual scan pattern similarity and information foraging in a complex decision making task.

    PubMed

    Starke, Sandra D; Baber, Chris

    2018-07-01

    User interface (UI) design can affect the quality of decision making, where decisions based on digitally presented content are commonly informed by visually sampling information through eye movements. Analysis of the resulting scan patterns - the order in which people visually attend to different regions of interest (ROIs) - gives an insight into information foraging strategies. In this study, we quantified scan pattern characteristics for participants engaging with conceptually different user interface designs. Four interfaces were modified along two dimensions relating to the effort of accessing information: data presentation (either alpha-numerical data or colour blocks) and information access time (all information sources readily available or sequential revealing of information required). The aim of the study was to investigate whether a) people develop repeatable scan patterns and b) different UI concepts affect information foraging and task performance. Thirty-two participants (eight for each UI concept) were given the task of correctly classifying 100 credit card transactions as normal or fraudulent based on nine transaction attributes. Attributes varied in their usefulness for predicting the correct outcome. Conventional and more recent (network analysis- and bioinformatics-based) eye tracking metrics were used to quantify visual search. Empirical findings were evaluated in the context of random data and the accuracy achievable by theoretical decision making strategies. Results showed short repeating sequence fragments within longer scan patterns across participants and conditions, comprising a systematic and a random search component. The UI design concept showing alpha-numerical data in full view resulted in the most complete data foraging, while the design concept showing colour blocks in full view resulted in the fastest task completion time. Decision accuracy was not significantly affected by UI design. Theoretical calculations showed that the difference in achievable accuracy between very complex and simple decision making strategies was small. We conclude that goal-directed search of familiar information results in repeatable scan pattern fragments (often corresponding to information sources considered particularly important), but not in a repeatable complete scan pattern. The underlying concept of the UI affects how visual search is performed and how a decision making strategy develops. This should be taken into consideration when designing for applied domains. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Guidance of attention by information held in working memory.

    PubMed

    Calleja, Marissa Ortiz; Rich, Anina N

    2013-05-01

    Information held in working memory (WM) can guide attention during visual search. The authors of recent studies have interpreted the effect of holding verbal labels in WM as guidance of visual attention by semantic information. In a series of experiments, we tested how attention is influenced by visual features versus category-level information about complex objects held in WM. Participants either memorized an object's image or its category. While holding this information in memory, they searched for a target in a four-object search display. On exact-match trials, the memorized item reappeared as a distractor in the search display. On category-match trials, another exemplar of the memorized item appeared as a distractor. On neutral trials, none of the distractors were related to the memorized object. We found attentional guidance in visual search on both exact-match and category-match trials in Experiment 1, in which the exemplars were visually similar. When we controlled for visual similarity among the exemplars by using four possible exemplars (Exp. 2) or by using two exemplars rated as being visually dissimilar (Exp. 3), we found attentional guidance only on exact-match trials when participants memorized the object's image. The same pattern of results held when the target was invariant (Exps. 2-3) and when the target was defined semantically and varied in visual features (Exp. 4). The findings of these experiments suggest that attentional guidance by WM requires active visual information.

  1. Examining drivers' eye glance patterns during distracted driving: Insights from scanning randomness and glance transition matrix.

    PubMed

    Wang, Yuan; Bao, Shan; Du, Wenjun; Ye, Zhirui; Sayer, James R

    2017-12-01

    Visual attention to the driving environment is of great importance for road safety. Eye glance behavior has been used as an indicator of distracted driving. This study examined and quantified drivers' glance patterns and features during distracted driving. Data from an existing naturalistic driving study were used. Entropy rate was calculated and used to assess the randomness associated with drivers' scanning patterns. A glance-transition proportion matrix was defined to quantify visual search patterns transitioning among four main eye glance locations while driving (i.e., forward on-road, phone, mirrors and others). All measurements were calculated within a 5 s time window under both cell phone and non-cell phone use conditions. Results of the glance data analyses showed different patterns between distracted and non-distracted driving, featuring a higher entropy rate and highly biased attention transferring between the forward and phone locations during distracted driving. Drivers in general had a higher number of glance transitions, and their on-road glance duration was significantly shorter during distracted driving when compared to non-distracted driving. Results suggest that drivers have a higher scanning randomness/disorder level and shift their main attention from surrounding areas towards the phone area when engaging in visual-manual tasks. Drivers' visual search patterns during visual-manual distraction, with high scanning randomness and a high proportion of eye glance transitions towards the location of the phone, provide insight into driver distraction detection. This will help inform the design of in-vehicle human-machine interfaces/systems. Copyright © 2017. Published by Elsevier Ltd.
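
    A minimal sketch of the two measures described above, assuming a glance sequence coded over the four locations; the location labels, the first-order entropy-rate estimate, and the example sequences are illustrative, not the authors' exact implementation.

```python
import numpy as np

LOCATIONS = ["road", "phone", "mirrors", "other"]

def transition_matrix(glances):
    """Row-normalised glance-transition proportion matrix."""
    idx = {loc: i for i, loc in enumerate(LOCATIONS)}
    counts = np.zeros((len(LOCATIONS), len(LOCATIONS)))
    for a, b in zip(glances[:-1], glances[1:]):
        counts[idx[a], idx[b]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

def entropy_rate(glances):
    """First-order Markov estimate of scanning randomness (bits per glance)."""
    idx = {loc: i for i, loc in enumerate(LOCATIONS)}
    P = transition_matrix(glances)
    occupancy = np.bincount([idx[g] for g in glances], minlength=len(LOCATIONS))
    pi = occupancy / occupancy.sum()              # empirical location weights
    logP = np.log2(np.where(P > 0, P, 1.0))       # log2(p), with 0*log(0) treated as 0
    return float(-np.sum(pi[:, None] * P * logP))

# Glance sequences within a 5 s window (invented data).
distracted = ["road", "phone", "road", "phone", "mirrors", "phone", "road", "phone"]
attentive = ["road", "road", "mirrors", "road", "road", "other", "road", "road"]
print(entropy_rate(distracted), entropy_rate(attentive))  # higher = more random scanning
```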

  2. The role of extra-foveal processing in 3D imaging

    NASA Astrophysics Data System (ADS)

    Eckstein, Miguel P.; Lago, Miguel A.; Abbey, Craig K.

    2017-03-01

    The field of medical image quality has relied on the assumption that metrics of image quality for simple visual detection tasks are a reliable proxy for the more clinically realistic visual search tasks. Rank order of signal detectability across conditions often generalizes from detection to search tasks. Here, we argue that search in 3D images represents a paradigm shift in medical imaging: radiologists typically cannot exhaustively scrutinize all regions of interest with the high-acuity fovea, requiring that signals be detected with extra-foveal areas (the visual periphery) of the human retina. We hypothesize that extra-foveal processing can alter the detectability of certain types of signals in medical images, with important implications for search in 3D medical images. We compare visual search of two different types of signals in 2D vs. 3D images. We show that a small microcalcification-like signal is more highly detectable than a larger mass-like signal in 2D search, but its detectability largely decreases (relative to the larger signal) in the 3D search task. Utilizing measurements of observer detectability as a function of retinal eccentricity and of observer eye fixations, we can predict the pattern of results in the 2D and 3D search studies. Our findings: 1) suggest that observer performance findings with 2D search might not always generalize to 3D search; 2) motivate the development of a new family of model observers that take into account the inhomogeneous visual processing across the retina (foveated model observers).

  3. Effect of marihuana and alcohol on visual search performance

    DOT National Transportation Integrated Search

    1976-10-01

    Two experiments were performed to determine the effects of alcohol and marihuana on visual scanning patterns in a simulated driving situation. In the first experiment 27 male heavy drinkers were divided into 3 groups of 9, defined by three blood alco...

  4. Digital Images and Human Vision

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    Processing of digital images destined for visual consumption raises many interesting questions regarding human visual sensitivity. This talk will survey some of these questions, including some that have been answered and some that have not. There will be an emphasis upon visual masking, and a distinction will be drawn between masking due to contrast gain control processes, and due to processes such as hypothesis testing, pattern recognition, and visual search.

  5. Computer vision enhances mobile eye-tracking to expose expert cognition in natural-scene visual-search tasks

    NASA Astrophysics Data System (ADS)

    Keane, Tommy P.; Cahill, Nathan D.; Tarduno, John A.; Jacobs, Robert A.; Pelz, Jeff B.

    2014-02-01

    Mobile eye-tracking provides the fairly unique opportunity to record and elucidate cognition in action. In our research, we are searching for patterns in, and distinctions between, the visual-search performance of experts and novices in the geosciences. Traveling to regions resulting from various geological processes as part of an introductory field studies course in geology, we record the prima facie gaze patterns of experts and novices when they are asked to determine the modes of geological activity that have formed the scene-view presented to them. Recording eye video and scene video in natural settings generates complex imagery that requires advanced applications of computer vision research to generate registrations and mappings between the views of separate observers. By developing such mappings, we can place many observers into a single mathematical space where we can spatio-temporally analyze inter- and intra-subject fixations, saccades, and head motions. While working towards perfecting these mappings, we developed an updated experimental setup that allowed us to statistically analyze intra-subject eye-movement events without the need for a common domain. Through such analyses we are finding statistical differences between novices and experts in these visual-search tasks. In the course of this research we have developed a unified, open-source software framework for processing, visualization, and interaction with mobile eye-tracking and high-resolution panoramic imagery.

  6. Neglect assessment as an application of virtual reality.

    PubMed

    Broeren, J; Samuelsson, H; Stibrant-Sunnerhagen, K; Blomstrand, C; Rydmark, M

    2007-09-01

    In this study a cancellation task in a virtual environment was applied to describe the pattern of search and the kinematics of hand movements in eight patients with right hemisphere stroke. Four of these patients had visual neglect and four had recovered clinically from initial symptoms of neglect. The performance of the patients was compared with that of a control group consisting of eight subjects with no history of neurological deficits. Patients with neglect, as well as patients clinically recovered from neglect, showed aberrant search performance in the virtual reality (VR) task, such as mixed search patterns, repeated target presses, and deviating hand movements. The results indicate that in patients with a right hemispheric stroke, this VR application can provide an additional assessment tool that can identify small variations not otherwise detectable with standard paper-and-pencil tests. VR technology seems to be well suited for the assessment of visually guided manual exploration in space.

  7. The Temporal Dynamics of Visual Search: Evidence for Parallel Processing in Feature and Conjunction Searches

    PubMed Central

    McElree, Brian; Carrasco, Marisa

    2012-01-01

    Feature and conjunction searches have been argued to delineate parallel and serial operations in visual processing. The authors evaluated this claim by examining the temporal dynamics of the detection of features and conjunctions. The 1st experiment used a reaction time (RT) task to replicate standard mean RT patterns and to examine the shapes of the RT distributions. The 2nd experiment used the response-signal speed–accuracy trade-off (SAT) procedure to measure discrimination (asymptotic detection accuracy) and detection speed (processing dynamics). Set size affected discrimination in both feature and conjunction searches but affected detection speed only in the latter. Fits of models to the SAT data that included a serial component overpredicted the magnitude of the observed dynamics differences. The authors concluded that both features and conjunctions are detected in parallel. Implications for the role of attention in visual processing are discussed. PMID:10641310
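
    Since the abstract leans on the response-signal SAT procedure, a small sketch of the shifted-exponential model conventionally fit to such data may help; the model form (asymptote, rate, intercept) is the standard one, but the data points below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def sat_curve(t, lam, beta, delta):
    """d'(t) = lam * (1 - exp(-beta * (t - delta))) for t > delta, else 0.

    lam is the asymptotic accuracy, beta the rate of rise (dynamics),
    and delta the intercept at which accuracy departs from chance.
    """
    return np.where(t > delta, lam * (1.0 - np.exp(-beta * (t - delta))), 0.0)

# Illustrative processing times (s) and d' values for one set size.
t = np.array([0.1, 0.2, 0.3, 0.5, 0.8, 1.2, 2.0])
d_prime = np.array([0.0, 0.4, 1.0, 1.8, 2.4, 2.7, 2.8])

(lam, beta, delta), _ = curve_fit(sat_curve, t, d_prime, p0=[3.0, 3.0, 0.1])
print(f"asymptote={lam:.2f}, rate={beta:.2f}/s, intercept={delta:.3f}s")
```

    In the study summarized above, set size altered the asymptote (discrimination) in both search types but altered detection speed (rate/intercept) only for conjunctions.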

  8. Parallel, exhaustive processing underlies logarithmic search functions: Visual search with cortical magnification.

    PubMed

    Wang, Zhiyuan; Lleras, Alejandro; Buetti, Simona

    2018-04-17

    Our lab recently found evidence that efficient visual search (with a fixed target) is characterized by logarithmic Reaction Time (RT) × Set Size functions whose steepness is modulated by the similarity between target and distractors. To determine whether this pattern of results was based on low-level visual factors uncontrolled by previous experiments, we minimized the possibility of crowding effects in the display, compensated for the cortical magnification factor by magnifying search items based on their eccentricity, and compared search performance on such displays to performance on displays without magnification compensation. In both cases, the RT × Set Size functions were found to be logarithmic, and the modulation of the log slopes by target-distractor similarity was replicated. Consistent with previous results in the literature, cortical magnification compensation eliminated most target eccentricity effects. We conclude that the log functions and their modulation by target-distractor similarity relations reflect a parallel exhaustive processing architecture for early vision.
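
    The key quantity in the abstract is the log slope of the RT × Set Size function. A small sketch of estimating it, and of contrasting it with a linear (serial-search style) fit, is given below; the functional form RT = a + b*log(N) follows the abstract, while the RT values are invented.

```python
import numpy as np

set_sizes = np.array([1, 2, 4, 8, 16, 32])
rt_ms = np.array([430, 455, 470, 492, 510, 533])  # invented mean RTs

# Logarithmic fit RT = a + b*log(N); b is the "log slope" that the study
# reports as being modulated by target-distractor similarity.
b_log, a_log = np.polyfit(np.log(set_sizes), rt_ms, 1)
print(f"log slope: {b_log:.1f} ms per log unit (intercept {a_log:.0f} ms)")

# Linear fit RT = a + b*N for comparison; under the logarithmic account the
# residuals of this fit should be systematically larger.
b_lin, a_lin = np.polyfit(set_sizes, rt_ms, 1)
sse_log = np.sum((rt_ms - (a_log + b_log * np.log(set_sizes))) ** 2)
sse_lin = np.sum((rt_ms - (a_lin + b_lin * set_sizes)) ** 2)
print(f"sum of squared residuals: log {sse_log:.0f}, linear {sse_lin:.0f}")
```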

  9. Searching social networks for subgraph patterns

    NASA Astrophysics Data System (ADS)

    Ogaard, Kirk; Kase, Sue; Roy, Heather; Nagi, Rakesh; Sambhoos, Kedar; Sudit, Moises

    2013-06-01

    Software tools for Social Network Analysis (SNA) are being developed which support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database, such social networks are amenable to analysis by SNA software. This data analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT), which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicated that GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.
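
    TruST itself is a heuristic, truncated search and is not reproduced here; as a baseline illustration of the underlying task (finding occurrences of a labelled subgraph pattern in a larger network), the sketch below uses NetworkX's exact subgraph-isomorphism matcher on an invented toy network with invented node categories.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Toy "social network": persons connected through shared phones (invented data).
G = nx.Graph()
G.add_nodes_from([(1, {"cat": "person"}), (2, {"cat": "person"}),
                  (3, {"cat": "phone"}), (4, {"cat": "person"}),
                  (5, {"cat": "phone"})])
G.add_edges_from([(1, 3), (2, 3), (2, 5), (4, 5)])

# Subgraph pattern: two persons linked via one common phone.
P = nx.Graph()
P.add_nodes_from([("a", {"cat": "person"}), ("b", {"cat": "person"}),
                  ("p", {"cat": "phone"})])
P.add_edges_from([("a", "p"), ("b", "p")])

matcher = isomorphism.GraphMatcher(
    G, P, node_match=isomorphism.categorical_node_match("cat", None))
for mapping in matcher.subgraph_isomorphisms_iter():
    print(mapping)  # each mapping assigns network nodes to pattern roles a/b/p
```

    A heuristic such as TruST instead bounds the search so that only the best few occurrences are returned on large networks, as described above.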

  10. Visual Puzzles, Figure Weights, and Cancellation: Some Preliminary Hypotheses on the Functional and Neural Substrates of These Three New WAIS-IV Subtests

    PubMed Central

    McCrea, Simon M.; Robinson, Thomas P.

    2011-01-01

    In this study, five consecutive patients with focal strokes and/or cortical excisions were examined with the Wechsler Adult Intelligence Scale and Wechsler Memory Scale, Fourth Editions, along with a comprehensive battery of other neuropsychological tasks. All five of the lesions were large and typically involved frontal, temporal, and/or parietal lobes and were lateralized to one hemisphere. The clinical case method was used to determine the cognitive neuropsychological correlates of mental rotation (Visual Puzzles), Piagetian balance beam (Figure Weights), and visual search (Cancellation) tasks. The pattern of results on Visual Puzzles and Figure Weights suggested that both subtests rely predominantly on right frontoparietal networks involved in visual working memory. It appeared that Visual Puzzles could also critically rely on the integrity of the left temporoparietal junction. The left temporoparietal junction could be involved in temporal ordering and in the integration of local elements into a nonverbal gestalt. In contrast, the Figure Weights task appears to critically involve the right temporoparietal junction, which is implicated in numerical magnitude estimation. Cancellation was sensitive to left frontotemporal lesions and not to the right posterior parietal lesions typical of other visual search tasks. In addition, the Cancellation subtest was sensitive to verbal search strategies and perhaps object-based attention demands, thereby constituting a unique task in comparison with previous visual search tasks. PMID:22389807

  11. Hemispheric differences in visual search of simple line arrays.

    PubMed

    Polich, J; DeFrancesco, D P; Garon, J F; Cohen, W

    1990-01-01

    The effects of perceptual organization on hemispheric visual-information processing were assessed with stimulus arrays composed of short lines arranged in columns. A visual-search task was employed in which subjects judged whether all the lines were vertical (same) or whether a single horizontal line was present (different). Stimulus-display organization was manipulated in two experiments by variation of line density, linear organization, and array size. In general, left-visual-field/right-hemisphere presentations produced more rapid and accurate responses when the display was perceived as a whole. Right-visual-field/left-hemisphere superiorities were observed when the display organization forced assessment of individual array elements because the physical qualities of the stimulus did not produce a gestalt whole. Response times increased somewhat with increases in array size, although these effects interacted with other stimulus variables. Error rates tended to follow the reaction-time patterns. The results suggest that laterality differences in visual search are governed by stimulus properties which contribute to, or inhibit, the perception of a display as a gestalt. The implications of these findings for theoretical interpretations of hemispheric specialization are discussed.

  12. Visual scan-path analysis with feature space transient fixation moments

    NASA Astrophysics Data System (ADS)

    Dempere-Marco, Laura; Hu, Xiao-Peng; Yang, Guang-Zhong

    2003-05-01

    The study of eye movements provides useful insight into the cognitive processes underlying visual search tasks. The analysis of the dynamics of eye movements has often been approached from a purely spatial perspective. In many cases, however, it may not be possible to define meaningful or consistent dynamics without considering the features underlying the scan paths. In this paper, the definition of the feature space has been attempted through the concept of visual similarity and non-linear low dimensional embedding, which defines a mapping from the image space into a low dimensional feature manifold that preserves the intrinsic similarity of image patterns. This has enabled the definition of perceptually meaningful features without the use of domain specific knowledge. Based on this, this paper introduces a new concept called Feature Space Transient Fixation Moments (TFM). The approach presented tackles the problem of feature space representation of visual search through the use of TFM. We demonstrate the practical values of this concept for characterizing the dynamics of eye movements in goal directed visual search tasks. We also illustrate how this model can be used to elucidate the fundamental steps involved in skilled search tasks through the evolution of transient fixation moments.

  13. Learning where to look: electrophysiological and behavioral indices of visual search in young and old subjects.

    PubMed

    Looren de Jong, H; Kok, A; Woestenburg, J C; Logman, C J; Van Rooy, J C

    1988-06-01

    The present investigation explores the way young and elderly subjects use regularities in target location in a visual display to guide search for targets. Although both young and old subjects show efficient use of search strategies, slight but reliable differences in reaction times suggest decreased ability in the elderly to use complex cues. Event-related potentials were very different for the young and the old. In the young, P3 amplitudes were larger on trials where the rule that governed the location of the target became evident; this was interpreted as an effect of memory updating. Enhanced positive Slow Wave amplitude indicated uncertainty in random search conditions. Elderly subjects' P3 and SW, however, seemed unrelated to behavioral performance, and they showed a large negative Slow Wave at central and parietal sites to randomly located targets. The latter finding was tentatively interpreted as a sign of increased effort in the elderly to allocate attention in visual space. This pattern of behavioral and ERP results suggests that age-related differences in search tasks can be understood in terms of changes in the strategy of allocating visual attention.

  14. The prevalence effect in lateral masking and its relevance for visual search.

    PubMed

    Geelen, B P; Wertheim, A H

    2015-04-01

    In stimulus displays with or without a single target amid 1,644 identical distractors, target prevalence was varied among 20%, 50% and 80%. Maximum gaze deviation was measured to determine the strength of lateral masking in these arrays. The results show that lateral masking was strongest in the 20% prevalence condition, which differed significantly from both the 50% and 80% prevalence conditions. No difference was observed between the latter two. This pattern of results corresponds to that found in the literature on the prevalence effect in visual search (stronger lateral masking corresponding to longer search times). The data add to similar findings reported earlier (Wertheim et al. in Exp Brain Res, 170:387-402, 2006), according to which the effects of many well-known factors in visual search correspond to those on lateral masking. These were the effects of set size, disjunctions versus conjunctions, display area, distractor density, the asymmetry effect (Q vs. Os), and viewing distance. The present data, taken together with those earlier findings, may lend credence to the hypothesis that lateral masking could be a more important mechanism in visual search than is usually assumed.

  15. Immaturity of the Oculomotor Saccade and Vergence Interaction in Dyslexic Children: Evidence from a Reading and Visual Search Study

    PubMed Central

    Bucci, Maria Pia; Nassibi, Naziha; Gerard, Christophe-Loic; Bui-Quoc, Emmanuel; Seassau, Magali

    2012-01-01

    Studies comparing binocular eye movements during reading and visual search in dyslexic children are, to our knowledge, nonexistent. In the present study we examined ocular motor characteristics in dyslexic children versus two groups of non-dyslexic children matched for chronological or reading age. Binocular eye movements were recorded with an infrared system (mobileEBT®, e(ye)BRAIN) in twelve dyslexic children (mean age 11 years) and groups of chronological age-matched (N = 9) and reading age-matched (N = 10) non-dyslexic children. Two visual tasks were used: text reading and visual search. Independently of the task, the ocular motor behavior of dyslexic children was similar to that reported for reading age-matched non-dyslexic children: more numerous and longer fixations, as well as poor binocular coordination during and after saccades. In contrast, chronological age-matched non-dyslexic children showed fewer and shorter fixations in the reading task than in the visual search task; furthermore, their saccades were well yoked in both tasks. The atypical eye movement patterns observed in dyslexic children suggest a deficiency in visual attentional processing as well as an immaturity of the interaction between the ocular motor saccade and vergence systems. PMID:22438934

  16. Eye-Tracking as a Tool to Evaluate Functional Ability in Everyday Tasks in Glaucoma.

    PubMed

    Kasneci, Enkelejda; Black, Alex A; Wood, Joanne M

    2017-01-01

    To date, few studies have investigated the eye movement patterns of individuals with glaucoma while they undertake everyday tasks in real-world settings. While some of these studies have reported possible compensatory gaze patterns in those with glaucoma who demonstrated good task performance despite their visual field loss, little is known about the complex interaction between field loss and visual scanning strategies and the impact on task performance and, consequently, on quality of life. We review existing approaches that have quantified the effect of glaucomatous visual field defects on the ability to undertake everyday activities through the use of eye movement analysis. Furthermore, we discuss current developments in eye-tracking technology and the potential for combining eye-tracking with virtual reality and advanced analytical approaches. Recent technological developments suggest that systems based on eye-tracking have the potential to assist individuals with glaucomatous loss to maintain or even improve their performance on everyday tasks and hence enhance their long-term quality of life. We discuss novel approaches for studying the visual search behavior of individuals with glaucoma that have the potential to assist individuals with glaucoma, through the use of personalized programs that take into consideration the individual characteristics of their remaining visual field and visual search behavior.

  17. Eye-Tracking as a Tool to Evaluate Functional Ability in Everyday Tasks in Glaucoma

    PubMed Central

    Black, Alex A.

    2017-01-01

    To date, few studies have investigated the eye movement patterns of individuals with glaucoma while they undertake everyday tasks in real-world settings. While some of these studies have reported possible compensatory gaze patterns in those with glaucoma who demonstrated good task performance despite their visual field loss, little is known about the complex interaction between field loss and visual scanning strategies and the impact on task performance and, consequently, on quality of life. We review existing approaches that have quantified the effect of glaucomatous visual field defects on the ability to undertake everyday activities through the use of eye movement analysis. Furthermore, we discuss current developments in eye-tracking technology and the potential for combining eye-tracking with virtual reality and advanced analytical approaches. Recent technological developments suggest that systems based on eye-tracking have the potential to assist individuals with glaucomatous loss to maintain or even improve their performance on everyday tasks and hence enhance their long-term quality of life. We discuss novel approaches for studying the visual search behavior of individuals with glaucoma that have the potential to assist individuals with glaucoma, through the use of personalized programs that take into consideration the individual characteristics of their remaining visual field and visual search behavior. PMID:28293433

  18. Color vision but not visual attention is altered in migraine.

    PubMed

    Shepherd, Alex J

    2006-04-01

    To examine visual search performance in migraine and headache-free control groups and to determine whether reports of selective color vision deficits in migraine occur preattentively. Visual search is a classic technique to measure certain components of visual attention. The technique can be manipulated to measure both preattentive (automatic) and attentive processes. Here, visual search for colored targets was employed to extend earlier reports that the detection or discrimination of colors selective for the short-wavelength-sensitive cone photoreceptors in the retina (S or "blue" cones) is impaired in migraine. Visual search performance for small and large color differences was measured in 34 migraine and 34 control participants. Small and large color differences were included to assess attentive and preattentive processing, respectively. In separate conditions, colored stimuli were chosen that would be detected selectively by either the S-cones or by the long- (L or "red") and middle- (M or "green") wavelength-sensitive cone photoreceptors. The results showed no preattentive differences between the migraine and control groups. For active, or attentive, search, differences between the migraine and control groups occurred only for colors detected by the S-cones; there were no differences for colors detected by the L- and M-cones. The migraine group responded significantly more slowly than the control group for the S-cone colors. The pattern of results indicates that there are no overall differences in search performance between migraine and control groups. The differences found for the S-cone colors are attributed to impaired discrimination of these colors in migraine and not to differences in attention.

  19. The effect of scleral search coil lens wear on the eye.

    PubMed

    Murphy, P J; Duncan, A L; Glennie, A J; Knox, P C

    2001-03-01

    Scleral search coils are used to measure eye movements. A recent abstract suggests that the coil can affect the eye by decreasing visual acuity, increasing intraocular pressure, and damaging the corneal and conjunctival surface. Such findings, if repeated in all subjects, would cast doubt on the credibility of the search coil as a reliable investigative technique. The aim of this study was to reassess the effect of the scleral search coil on visual function. Six volunteer subjects were selected to undergo coil wear and baseline measurements were taken of logMAR visual acuity, non-contact tonometry, keratometry, and slit lamp examination. Four drops of 0.4% benoxinate hydrochloride were instilled before insertion of the lens by an experienced clinician. The lens then remained on the eye for 30 minutes. Measurements of the four ocular health parameters were repeated after 15 and 30 minutes of lens wear. The lens was then removed and the health of the eye reassessed. No obvious pattern of change was found in logMAR visual acuity, keratometry, or intraocular pressure. The lens did produce changes to the conjunctival and corneal surfaces, but this was not considered clinically significant. Search coils do not appear to cause any significant effects on visual function. However, thorough prescreening of subjects and post-wear checks should be carried out on all coil wearers to ensure no adverse effects have been caused.

  20. The wisdom of crowds for visual search

    PubMed Central

    Juni, Mordechai Z.; Eckstein, Miguel P.

    2017-01-01

    Decision-making accuracy typically increases through collective integration of people’s judgments into group decisions, a phenomenon known as the wisdom of crowds. For simple perceptual laboratory tasks, classic signal detection theory specifies the upper limit for collective integration benefits obtained by weighted averaging of people’s confidences, and simple majority voting can often approximate that limit. Life-critical perceptual decisions often involve searching large image data (e.g., medical, security, and aerial imagery), but the expected benefits and merits of using different pooling algorithms are unknown for such tasks. Here, we show that expected pooling benefits are significantly greater for visual search than for single-location perceptual tasks and the prediction given by classic signal detection theory. In addition, we show that simple majority voting obtains inferior accuracy benefits for visual search relative to averaging and weighted averaging of observers’ confidences. Analysis of gaze behavior across observers suggests that the greater collective integration benefits for visual search arise from an interaction between the foveated properties of the human visual system (high foveal acuity and low peripheral acuity) and observers’ nonexhaustive search patterns, and can be predicted by an extended signal detection theory framework with trial to trial sampling from a varying mixture of high and low target detectabilities across observers (SDT-MIX). These findings advance our theoretical understanding of how to predict and enhance the wisdom of crowds for real world search tasks and could apply more generally to any decision-making task for which the minority of group members with high expertise varies from decision to decision. PMID:28490500
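
    A small sketch of the three pooling rules compared above (majority voting, simple averaging, and weighted averaging of confidence ratings); the observer confidences, ground truth, and accuracy-based weights are invented for illustration and are not the paper's data or weighting scheme.

```python
import numpy as np

# Confidence ratings of 5 observers on 4 search trials
# (positive = "target present", negative = "target absent"; invented data).
conf = np.array([
    [ 2.0,  1.0, -0.5,  1.5,  0.5],
    [-1.0, -2.0,  0.5, -1.5, -0.5],
    [ 0.5, -0.5,  2.5,  0.5, -1.0],
    [-0.5,  1.0, -2.0, -0.5,  0.5],
])
truth = np.array([1, -1, 1, -1])  # ground truth per trial

majority = np.sign(np.sign(conf).sum(axis=1))  # pool binary votes
averaged = np.sign(conf.mean(axis=1))          # pool graded confidences

# Weighted averaging: weight each observer by an estimate of their accuracy
# (here, simply their agreement with the truth on these trials).
weights = (np.sign(conf) == truth[:, None]).mean(axis=0)
weighted = np.sign(conf @ weights)

for name, decision in [("majority vote", majority),
                       ("confidence average", averaged),
                       ("weighted average", weighted)]:
    print(name, "accuracy:", (decision == truth).mean())
```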

  1. fMRI of Parents of Children with Asperger Syndrome: A Pilot Study

    ERIC Educational Resources Information Center

    Baron-Cohen, Simon; Ring, Howard; Chitnis, Xavier; Wheelwright, Sally; Gregory, Lloyd; Williams, Steve; Brammer, Mick; Bullmore, Ed

    2006-01-01

    Background: People with autism or Asperger Syndrome (AS) show altered patterns of brain activity during visual search and emotion recognition tasks. Autism and AS are genetic conditions and parents may show the "broader autism phenotype." Aims: (1) To test if parents of children with AS show atypical brain activity during a visual search…

  2. Statistical patterns of visual search for hidden objects

    PubMed Central

    Credidio, Heitor F.; Teixeira, Elisângela N.; Reis, Saulo D. S.; Moreira, André A.; Andrade Jr, José S.

    2012-01-01

    The movement of the eyes has been the subject of intensive research as a way to elucidate inner mechanisms of cognitive processes. A cognitive task that is rather frequent in our daily life is the visual search for hidden objects. Here we investigate through eye-tracking experiments the statistical properties associated with the search of target images embedded in a landscape of distractors. Specifically, our results show that the twofold process of eye movement, composed of sequences of fixations (small steps) intercalated by saccades (longer jumps), displays characteristic statistical signatures. While the saccadic jumps follow a log-normal distribution of distances, which is typical of multiplicative processes, the lengths of the smaller steps in the fixation trajectories are consistent with a power-law distribution. Moreover, the present analysis reveals a clear transition between a directional serial search to an isotropic random movement as the difficulty level of the searching task is increased. PMID:23226829
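
    The two distributional signatures reported above can be checked with standard fits; the sketch below does so on synthetic step lengths (log-normal "saccades" and power-law "fixation steps"), so the numbers only illustrate the procedure, not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic step lengths standing in for eye-tracking data (pixels).
saccades = rng.lognormal(mean=2.0, sigma=0.5, size=2000)
fix_steps = (rng.pareto(a=1.5, size=5000) + 1.0) * 0.2  # classical Pareto, x_min = 0.2

# Log-normal fit to saccade amplitudes (location fixed at zero).
sigma, _, median = stats.lognorm.fit(saccades, floc=0)
print(f"log-normal fit: sigma={sigma:.2f}, median={median:.1f} px")

# Maximum-likelihood power-law exponent for fixation steps above x_min:
# alpha = 1 + n / sum(ln(x / x_min))  (Hill/Clauset-style estimator).
x_min = 0.2
x = fix_steps[fix_steps >= x_min]
alpha = 1.0 + x.size / np.log(x / x_min).sum()
print(f"power-law exponent: {alpha:.2f}")
```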

  3. Explicit awareness supports conditional visual search in the retrieval guidance paradigm.

    PubMed

    Buttaccio, Daniel R; Lange, Nicholas D; Hahn, Sowon; Thomas, Rick P

    2014-01-01

    In four experiments we explored whether participants would be able to use probabilistic prompts to simplify perceptually demanding visual search in a task we call the retrieval guidance paradigm. On each trial a memory prompt appeared prior to (and during) the search task, and the diagnosticity of the prompt(s) was manipulated to provide complete, partial, or non-diagnostic information regarding the target's color on each trial (Experiments 1-3). In Experiment 1 we found that more diagnostic prompts were associated with faster visual search performance. However, similar visual search behavior was observed in Experiment 2 when the diagnosticity of the prompts was eliminated, suggesting that participants in Experiment 1 were merely relying on base rate information to guide search and were not utilizing the prompts. In Experiment 3 participants were informed of the relationship between the prompts and the color of the target, and this was associated with faster search performance relative to Experiment 1, suggesting that the participants were using the prompts to guide search. Additionally, in Experiment 3 a knowledge test was implemented, and performance in this test was associated with qualitative differences in search behavior: participants who were able to name the color(s) most associated with the prompts were faster to find the target than participants who were unable to do so. However, in Experiments 1-3 diagnosticity of the memory prompt was manipulated via base rate information, making it possible that participants were merely relying on base rate information to inform search in Experiment 3. In Experiment 4 we manipulated the diagnosticity of the prompts without manipulating base rate information and found a similar pattern of results as in Experiment 3. Together, the results emphasize the importance of base rate and diagnosticity information in visual search behavior. In the General discussion section we explore how a recent computational model of hypothesis generation (HyGene; Thomas, Dougherty, Sprenger, & Harbison, 2008), linking attention with long-term and working memory, accounts for the present results and provides a useful framework for cued-recall visual search. Copyright © 2013 Elsevier B.V. All rights reserved.
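
    A toy calculation of the distinction the experiments turn on (prompt diagnosticity versus colour base rates), under invented probabilities: in an Experiment-4-style design the prompts predict the target colour trial by trial even though the overall base rates of the colours are flat, so base-rate use alone cannot explain faster search.

```python
# Two equally frequent memory prompts; probability that the target is red
# given each prompt (invented values, not the study's actual design).
p_red_given = {"prompt_A": 0.8, "prompt_B": 0.2}

# Marginal base rate of a red target across all trials.
base_rate_red = 0.5 * p_red_given["prompt_A"] + 0.5 * p_red_given["prompt_B"]
print("P(red) overall:", base_rate_red)  # 0.5 -> base rates carry no information

# Expected accuracy of guessing the target colour before the display appears.
ignore_prompt = max(base_rate_red, 1 - base_rate_red)
use_prompt = sum(0.5 * max(p, 1 - p) for p in p_red_given.values())
print("guess ignoring prompt:", ignore_prompt)  # 0.5
print("guess using prompt:   ", use_prompt)     # 0.8
```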

  4. Fractal analysis of radiologists' visual scanning pattern in screening mammography

    NASA Astrophysics Data System (ADS)

    Alamudun, Folami T.; Yoon, Hong-Jun; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia

    2015-03-01

    Several researchers have investigated radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of radiologists' visual scanning patterns when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied to the composite 4-view scanpaths. For each case, the complexity of each radiologist's scanpath was measured using the fractal dimension estimated with the box counting method. The association between the fractal dimension of the radiologists' visual scanpaths, case pathology, case density, and radiologist experience was evaluated using fixed effects ANOVA. ANOVA showed that the complexity of the radiologists' visual search pattern in screening mammography depends on case-specific attributes (breast parenchyma density and case pathology) as well as on reader attributes, namely experience level. Visual scanning patterns are significantly different for benign and malignant cases than for normal cases. There is also substantial inter-observer variability which cannot be explained by experience level alone.
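
    A minimal sketch of the box-counting estimate mentioned above, applied to an invented scanpath rasterised onto the unit square; the scales, the random-walk path, and the helper name are assumptions for illustration only.

```python
import numpy as np

def box_counting_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal (box-counting) dimension of a 2-D scanpath.

    points: (N, 2) gaze coordinates scaled to the unit square. Counts the
    occupied boxes at several grid resolutions and fits log(count) against
    log(resolution); the slope is the dimension estimate.
    """
    pts = np.clip(np.asarray(points, dtype=float), 0.0, 1.0 - 1e-9)
    counts = []
    for s in scales:
        boxes = np.floor(pts * s).astype(int)           # box index per gaze sample
        counts.append(len({tuple(b) for b in boxes}))   # distinct occupied boxes
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# Invented scanpath: a random walk standing in for gaze coordinates.
rng = np.random.default_rng(2)
path = np.cumsum(rng.normal(scale=0.01, size=(2000, 2)), axis=0)
path = (path - path.min(axis=0)) / (path.max(axis=0) - path.min(axis=0))
print("box-counting dimension:", round(box_counting_dimension(path), 2))
```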

  5. Exposure to Organic Solvents Used in Dry Cleaning Reduces Low and High Level Visual Function

    PubMed Central

    Jiménez Barbosa, Ingrid Astrid

    2015-01-01

    Purpose To investigate whether exposure to occupational levels of organic solvents in the dry cleaning industry is associated with neurotoxic symptoms and visual deficits in the perception of basic visual features such as luminance contrast and colour, in higher level processing of global motion and form (Experiment 1), and in cognitive function as measured in a visual search task (Experiment 2). Methods The Q16 neurotoxic questionnaire, a measure of neurotoxicity commonly used by the World Health Organization, was administered to assess the neurotoxic status of a group of 33 dry cleaners exposed to occupational levels of organic solvents (OS) and 35 age-matched non dry-cleaners who had never worked in the dry cleaning industry. In Experiment 1, to assess visual function, contrast sensitivity, colour/hue discrimination (Munsell Hue 100 test), and global motion and form thresholds were assessed using computerised psychophysical tests. Sensitivity to global motion or form structure was quantified by varying the pattern coherence of global dot motion (GDM) and Glass patterns (oriented dot pairs), respectively (i.e., the percentage of dots/dot pairs that contribute to the perception of global structure). In Experiment 2, a letter visual-search task was used to measure reaction times (as a function of the number of elements: 4, 8, 16, 32, 64 and 100) in both parallel and serial search conditions. Results Dry cleaners exposed to organic solvents had significantly higher scores on the Q16 compared to non dry-cleaners, indicating that dry cleaners experienced more neurotoxic symptoms on average. The contrast sensitivity function for dry cleaners was significantly lower at all spatial frequencies relative to non dry-cleaners, which is consistent with previous studies. Poorer colour discrimination performance was also noted in dry cleaners compared with non dry-cleaners, particularly along the blue/yellow axis. In a new finding, we report that global form and motion thresholds for dry cleaners were also significantly higher, almost double those obtained from non dry-cleaners. However, reaction time performance on both parallel and serial visual search did not differ between dry cleaners and non dry-cleaners. Conclusions Exposure to occupational levels of organic solvents is associated with neurotoxicity, which is in turn associated with both low level visual deficits (such as the perception of contrast and discrimination of colour) and high level visual deficits (such as the perception of global form and motion), but not with visual search performance. The latter finding indicates that the deficits in visual function are unlikely to be due to changes in general cognitive performance. PMID:25933026

  6. Functional MRI mapping of visual function and selective attention for performance assessment and presurgical planning using conjunctive visual search.

    PubMed

    Parker, Jason G; Zalusky, Eric J; Kirbas, Cemil

    2014-03-01

    Accurate mapping of visual function and selective attention using fMRI is important in the study of human performance as well as in presurgical treatment planning of lesions in or near visual centers of the brain. Conjunctive visual search (CVS) is a useful tool for mapping visual function during fMRI because of its greater activation extent compared with high-capacity parallel search processes. The purpose of this work was to develop and evaluate a CVS that was capable of generating consistent activation in the basic and higher level visual areas of the brain by using a high number of distractors as well as an optimized contrast condition. Images from 10 healthy volunteers were analyzed and brain regions of greatest activation and deactivation were determined using a nonbiased decomposition of the results at the hemisphere, lobe, and gyrus levels. The results were quantified in terms of activation and deactivation extent and mean z-statistic. The proposed CVS was found to generate robust activation of the occipital lobe, as well as regions in the middle frontal gyrus associated with coordinating eye movements and in regions of the insula associated with task-level control and focal attention. As expected, the task demonstrated deactivation patterns commonly implicated in the default-mode network. Further deactivation was noted in the posterior region of the cerebellum, most likely associated with the formation of optimal search strategy. We believe the task will be useful in studies of visual and selective attention in the neuroscience community as well as in mapping visual function in clinical fMRI.

  7. Short-term perceptual learning in visual conjunction search.

    PubMed

    Su, Yuling; Lai, Yunpeng; Huang, Wanyi; Tan, Wei; Qu, Zhe; Ding, Yulong

    2014-08-01

    Although some studies have shown that training can improve cross-dimension conjunction search, less is known about the underlying mechanism. Specifically, it remains unclear whether training of visual conjunction search can successfully bind different features of separate dimensions into a new functional unit at early stages of visual processing. In the present study, we utilized stimulus specificity and generalization to provide a new approach to investigating the mechanisms underlying perceptual learning (PL) in visual conjunction search. Five experiments consistently showed that after 40 to 50 min of training in color-shape/orientation conjunction search, the ability to search for a certain conjunction target improved significantly, and the learning effects did not transfer to a new target that differed from the trained target in both color and shape/orientation features. However, the learning effects were not strictly specific. In color-shape conjunction search, although the learning effect could not transfer to a same-shape different-color target, it almost completely transferred to a same-color different-shape target. In color-orientation conjunction search, the learning effect partly transferred to a new target that shared the same color or the same orientation as the trained target. Moreover, the sum of the transfer effects for the same-color target and the same-orientation target in color-orientation conjunction search was algebraically equivalent to the learning effect for the trained target, showing an additive transfer effect. The different transfer patterns in color-shape and color-orientation conjunction search learning might reflect the different complexity and discriminability between feature dimensions. These results suggest a feature-based attention enhancement mechanism rather than a unitization mechanism underlying the short-term PL of color-shape/orientation conjunction search.

  8. An Empirical Study on Using Visual Embellishments in Visualization.

    PubMed

    Borgo, R; Abdul-Rahman, A; Mohamed, F; Grant, P W; Reppa, I; Floridi, L; Chen, Min

    2012-12-01

    In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, evidence for the benefits of using rhetorical illustrations or embellishments in visualization has so far been inconclusive. In this work, we report an empirical study to evaluate the hypotheses that visual embellishments may aid memorization, visual search, and concept comprehension. One major departure from related experiments in the literature is that we make use of a dual-task methodology in our experiment. This design offers an abstraction of typical situations where viewers do not have their full attention focused on visualization (e.g., in meetings and lectures). The secondary task introduces "divided attention" and makes the effects of visual embellishments more observable. In addition, it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in a visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to the benefits of visual embellishments in helping participants grasp key concepts from visualization.

  9. Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search

    PubMed Central

    Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.

    2012-01-01

    Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766

  10. Peripheral vision of youths with low vision: motion perception, crowding, and visual search.

    PubMed

    Tadin, Duje; Nyquist, Jeffrey B; Lusk, Kelly E; Corn, Anne L; Lappin, Joseph S

    2012-08-24

    Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10-17) and low vision (n = 24, ages 9-18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function.

  11. Investigation into the visual perceptive ability of anaesthetists during ultrasound-guided interscalene and femoral blocks conducted on soft embalmed cadavers: a randomised single-blind study.

    PubMed

    Mustafa, A; Seeley, J; Munirama, S; Columb, M; McKendrick, M; Schwab, A; Corner, G; Eisma, R; Mcleod, G

    2018-04-01

    Errors may occur during regional anaesthesia whilst searching for nerves, needle tips, and test doses. Poor visual search impacts on decision making, clinical intervention, and patient safety. We conducted a randomised single-blind study in a single university hospital. Twenty trainees and two consultants examined the paired B-mode and fused B-mode and elastography video recordings of 24 interscalene and 24 femoral blocks conducted on two soft embalmed cadavers. Perineural injection was randomised equally to 0.25, 0.5, and 1.0 ml volumes. Tissue displacement perceived on both imaging modalities was defined as 'target' or 'distractor'. Our primary objective was to test the anaesthetists' perception of the number and proportion of targets and distractors on B-mode and fused elastography videos collected during femoral and sciatic nerve block on soft embalmed cadavers. Our secondary objectives were to determine the differences between novices and experts and between test-dose volumes, and to measure the area and brightness of spread and strain patterns. All anaesthetists recognised perineural spread at the 0.25 ml volume. Distractor patterns were recognised in 133 (12%) of B-mode recordings and in 403 (38%) of fused B-mode and elastography recordings; P<0.001. With elastography, novice recognition improved from 12 to 37% (P<0.001), and consultant recognition increased from 24 to 53% (P<0.001). Distractor recognition improved from 8 to 31% using 0.25 ml volumes (P<0.001), and from 15 to 45% using 1 ml volumes (P<0.001). Visual search improved with fused elastography, with increased test-dose volume, and with consultant experience. A need remains to investigate image search strategies. Copyright © 2018 British Journal of Anaesthesia. Published by Elsevier Ltd. All rights reserved.

  12. Comparing the visual spans for faces and letters

    PubMed Central

    He, Yingchen; Scholz, Jennifer M.; Gage, Rachel; Kallie, Christopher S.; Liu, Tingting; Legge, Gordon E.

    2015-01-01

    The visual span—the number of adjacent text letters that can be reliably recognized on one fixation—has been proposed as a sensory bottleneck that limits reading speed (Legge, Mansfield, & Chung, 2001). Like reading, searching for a face is an important daily task that involves pattern recognition. Is there a similar limitation on the number of faces that can be recognized in a single fixation? Here we report on a study in which we measured and compared the visual-span profiles for letter and face recognition. A serial two-stage model for pattern recognition was developed to interpret the data. The first stage is characterized by factors limiting recognition of isolated letters or faces, and the second stage represents the interfering effect of nearby stimuli on recognition. Our findings show that the visual span for faces is smaller than that for letters. Surprisingly, however, when differences in first-stage processing for letters and faces are accounted for, the two visual spans become nearly identical. These results suggest that the concept of visual span may describe a common sensory bottleneck that underlies different types of pattern recognition. PMID:26129858

  13. Exploring conflict- and target-related movement of visual attention.

    PubMed

    Wendt, Mike; Garling, Marco; Luna-Rodriguez, Aquiles; Jacobsen, Thomas

    2014-01-01

    Intermixing trials of a visual search task with trials of a modified flanker task, the authors investigated whether the presentation of conflicting distractors at only one side (left or right) of a target stimulus triggers shifts of visual attention towards the contralateral side. Search time patterns provided evidence for lateral attention shifts only when participants performed the flanker task under an instruction assumed to widen the focus of attention, demonstrating that instruction-based control settings of an otherwise identical task can impact performance in an unrelated task. Contrasting conditions with response-related and response-unrelated distractors showed that shifting attention does not depend on response conflict and may be explained as stimulus-conflict-related withdrawal or target-related deployment of attention.

  14. The effect of scleral search coil lens wear on the eye

    PubMed Central

    Murphy, P.; Duncan, A.; Glennie, A.; Knox, P.

    2001-01-01

    BACKGROUND/AIM—Scleral search coils are used to measure eye movements. A recent abstract suggests that the coil can affect the eye by decreasing visual acuity, increasing intraocular pressure, and damaging the corneal and conjunctival surface. Such findings, if repeated in all subjects, would cast doubt on the credibility of the search coil as a reliable investigative technique. The aim of this study was to reassess the effect of the scleral search coil on visual function.
METHODS—Six volunteer subjects were selected to undergo coil wear and baseline measurements were taken of logMAR visual acuity, non-contact tonometry, keratometry, and slit lamp examination. Four drops of 0.4% benoxinate hydrochloride were instilled before insertion of the lens by an experienced clinician. The lens then remained on the eye for 30 minutes. Measurements of the four ocular health parameters were repeated after 15 and 30 minutes of lens wear. The lens was then removed and the health of the eye reassessed.
RESULTS—No obvious pattern of change was found in logMAR visual acuity, keratometry, or intraocular pressure. The lens did produce changes to the conjunctival and corneal surfaces, but this was not considered clinically significant.
CONCLUSION—Search coils do not appear to cause any significant effects on visual function. However, thorough prescreening of subjects and post-wear checks should be carried out on all coil wearers to ensure no adverse effects have been caused.

 PMID:11222341

  15. Functional MRI mapping of visual function and selective attention for performance assessment and presurgical planning using conjunctive visual search

    PubMed Central

    Parker, Jason G; Zalusky, Eric J; Kirbas, Cemil

    2014-01-01

    Background Accurate mapping of visual function and selective attention using fMRI is important in the study of human performance as well as in presurgical treatment planning of lesions in or near visual centers of the brain. Conjunctive visual search (CVS) is a useful tool for mapping visual function during fMRI because of its greater activation extent compared with high-capacity parallel search processes. Aims The purpose of this work was to develop and evaluate a CVS that was capable of generating consistent activation in the basic and higher level visual areas of the brain by using a high number of distractors as well as an optimized contrast condition. Materials and methods Images from 10 healthy volunteers were analyzed and brain regions of greatest activation and deactivation were determined using a nonbiased decomposition of the results at the hemisphere, lobe, and gyrus levels. The results were quantified in terms of activation and deactivation extent and mean z-statistic. Results The proposed CVS was found to generate robust activation of the occipital lobe, as well as regions in the middle frontal gyrus associated with coordinating eye movements and in regions of the insula associated with task-level control and focal attention. As expected, the task demonstrated deactivation patterns commonly implicated in the default-mode network. Further deactivation was noted in the posterior region of the cerebellum, most likely associated with the formation of optimal search strategy. Conclusion We believe the task will be useful in studies of visual and selective attention in the neuroscience community as well as in mapping visual function in clinical fMRI. PMID:24683515

  16. Data Flow Analysis and Visualization for Spatiotemporal Statistical Data without Trajectory Information.

    PubMed

    Kim, Seokyeon; Jeong, Seongmin; Woo, Insoo; Jang, Yun; Maciejewski, Ross; Ebert, David S

    2018-03-01

    Geographic visualization research has focused on a variety of techniques to represent and explore spatiotemporal data. The goal of those techniques is to enable users to explore events and interactions over space and time in order to facilitate the discovery of patterns, anomalies and relationships within the data. However, it is difficult to extract and visualize data flow patterns over time for non-directional statistical data without trajectory information. In this work, we develop a novel flow analysis technique to extract, represent, and analyze flow maps of non-directional spatiotemporal data unaccompanied by trajectory information. We estimate a continuous distribution of these events over space and time, and extract flow fields for spatial and temporal changes utilizing a gravity model. Then, we visualize the spatiotemporal patterns in the data by employing flow visualization techniques. The user is presented with temporal trends of geo-referenced discrete events on a map. As such, overall spatiotemporal data flow patterns help users analyze geo-referenced temporal events, such as disease outbreaks, crime patterns, etc. To validate our model, we discard the trajectory information in an origin-destination dataset and apply our technique to the data and compare the derived trajectories and the original. Finally, we present spatiotemporal trend analysis for statistical datasets including twitter data, maritime search and rescue events, and syndromic surveillance.
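
    The flow extraction described above can be pictured with a small numerical sketch. The snippet below is a minimal, hedged illustration, assuming a kernel density estimate over event locations per time interval and a simple inverse-square ("gravity") attraction of each grid cell toward the mass of the next interval; the grid resolution, bandwidth, and exact weighting are illustrative choices, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): estimate per-interval event density
# with a Gaussian KDE, then derive a gravity-style flow field in which each
# grid cell is attracted toward the mass of the following time interval.
import numpy as np
from scipy.stats import gaussian_kde

def density_grid(points, xs, ys):
    """points: (n, 2) event coordinates -> KDE density evaluated on a regular grid."""
    kde = gaussian_kde(points.T)
    gx, gy = np.meshgrid(xs, ys)
    return kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

def gravity_flow(d_t, d_next, xs, ys):
    """Per-cell flow vector: mass at time t attracted toward mass at t+1, ~ m_t * m_next / r^2."""
    gx, gy = np.meshgrid(xs, ys)
    cells = np.column_stack([gx.ravel(), gy.ravel()])
    m_t, m_next = d_t.ravel(), d_next.ravel()
    u, v = np.zeros(len(cells)), np.zeros(len(cells))
    for i, (cx, cy) in enumerate(cells):
        dx, dy = cells[:, 0] - cx, cells[:, 1] - cy
        r2 = dx ** 2 + dy ** 2
        r2[i] = np.inf                      # no self-attraction
        w = m_next / r2
        u[i], v[i] = m_t[i] * np.sum(w * dx), m_t[i] * np.sum(w * dy)
    return u.reshape(gx.shape), v.reshape(gx.shape)  # plot with e.g. plt.quiver(gx, gy, u, v)
```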

  17. Cube search, revisited.

    PubMed

    Zhang, Xuetao; Huang, Jie; Yigit-Elliott, Serap; Rosenholtz, Ruth

    2015-03-16

    Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with "equivalent" 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target. © 2015 ARVO.

  18. Cube search, revisited

    PubMed Central

    Zhang, Xuetao; Huang, Jie; Yigit-Elliott, Serap; Rosenholtz, Ruth

    2015-01-01

    Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with “equivalent” 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target. PMID:25780063

  19. Contextual cueing in 3D visual search depends on representations in planar-, not depth-defined space.

    PubMed

    Zang, Xuelian; Shi, Zhuanghua; Müller, Hermann J; Conci, Markus

    2017-05-01

    Learning of spatial inter-item associations can speed up visual search in everyday life, an effect referred to as contextual cueing (Chun & Jiang, 1998). Whereas previous studies investigated contextual cueing primarily using 2D layouts, the current study examined how 3D depth influences contextual learning in visual search. In two experiments, the search items were presented evenly distributed across front and back planes in an initial training session. In the subsequent test session, the search items were either swapped between the front and back planes (Experiment 1) or between the left and right halves (Experiment 2) of the displays. The results showed that repeated spatial contexts were learned efficiently under 3D viewing conditions, facilitating search in the training sessions, in both experiments. Importantly, contextual cueing remained robust and virtually unaffected following the swap of depth planes in Experiment 1, but it was substantially reduced (to nonsignificant levels) following the left-right side swap in Experiment 2. This result pattern indicates that spatial, but not depth, inter-item variations limit effective contextual guidance. Restated, contextual cueing (even under 3D viewing conditions) is primarily based on 2D inter-item associations, while depth-defined spatial regularities are probably not encoded during contextual learning. Hence, changing the depth relations does not impact the cueing effect.

  20. An integrated measure of display clutter based on feature content, user knowledge and attention allocation factors.

    PubMed

    Pankok, Carl; Kaber, David B

    2018-05-01

    Existing measures of display clutter in the literature generally exhibit weak correlations with task performance, which limits their utility in safety-critical domains. A literature review led to formulation of an integrated display data- and user knowledge-driven measure of display clutter. A driving simulation experiment was conducted in which participants were asked to search 'high' and 'low' clutter displays for navigation information. Data-driven measures and subjective perceptions of clutter were collected along with patterns of visual attention allocation and driving performance responses during time periods in which participants searched the navigation display for information. The new integrated measure was more strongly correlated with driving performance than other, previously developed measures of clutter, particularly in the case of low-clutter displays. Integrating display data and user knowledge factors with patterns of visual attention allocation shows promise for measuring display clutter and correlation with task performance, particularly for low-clutter displays. Practitioner Summary: A novel measure of display clutter was formulated, accounting for display data content, user knowledge states and patterns of visual attention allocation. The measure was evaluated in terms of correlations with driver performance in a safety-critical driving simulation study. The measure exhibited stronger correlations with task performance than previously defined measures.

  1. Neural representations of contextual guidance in visual search of real-world scenes.

    PubMed

    Preston, Tim J; Guo, Fei; Das, Koel; Giesbrecht, Barry; Eckstein, Miguel P

    2013-05-01

    Exploiting scene context and object-object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers' expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC's ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers' behavioral task, the MVPA analysis of LOC and the other areas' activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes.

  2. Pilots' Visual Scan Patterns and Attention Distribution During the Pursuit of a Dynamic Target.

    PubMed

    Yu, Chung-San; Wang, Eric Min-Yang; Li, Wen-Chin; Braithwaite, Graham; Greaves, Matthew

    2016-01-01

    The aim of the current research was to investigate pilots' visual scan patterns in order to assess attention distribution during air-to-air maneuvers. A total of 30 qualified mission-ready fighter pilots participated in this research. Eye movement data were collected by a portable head-mounted eye-tracking device, combined with a jet fighter simulator. To complete the task, pilots had to search for, pursue, and lock on a moving target while performing air-to-air tasks. There were significant differences in pilots' saccade duration (ms) in three operating phases, including searching (M = 241, SD = 332), pursuing (M = 311, SD = 392), and lock-on (M = 191, SD = 226). Also, there were significant differences in pilots' pupil sizes (pixels²), of which the lock-on phase was the largest (M = 27,237, SD = 6457), followed by pursuit (M = 26,232, SD = 6070), then searching (M = 25,858, SD = 6137). Furthermore, there were significant differences between expert and novice pilots in the percentage of fixation on the head-up display (HUD), time spent looking outside the cockpit, and situational awareness (SA) performance. Experienced pilots had better SA performance and paid more attention to the HUD, but focused less outside the cockpit when compared with novice pilots. Furthermore, pilots with better SA performance exhibited a smaller pupil size during the operational phase of lock-on while pursuing a dynamic target. Understanding pilots' visual scan patterns and attention distribution is beneficial to the design of interface displays in the cockpit and in developing human factors training syllabi to improve the safety of flight operations.

  3. Effects of Alzheimer’s Disease on Visual Target Detection: A “Peripheral Bias”

    PubMed Central

    Vallejo, Vanessa; Cazzoli, Dario; Rampa, Luca; Zito, Giuseppe A.; Feuerstein, Flurin; Gruber, Nicole; Müri, René M.; Mosimann, Urs P.; Nef, Tobias

    2016-01-01

    Visual exploration is an omnipresent activity in everyday life, and might represent an important determinant of visual attention deficits in patients with Alzheimer’s Disease (AD). The present study aimed at investigating visual search performance in AD patients, in particular target detection in the far periphery, in daily living scenes. Eighteen AD patients and 20 healthy controls participated in the study. They were asked to freely explore a hemispherical screen, covering ±90°, and to respond to targets presented at 10°, 30°, and 50° eccentricity, while their eye movements were recorded. Compared to healthy controls, AD patients recognized fewer targets appearing in the center. No difference was found in target detection in the periphery. This pattern was confirmed by the fixation distribution analysis. These results show a neglect for the central part of the visual field in AD patients and provide new insights by means of a search task involving a larger field of view. PMID:27582704

  4. Effects of Alzheimer's Disease on Visual Target Detection: A "Peripheral Bias".

    PubMed

    Vallejo, Vanessa; Cazzoli, Dario; Rampa, Luca; Zito, Giuseppe A; Feuerstein, Flurin; Gruber, Nicole; Müri, René M; Mosimann, Urs P; Nef, Tobias

    2016-01-01

    Visual exploration is an omnipresent activity in everyday life, and might represent an important determinant of visual attention deficits in patients with Alzheimer's Disease (AD). The present study aimed at investigating visual search performance in AD patients, in particular target detection in the far periphery, in daily living scenes. Eighteen AD patients and 20 healthy controls participated in the study. They were asked to freely explore a hemispherical screen, covering ±90°, and to respond to targets presented at 10°, 30°, and 50° eccentricity, while their eye movements were recorded. Compared to healthy controls, AD patients recognized fewer targets appearing in the center. No difference was found in target detection in the periphery. This pattern was confirmed by the fixation distribution analysis. These results show a neglect for the central part of the visual field in AD patients and provide new insights by means of a search task involving a larger field of view.

  5. Perceptual integration of motion and form information: evidence of parallel-continuous processing.

    PubMed

    von Mühlenen, A; Müller, H J

    2000-04-01

    In three visual search experiments, the processes involved in the efficient detection of motion-form conjunction targets were investigated. Experiment 1 was designed to estimate the relative contributions of stationary and moving nontargets to the search rate. Search rates were primarily determined by the number of moving nontargets; stationary nontargets sharing the target form also exerted a significant effect, but this was only about half as strong as that of moving nontargets; stationary nontargets not sharing the target form had little influence. In Experiments 2 and 3, the effects of display factors influencing the visual (form) quality of moving items (movement speed and item size) were examined. Increasing the speed of the moving items (> 1.5 degrees/sec) facilitated target detection when the task required segregation of the moving from the stationary items. When no segregation was necessary, increasing the movement speed impaired performance: With large display items, motion speed had little effect on target detection, but with small items, search efficiency declined when items moved faster than 1.5 degrees/sec. This pattern indicates that moving nontargets exert a strong effect on the search rate (Experiment 1) because of the loss of visual quality for moving items above a certain movement speed. A parallel-continuous processing account of motion-form conjunction search is proposed, which combines aspects of Guided Search (Wolfe, 1994) and attentional engagement theory (Duncan & Humphreys, 1989).

  6. Visual Search for Motion-Form Conjunctions: Selective Attention to Movement Direction.

    PubMed

    Von Mühlenen, Adrian; Müller, Hermann J

    1999-07-01

    In 2 experiments requiring visual search for conjunctions of motion and form, the authors reinvestigated whether motion-based filtering (e.g., P. McLeod, J. Driver, Z. Dienes, & J. Crisp, 1991) is direction selective and whether cuing of the target direction promotes efficient search performance. In both experiments, the authors varied the number of movement directions in the display and the predictability of the target direction. Search was less efficient when items moved in multiple (2, 3, and 4) directions as compared with just 1 direction. Furthermore, precuing of the target direction facilitated the search, even with "wrap-around" displays, relatively more when items moved in multiple directions. The authors proposed 2 principles to explain that pattern of effects: (a) interference on direction computation between items moving in different directions (e.g., N. Qian & R. A. Andersen, 1994) and (b) selective direction tuning of motion detectors involving a receptive-field contraction (cf. J. Moran & R. Desimone, 1985; S. Treue & J. H. R. Maunsell, 1996).

  7. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
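
    The aggregation step described above (motion sequences expressed as transitions between static poses and merged into a tree diagram) can be sketched with a plain prefix tree, as in the hedged example below; the dict-based structure is an illustrative assumption, not MotionFlow's actual implementation.

```python
# Minimal sketch (illustrative, not MotionFlow's implementation): aggregate
# pose-state sequences into a prefix tree whose nodes carry counts, so
# sequences sharing an initial sub-pattern share a branch.
def build_pattern_tree(sequences):
    """sequences: iterable of lists of discrete pose-state labels."""
    tree = {"count": 0, "children": {}}
    for seq in sequences:
        node = tree
        node["count"] += 1
        for state in seq:
            node = node["children"].setdefault(state, {"count": 0, "children": {}})
            node["count"] += 1
    return tree

# Example: three tracked gestures sharing a "stand" -> "raise_arm" prefix.
tree = build_pattern_tree([
    ["stand", "raise_arm", "wave"],
    ["stand", "raise_arm", "point"],
    ["stand", "crouch"],
])
```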

  8. fMRI of parents of children with Asperger Syndrome: a pilot study.

    PubMed

    Baron-Cohen, Simon; Ring, Howard; Chitnis, Xavier; Wheelwright, Sally; Gregory, Lloyd; Williams, Steve; Brammer, Mick; Bullmore, Ed

    2006-06-01

    People with autism or Asperger Syndrome (AS) show altered patterns of brain activity during visual search and emotion recognition tasks. Autism and AS are genetic conditions and parents may show the 'broader autism phenotype.' (1) To test if parents of children with AS show atypical brain activity during a visual search and an empathy task; (2) to test for sex differences during these tasks at the neural level; (3) to test if parents of children with autism are hyper-masculinized, as might be predicted by the 'extreme male brain' theory. We used fMRI during a visual search task (the Embedded Figures Test (EFT)) and an emotion recognition test (the 'Reading the Mind in the Eyes' (or Eyes) test). Twelve parents of children with AS, vs. 12 sex-matched controls. Factorial analysis was used to map main effects of sex, group (parents vs. controls), and sex × group interaction on brain function. An ordinal ANOVA also tested for regions of brain activity where females>males>fathers=mothers, to test for parental hyper-masculinization. RESULTS ON EFT TASK: Female controls showed more activity in extrastriate cortex than male controls, and both mothers and fathers showed even less activity in this area than sex-matched controls. There were no differences in group activation between mothers and fathers of children with AS. The ordinal ANOVA identified two specific regions in visual cortex (right and left, respectively) that showed the pattern Females>Males>Fathers=Mothers, both in BA 19. RESULTS ON EYES TASK: Male controls showed more activity in the left inferior frontal gyrus than female controls, and both mothers and fathers showed even more activity in this area compared to sex-matched controls. Female controls showed greater bilateral inferior frontal activation than males. This was not seen when comparing mothers to males, or mothers to fathers. The ordinal ANOVA identified two specific regions that showed the pattern Females>Males>Mothers=Fathers: left medial temporal gyrus (BA 21) and left dorsolateral prefrontal cortex (BA 44). Parents of children with AS show atypical brain function during both visual search and emotion recognition, in the direction of hyper-masculinization of the brain. Because of the small sample size, and lack of age-matching between parents and controls, such results constitute a pilot study that needs replicating with larger samples.

  9. Brain activation in response to randomized visual stimulation as obtained from conjunction and differential analysis: an fMRI study

    NASA Astrophysics Data System (ADS)

    Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.

    2014-11-01

    The objective of this multiple-subjects functional magnetic resonance imaging (fMRI) study was to identify the common brain areas that are activated when viewing black-and-white checkerboard pattern stimuli of various shapes, pattern and size and to investigate specific brain areas that are involved in processing static and moving visual stimuli. Sixteen participants viewed the moving (expanding ring, rotating wedge, flipping hour glass and bowtie and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli have black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used in generating brain activation. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, pattern and size activated multiple brain areas mostly in the left hemisphere. The activation in the right middle temporal gyrus (MTG) was found to be significantly higher in processing moving visual stimuli as compared to static stimulus. In contrast, the activation in the left calcarine sulcus and left lingual gyrus were significantly higher for static stimulus as compared to moving stimuli. Visual stimulation of various shapes, pattern and size used in this study indicated left lateralization of activation. The involvement of the right MTG in processing moving visual information was evident from differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas that are involved in the processing of static visual stimulus.

  10. Fractal Analysis of Visual Search Activity for Mass Detection During Mammographic Screening

    DOE PAGES

    Alamudun, Folami T.; Yoon, Hong-Jun; Hudson, Kathy; ...

    2017-02-21

    Purpose: The objective of this study was to assess the complexity of human visual search activity during mammographic screening using fractal analysis and to investigate its relationship with case and reader characteristics. Methods: The study was performed for the task of mammographic screening with simultaneous viewing of four coordinated breast views as typically done in clinical practice. Eye-tracking data and diagnostic decisions collected for 100 mammographic cases (25 normal, 25 benign, 50 malignant) and 10 readers (three board-certified radiologists and seven radiology residents) formed the corpus data for this study. The fractal dimension of the readers’ visual scanning patterns was computed with the Minkowski–Bouligand box-counting method and used as a measure of gaze complexity. Individual factor and group-based interaction ANOVA analysis was performed to study the association between fractal dimension, case pathology, breast density, and reader experience level. The consistency of the observed trends depending on gaze data representation was also examined. Results: Case pathology, breast density, reader experience level, and individual reader differences are all independent predictors of the visual scanning pattern complexity when screening for breast cancer. No higher order effects were found to be significant. Conclusions: Fractal characterization of visual search behavior during mammographic screening is dependent on case properties and image reader characteristics.
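
    For readers unfamiliar with the Minkowski-Bouligand method named in the abstract, the following is a minimal sketch of a box-counting estimate of the fractal dimension of a gaze scanpath; reducing the scanpath to its set of fixation points and the specific box sizes are assumptions made for illustration, not the study's exact pipeline.

```python
# Minimal sketch (assumptions: the scanpath is reduced to its fixation points,
# and box sizes are powers of two) of a Minkowski-Bouligand box-counting
# estimate of gaze-pattern complexity.
import numpy as np

def box_counting_dimension(points, sizes=(2, 4, 8, 16, 32, 64)):
    """points: (n, 2) gaze coordinates -> slope of log(box count) vs log(boxes per side)."""
    pts = np.asarray(points, dtype=float)
    pts -= pts.min(axis=0)                        # shift into the positive quadrant
    extent = pts.max() or 1.0
    counts = []
    for s in sizes:
        box = extent / s                          # s boxes per side
        occupied = {tuple(np.floor(p / box).astype(int)) for p in pts}
        counts.append(len(occupied))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return slope                                  # estimated fractal dimension
```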

  11. Lifespan changes in attention revisited: Everyday visual search.

    PubMed

    Brennan, Allison A; Bruderer, Alison J; Liu-Ambrose, Teresa; Handy, Todd C; Enns, James T

    2017-06-01

    This study compared visual search under everyday conditions among participants across the life span (healthy participants in 4 groups, with average age of 6 years, 8 years, 22 years, and 75 years, and 1 group averaging 73 years with a history of falling). The task involved opening a door and stepping into a room to find 1 of 4 everyday objects (apple, golf ball, coffee can, toy penguin) visible on shelves. The background for this study included 2 well-cited laboratory studies that pointed to different cognitive mechanisms underlying each end of the U-shaped pattern of visual search over the life span (Hommel et al., 2004; Trick & Enns, 1998). The results recapitulated some of the main findings of the laboratory studies (e.g., a U-shaped function, dissociable factors for maturation and aging), but there were several unique findings. These included large differences in the baseline salience of common objects at different ages, visual eccentricity effects that were unique to aging, and visual field effects that interacted strongly with age. These findings highlight the importance of studying cognitive processes in more natural settings, where factors such as personal relevance, life history, and bodily contributions to cognition (e.g., limb, head, and body movements) are more readily revealed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Visual search in Dementia with Lewy Bodies and Alzheimer's disease.

    PubMed

    Landy, Kelly M; Salmon, David P; Filoteo, J Vincent; Heindel, William C; Galasko, Douglas; Hamilton, Joanne M

    2015-12-01

    Visual search is an aspect of visual cognition that may be more impaired in Dementia with Lewy Bodies (DLB) than Alzheimer's disease (AD). To assess this possibility, the present study compared patients with DLB (n = 17), AD (n = 30), or Parkinson's disease with dementia (PDD; n = 10) to non-demented patients with PD (n = 18) and normal control (NC) participants (n = 13) on single-feature and feature-conjunction visual search tasks. In the single-feature task participants had to determine if a target stimulus (i.e., a black dot) was present among 3, 6, or 12 distractor stimuli (i.e., white dots) that differed in one salient feature. In the feature-conjunction task participants had to determine if a target stimulus (i.e., a black circle) was present among 3, 6, or 12 distractor stimuli (i.e., white dots and black squares) that shared either of the target's salient features. Results showed that target detection time in the single-feature task was not influenced by the number of distractors (i.e., "pop-out" effect) for any of the groups. In contrast, target detection time increased as the number of distractors increased in the feature-conjunction task for all groups, but more so for patients with AD or DLB than for any of the other groups. These results suggest that the single-feature search "pop-out" effect is preserved in DLB and AD patients, whereas ability to perform the feature-conjunction search is impaired. This pattern of preserved single-feature search with impaired feature-conjunction search is consistent with a deficit in feature binding that may be mediated by abnormalities in networks involving the dorsal occipito-parietal cortex. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Visual Search in Dementia with Lewy Bodies and Alzheimer’s Disease

    PubMed Central

    Landy, Kelly M.; Salmon, David P.; Filoteo, J. Vincent; Heindel, William C.; Galasko, Douglas; Hamilton, Joanne M.

    2016-01-01

    Visual search is an aspect of visual cognition that may be more impaired in Dementia with Lewy Bodies (DLB) than Alzheimer’s disease (AD). To assess this possibility, the present study compared patients with DLB (n=17), AD (n=30), or Parkinson’s disease with dementia (PDD; n=10) to non-demented patients with PD (n=18) and normal control (NC) participants (n=13) on single-feature and feature-conjunction visual search tasks. In the single-feature task participants had to determine if a target stimulus (i.e., a black dot) was present among 3, 6, or 12 distractor stimuli (i.e., white dots) that differed in one salient feature. In the feature-conjunction task participants had to determine if a target stimulus (i.e., a black circle) was present among 3, 6, or 12 distractor stimuli (i.e., white dots and black squares) that shared either of the target’s salient features. Results showed that target detection time in the single-feature task was not influenced by the number of distractors (i.e., “pop-out” effect) for any of the groups. In contrast, target detection time increased as the number of distractors increased in the feature-conjunction task for all groups, but more so for patients with AD or DLB than for any of the other groups. These results suggest that the single-feature search “pop-out” effect is preserved in DLB and AD patients, whereas ability to perform the feature-conjunction search is impaired. This pattern of preserved single-feature search with impaired feature-conjunction search is consistent with a deficit in feature binding that may be mediated by abnormalities in networks involving the dorsal occipito-parietal cortex. PMID:26476402

  14. A method for real-time visual stimulus selection in the study of cortical object perception.

    PubMed

    Leeds, Daniel D; Tarr, Michael J

    2016-06-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm(3) brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) Searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) real-time estimation of cortical responses to stimuli is reasonably consistent; 3) search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. A method for real-time visual stimulus selection in the study of cortical object perception

    PubMed Central

    Leeds, Daniel D.; Tarr, Michael J.

    2016-01-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit’s image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across predetermined 1 cm3 brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) Searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) Real-time estimation of cortical responses to stimuli are reasonably consistent; 3) Search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond. PMID:26973168

  16. The Rotated Speeded-Up Robust Features Algorithm (R-SURF)

    DTIC Science & Technology

    2014-06-01


  17. Memory under pressure: secondary-task effects on contextual cueing of visual search.

    PubMed

    Annac, Efsun; Manginelli, Angela A; Pollmann, Stefan; Shi, Zhuanghua; Müller, Hermann J; Geyer, Thomas

    2013-11-04

    Repeated display configurations improve visual search. Recently, the question has arisen whether this contextual cueing effect (Chun & Jiang, 1998) is itself mediated by attention, both in terms of selectivity and processing resources deployed. While it is accepted that selective attention modulates contextual cueing (Jiang & Leung, 2005), there is an ongoing debate whether the cueing effect is affected by a secondary working memory (WM) task, specifically at which stage WM influences the cueing effect: the acquisition of configural associations (e.g., Travis, Mattingley, & Dux, 2013) versus the expression of learned associations (e.g., Manginelli, Langer, Klose, & Pollmann, 2013). The present study re-investigated this issue. Observers performed a visual search in combination with a spatial WM task. The latter was applied on either early or late search trials--so as to examine whether WM load hampers the acquisition of or retrieval from contextual memory. Additionally, the WM and search tasks were performed either temporally in parallel or in succession--so as to permit the effects of spatial WM load to be dissociated from those of executive load. The secondary WM task was found to affect cueing in late, but not early, experimental trials--though only when the search and WM tasks were performed in parallel. This pattern suggests that contextual cueing involves a spatial WM resource, with spatial WM providing a workspace linking the current search array with configural long-term memory; as a result, occupying this workspace by a secondary WM task hampers the expression of learned configural associations.

  18. Preattentive visual search and perceptual grouping in schizophrenia.

    PubMed

    Carr, V J; Dewis, S A; Lewin, T J

    1998-06-15

    To help determine whether patients with schizophrenia show deficits in the stimulus-based aspects of preattentive processing, we undertook a series of experiments within the framework of feature integration theory. Thirty subjects with a DSM-III-R diagnosis of schizophrenia and 30 age-, gender-, and education-matched normal control subjects completed two computerized experimental tasks, a visual search task assessing parallel and serial information processing (Experiment 1) and a task which examined the effects of perceptual grouping on visual search strategies (Experiment 2). We also assessed current symptomatology and its relationship to task performance. While the schizophrenia subjects had longer reaction times in Experiment 1, their overall pattern of performance across both experimental tasks was similar to that of the control subjects, and generally unrelated to current symptomatology. Predictions from feature integration theory about the impact of varying display size (Experiment 1) and number of perceptual groups (Experiment 2) on the detection of feature and conjunction targets were strongly supported. This study revealed no firm evidence that schizophrenia is associated with a preattentive abnormality in visual search using stimuli that differ on the basis of physical characteristics. While subject and task characteristics may partially account for differences between this and previous studies, it is more likely that preattentive processing abnormalities in schizophrenia may occur only under conditions involving selected 'top-down' factors such as context and meaning.

  19. Understanding visual search patterns of dermatologists assessing pigmented skin lesions before and after online training.

    PubMed

    Krupinski, Elizabeth A; Chao, Joseph; Hofmann-Wellenhof, Rainer; Morrison, Lynne; Curiel-Lewandrowski, Clara

    2014-12-01

    The goal of this investigation was to explore the feasibility of characterizing the visual search characteristics of dermatologists evaluating images corresponding to single pigmented skin lesions (PSLs) (close-ups and dermoscopy) as a venue to improve training programs for dermoscopy. Two Board-certified dermatologists and two dermatology residents participated in a phased study. In phase I, they viewed a series of 20 PSL cases ranging from benign nevi to melanoma. The close-up and dermoscopy images of the PSL were evaluated sequentially and rated individually as benign or malignant, while eye position was recorded. Subsequently, the participating subjects completed an online dermoscopy training module that included a pre- and post-test assessing their dermoscopy skills (phase 2). Three months later, the subjects repeated their assessment on the 20 PSLs presented during phase I of the study. Significant differences in viewing time and eye-position parameters were observed as a function of level of expertise. Dermatologists overall have more efficient search than residents, generating fewer fixations with shorter dwells. Fixations and dwells associated with decisions changing from benign to malignant or vice versa from photo to dermatoscopic viewing were longer than for any other decision, indicating increased visual processing for those decisions. These differences in visual search may have implications for developing tools to teach dermatologists and residents about how to better utilize dermoscopy in clinical practice.

  20. Measuring the interrelations among multiple paradigms of visual attention: an individual differences approach.

    PubMed

    Huang, Liqiang; Mo, Lei; Li, Ying

    2012-04-01

    A large part of the empirical research in the field of visual attention has focused on various concrete paradigms. However, as yet, there has been no clear demonstration of whether or not these paradigms are indeed measuring the same underlying construct. We collected a very large data set (nearly 1.3 million trials) to address this question. We tested 257 participants on nine paradigms: conjunction search, configuration search, counting, tracking, feature access, spatial pattern, response selection, visual short-term memory, and change blindness. A fairly general attention factor was identified. Some of the participants were also tested on eight other paradigms. This general attention factor was found to be correlated with intelligence, visual marking, task switching, mental rotation, and Stroop task. On the other hand, a few paradigms that are very important in the attention literature (attentional capture, consonance-driven orienting, and inhibition of return) were found to be dissociated from this general attention factor.

  1. Beyond Information Retrieval: Ways To Provide Content in Context.

    ERIC Educational Resources Information Center

    Wiley, Deborah Lynne

    1998-01-01

    Provides an overview of information retrieval from mainframe systems to Web search engines; discusses collaborative filtering, data extraction, data visualization, agent technology, pattern recognition, classification and clustering, and virtual communities. Argues that rather than huge data-storage centers and proprietary software, we need…

  2. Selective scanpath repetition during memory-guided visual search.

    PubMed

    Wynn, Jordana S; Bone, Michael B; Dragan, Michelle C; Hoffman, Kari L; Buchsbaum, Bradley R; Ryan, Jennifer D

    2016-01-02

    Visual search efficiency improves with repetition of a search display, yet the mechanisms behind these processing gains remain unclear. According to Scanpath Theory, memory retrieval is mediated by repetition of the pattern of eye movements or "scanpath" elicited during stimulus encoding. Using this framework, we tested the prediction that scanpath recapitulation reflects relational memory guidance during repeated search events. Younger and older subjects were instructed to find changing targets within flickering naturalistic scenes. Search efficiency (search time, number of fixations, fixation duration) and scanpath similarity (repetition) were compared across age groups for novel (V1) and repeated (V2) search events. Younger adults outperformed older adults on all efficiency measures at both V1 and V2, while the search time benefit for repeated viewing (V1-V2) did not differ by age. Fixation-binned scanpath similarity analyses revealed repetition of initial and final (but not middle) V1 fixations at V2, with older adults repeating more initial V1 fixations than young adults. In young adults only, early scanpath similarity correlated negatively with search time at test, indicating increased efficiency, whereas the similarity of V2 fixations to middle V1 fixations predicted poor search performance. We conclude that scanpath compression mediates increased search efficiency by selectively recapitulating encoding fixations that provide goal-relevant input. Extending Scanpath Theory, results suggest that scanpath repetition varies as a function of time and memory integrity.
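
    A minimal sketch of a fixation-binned scanpath similarity measure in the spirit of the analysis above: compare the i-th fixation of the first viewing (V1) with the i-th fixation of the repeated viewing (V2) and count a repetition when they fall within a distance threshold. The position-wise binning and the threshold value are illustrative assumptions, not the authors' exact metric.

```python
# Minimal sketch (illustrative threshold and position-wise binning): the
# proportion of V2 fixations that land near the corresponding V1 fixation.
import numpy as np

def scanpath_similarity(v1_fix, v2_fix, threshold=50.0):
    """v1_fix, v2_fix: (n, 2) arrays of fixation coordinates in pixels."""
    n = min(len(v1_fix), len(v2_fix))
    if n == 0:
        return 0.0
    d = np.linalg.norm(np.asarray(v1_fix[:n]) - np.asarray(v2_fix[:n]), axis=1)
    return float(np.mean(d <= threshold))         # share of "repeated" fixations
```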

  3. Task relevance modulates the cortical representation of feature conjunctions in the target template.

    PubMed

    Reeder, Reshanne R; Hanke, Michael; Pollmann, Stefan

    2017-07-03

    Little is known about the cortical regions involved in representing task-related content in preparation for visual task performance. Here we used representational similarity analysis (RSA) to investigate the BOLD response pattern similarity between task relevant and task irrelevant feature dimensions during conjunction viewing and target template maintenance prior to visual search. Subjects were cued to search for a spatial frequency (SF) or orientation of a Gabor grating and we measured BOLD signal during cue and delay periods before the onset of a search display. RSA of delay period activity revealed that widespread regions in frontal, posterior parietal, and occipitotemporal cortices showed general representational differences between task relevant and task irrelevant dimensions (e.g., orientation vs. SF). In contrast, RSA of cue period activity revealed sensory-related representational differences between cue images (regardless of task) at the occipital pole and additionally in the frontal pole. Our data show that task and sensory information are represented differently during viewing and during target template maintenance, and that task relevance modulates the representation of visual information across the cortex.
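
    As a rough illustration of the representational similarity analysis (RSA) logic mentioned above, the sketch below builds a representational dissimilarity matrix (RDM) from condition-wise voxel patterns and correlates it with a model RDM; the correlation-distance metric and Spearman comparison are common RSA choices assumed here, not necessarily the authors' exact pipeline.

```python
# Minimal sketch of generic RSA: a correlation-distance RDM compared with a
# model RDM via Spearman correlation. Metric choices are assumptions.
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """patterns: (n_conditions, n_voxels) array -> condensed correlation-distance RDM."""
    return pdist(patterns, metric="correlation")

def rsa_fit(neural_patterns, model_rdm):
    """Spearman correlation between the neural RDM and a model RDM (both condensed)."""
    rho, _ = spearmanr(rdm(neural_patterns), model_rdm)
    return rho
```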

  4. Search path of a fossorial herbivore, Geomys bursarius, foraging in structurally complex plant communities

    USGS Publications Warehouse

    Andersen, Douglas C.

    1990-01-01

    The influence of habitat patchiness and unpalatable plants on the search path of the plains pocket gopher (Geomys bursarius) was examined in outdoor enclosures. Separate experiments were used to evaluate how individual animals explored (by tunnel excavation) enclosures free of plants except for one or more dense patches of a palatable plant (Daucus carota), a dense patch of an unpalatable species (Pastinaca sativa) containing a few palatable plants (D. carota), or a relatively sparse mixture of palatable (D. carota) and unpalatable (Raphanus sativus) species. Only two of eight individuals tested showed the predicted pattern of concentrating search effort in patches of palatable plants. The maintenance of relatively high levels of effort in less profitable sites may reflect the security afforded food resources by the solitary social system and fossorial lifestyle of G. bursarius. Unpalatable plants repelled animals under some conditions, but search paths in the sparsely planted mixed-species treatment suggest animals can use visual or other cues to orient excavations. Evidence supporting area-restricted search was weak. More information about the use of visual cues by G. bursarius and the influence of experience on individual search mode is needed for refining current models of foraging behavior in this species.

  5. GazeAppraise v. 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Andrew; Haass, Michael; Rintoul, Mark Daniel

    GazeAppraise advances the state of the art of gaze pattern analysis using methods that simultaneously analyze spatial and temporal characteristics of gaze patterns. GazeAppraise enables novel research in visual perception and cognition; for example, using shape features as distinguishing elements to assess individual differences in visual search strategy. Given a set of point-to-point gaze sequences, hereafter referred to as scanpaths, the method constructs multiple descriptive features for each scanpath. Once the scanpath features have been calculated, they are used to form a multidimensional vector representing each scanpath and cluster analysis is performed on the set of vectors from all scanpaths. An additional benefit of this method is the identification of causal or correlated characteristics of the stimuli, subjects, and visual task through statistical analysis of descriptive metadata distributions within and across clusters.
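
    The feature-then-cluster pipeline described above can be sketched as follows; the particular shape features and the use of k-means are illustrative assumptions rather than GazeAppraise's actual feature set or clustering method.

```python
# Minimal sketch (features and k-means are illustrative choices, not the
# tool's actual pipeline): one feature vector per scanpath, then clustering.
import numpy as np
from sklearn.cluster import KMeans

def scanpath_features(path):
    """path: (n, 2) ordered gaze points -> a small vector of shape descriptors."""
    p = np.asarray(path, dtype=float)
    steps = np.diff(p, axis=0)
    amplitudes = np.linalg.norm(steps, axis=1)
    width, height = p.max(axis=0) - p.min(axis=0)
    return np.array([
        amplitudes.sum(),                              # total path length
        amplitudes.mean(),                             # mean saccade amplitude
        (width + 1e-9) / (height + 1e-9),              # bounding-box aspect ratio
        np.std(np.arctan2(steps[:, 1], steps[:, 0])),  # spread of saccade directions
    ])

def cluster_scanpaths(paths, k=3):
    """Group scanpaths whose shape descriptors are similar."""
    X = np.vstack([scanpath_features(p) for p in paths])
    return KMeans(n_clusters=k, n_init=10).fit_predict(X)
```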

  6. Common capacity-limited neural mechanisms of selective attention and spatial working memory encoding

    PubMed Central

    Fusser, Fabian; Linden, David E J; Rahm, Benjamin; Hampel, Harald; Haenschel, Corinna; Mayer, Jutta S

    2011-01-01

    One characteristic feature of visual working memory (WM) is its limited capacity, and selective attention has been implicated as limiting factor. A possible reason why attention constrains the number of items that can be encoded into WM is that the two processes share limited neural resources. Functional magnetic resonance imaging (fMRI) studies have indeed demonstrated commonalities between the neural substrates of WM and attention. Here we investigated whether such overlapping activations reflect interacting neural mechanisms that could result in capacity limitations. To independently manipulate the demands on attention and WM encoding within one single task, we combined visual search and delayed discrimination of spatial locations. Participants were presented with a search array and performed easy or difficult visual search in order to encode one, three or five positions of target items into WM. Our fMRI data revealed colocalised activation for attention-demanding visual search and WM encoding in distributed posterior and frontal regions. However, further analysis yielded two patterns of results. Activity in prefrontal regions increased additively with increased demands on WM and attention, indicating regional overlap without functional interaction. Conversely, the WM load-dependent activation in visual, parietal and premotor regions was severely reduced during high attentional demand. We interpret this interaction as indicating the sites of shared capacity-limited neural resources. Our findings point to differential contributions of prefrontal and posterior regions to the common neural mechanisms that support spatial WM encoding and attention, providing new imaging evidence for attention-based models of WM encoding. PMID:21781193

  7. Transcranial direct current stimulation (tDCS) facilitates overall visual search response times but does not interact with visual search task factors

    PubMed Central

    Gordon, Barry

    2018-01-01

    Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a diffuse, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages. PMID:29558513

  8. Transcranial direct current stimulation (tDCS) facilitates overall visual search response times but does not interact with visual search task factors.

    PubMed

    Sung, Kyongje; Gordon, Barry

    2018-01-01

    Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a diffuse, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages.

  9. The development of organized visual search

    PubMed Central

    Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.

    2013-01-01

    Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve, they become more accurate at locating targets defined by a conjunction of features amongst distractors, but not targets defined by distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, as with other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560

  10. GEsture: an online hand-drawing tool for gene expression pattern search.

    PubMed

    Wang, Chunyan; Xu, Yiqing; Wang, Xuelin; Zhang, Li; Wei, Suyun; Ye, Qiaolin; Zhu, Youxiang; Yin, Hengfu; Nainwal, Manoj; Tanon-Reyes, Luis; Cheng, Feng; Yin, Tongming; Ye, Ning

    2018-01-01

    Gene expression profiling data provide useful information for the investigation of biological function and process. However, identifying a specific expression pattern from extensive time series gene expression data is not an easy task. Clustering, a popular method, is often used to group genes with similar expression profiles; however, genes with a 'desirable' or 'user-defined' pattern cannot be efficiently detected by clustering methods. To address these limitations, we developed an online tool called GEsture. Users can draw or graph a curve with a mouse instead of inputting abstract parameters for clustering methods. Taking a gene expression curve as input, GEsture explores time series datasets for genes showing similar, opposite and time-delayed expression patterns. We present three examples that illustrate the capacity of GEsture in gene hunting while following users' requirements. GEsture also provides visualization tools (such as expression pattern figures, heat maps and correlation networks) to display the search results. The outputs may provide useful information for researchers to understand the targets, function and biological processes of the involved genes.
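
    The core matching step GEsture performs (finding similar, opposite and time-delayed profiles relative to a drawn curve) can be illustrated with plain correlation against shifted copies of the query. The sketch below is a hypothetical reconstruction of that idea only, not the GEsture code; the function name, lag range and threshold are invented for illustration.

```python
import numpy as np

def pattern_matches(query, expression, max_lag=2, threshold=0.9):
    """Rank genes by similarity of their expression profile to a drawn query curve.

    query      : 1-D array of the hand-drawn curve sampled at the time points
    expression : dict mapping gene name -> 1-D array of expression values
    max_lag    : number of time points to shift when testing time-delayed patterns
    threshold  : absolute correlation needed to report a match

    Profiles are assumed to be non-constant (nonzero standard deviation).
    Returns a list of (gene, relation, lag, correlation) tuples.
    Illustrative sketch only, not the GEsture algorithm itself.
    """
    hits = []
    q = (query - query.mean()) / query.std()
    for gene, profile in expression.items():
        p = (profile - profile.mean()) / profile.std()
        for lag in range(0, max_lag + 1):
            n = len(q) - lag
            r = np.corrcoef(q[:n], p[lag:lag + n])[0, 1]
            if r >= threshold:
                hits.append((gene, "similar" if lag == 0 else "time-delayed", lag, r))
            elif r <= -threshold:
                hits.append((gene, "opposite" if lag == 0 else "opposite, delayed", lag, r))
    return sorted(hits, key=lambda h: -abs(h[3]))

# toy usage with made-up profiles
t = np.linspace(0, 2 * np.pi, 12)
genes = {"geneA": np.sin(t), "geneB": -np.sin(t), "geneC": np.roll(np.sin(t), 1)}
print(pattern_matches(np.sin(t), genes))
```

    A real tool would additionally resample the drawn curve onto the dataset's time points and guard against flat profiles before correlating.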

  11. Parallel Processing in Visual Search Asymmetry

    ERIC Educational Resources Information Center

    Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

    2004-01-01

    The difficulty of visual search may depend on assignment of the same visual elements as targets and distractors (search asymmetry). Easy C-in-O searches and difficult O-in-C searches are often associated with parallel and serial search, respectively. Here, the time course of visual search was measured for both tasks with speed-accuracy methods. The…

  12. Object integration requires attention: Visual search for Kanizsa figures in parietal extinction.

    PubMed

    Gögler, Nadine; Finke, Kathrin; Keller, Ingo; Müller, Hermann J; Conci, Markus

    2016-11-01

    The contribution of selective attention to object integration is a topic of debate: integration of parts into coherent wholes, such as in Kanizsa figures, is thought to arise either from pre-attentive, automatic coding processes or from higher-order processes involving selective attention. Previous studies have attempted to examine the role of selective attention in object integration either by employing visual search paradigms or by studying patients with unilateral deficits in selective attention. Here, we combined these two approaches to investigate object integration in visual search in a group of five patients with left-sided parietal extinction. Our search paradigm was designed to assess the effect of left- and right-grouped nontargets on detecting a Kanizsa target square. The results revealed comparable reaction time (RT) performance in patients and controls when they were presented with displays consisting of a single to-be-grouped item that had to be classified as target vs. nontarget. However, when display size increased to two items, patients showed an extinction-specific pattern of enhanced RT costs for nontargets that induced a partial shape grouping on the right, i.e., in the attended hemifield (relative to the ungrouped baseline). Together, these findings demonstrate a competitive advantage for right-grouped objects, which in turn indicates that in parietal extinction, attentional competition between objects particularly limits integration processes in the contralesional, i.e., left hemifield. These findings imply a crucial contribution of selective attentional resources to visual object integration. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Edge enhancement improves disruptive camouflage by emphasising false edges and creating pictorial relief

    PubMed Central

    Egan, John; Sharman, Rebecca J.; Scott-Brown, Kenneth C.; Lovell, Paul George

    2016-01-01

    Disruptive colouration is a visual camouflage composed of false edges and boundaries. Many disruptively camouflaged animals feature enhanced edges; light patches are surrounded by a lighter outline and/or dark patches are surrounded by a darker outline. This camouflage is particularly common in amphibians, reptiles and lepidopterans. We explored the role that this pattern has in creating effective camouflage. In a visual search task utilising an ultra-large display area mimicking search tasks that might be found in nature, edge enhanced disruptive camouflage increases crypsis, even on substrates that do not provide an obvious visual match. Specifically, edge enhanced camouflage is effective on backgrounds both with and without shadows; i.e. this is not solely due to background matching of the dark edge enhancement element with the shadows. Furthermore, when the dark component of the edge enhancement is omitted, the camouflage still provided better crypsis than control patterns without edge enhancement. This kind of edge enhancement improved camouflage on all background types. Lastly, we show that edge enhancement can create a perception of multiple surfaces. We conclude that edge enhancement increases the effectiveness of disruptive camouflage through mechanisms that may include the improved disruption of the object outline by implying pictorial relief. PMID:27922058

  14. Edge enhancement improves disruptive camouflage by emphasising false edges and creating pictorial relief.

    PubMed

    Egan, John; Sharman, Rebecca J; Scott-Brown, Kenneth C; Lovell, Paul George

    2016-12-06

    Disruptive colouration is a visual camouflage composed of false edges and boundaries. Many disruptively camouflaged animals feature enhanced edges; light patches are surrounded by a lighter outline and/or dark patches are surrounded by a darker outline. This camouflage is particularly common in amphibians, reptiles and lepidopterans. We explored the role that this pattern has in creating effective camouflage. In a visual search task utilising an ultra-large display area mimicking search tasks that might be found in nature, edge enhanced disruptive camouflage increases crypsis, even on substrates that do not provide an obvious visual match. Specifically, edge enhanced camouflage is effective on backgrounds both with and without shadows; i.e. this is not solely due to background matching of the dark edge enhancement element with the shadows. Furthermore, when the dark component of the edge enhancement is omitted, the camouflage still provided better crypsis than control patterns without edge enhancement. This kind of edge enhancement improved camouflage on all background types. Lastly, we show that edge enhancement can create a perception of multiple surfaces. We conclude that edge enhancement increases the effectiveness of disruptive camouflage through mechanisms that may include the improved disruption of the object outline by implying pictorial relief.

  15. The Associations between Visual Attention and Facial Expression Identification in Patients with Schizophrenia.

    PubMed

    Lin, I-Mei; Fan, Sheng-Yu; Huang, Tiao-Lai; Wu, Wan-Ting; Li, Shi-Ming

    2013-12-01

    Visual search is an important attention process that precedes information processing. Visual search also mediates the relationship between cognitive function (attention) and social cognition (such as facial expression identification). However, the association between visual attention and social cognition in patients with schizophrenia remains unknown. The purposes of this study were to examine differences in visual search performance and facial expression identification between patients with schizophrenia and normal controls, and to explore the relationship between visual search performance and facial expression identification in patients with schizophrenia. Fourteen patients with schizophrenia (mean age = 46.36±6.74) and 15 normal controls (mean age = 40.87±9.33) participated in this study. A visual search task, including feature search and conjunction search, and the Japanese and Caucasian Facial Expressions of Emotion were administered. Patients with schizophrenia had worse visual search performance than normal controls in both feature search and conjunction search, as well as worse facial expression identification, especially for surprise and sadness. In addition, there were negative associations between visual search performance and facial expression identification in patients with schizophrenia, especially for surprise and sadness; this pattern was not observed in normal controls. Patients with schizophrenia who had visual search deficits also showed impaired facial expression identification. Improving visual search and facial expression identification abilities may improve their social function and interpersonal relationships.

  16. Selective scanpath repetition during memory-guided visual search

    PubMed Central

    Wynn, Jordana S.; Bone, Michael B.; Dragan, Michelle C.; Hoffman, Kari L.; Buchsbaum, Bradley R.; Ryan, Jennifer D.

    2016-01-01

    Visual search efficiency improves with repetition of a search display, yet the mechanisms behind these processing gains remain unclear. According to Scanpath Theory, memory retrieval is mediated by repetition of the pattern of eye movements or “scanpath” elicited during stimulus encoding. Using this framework, we tested the prediction that scanpath recapitulation reflects relational memory guidance during repeated search events. Younger and older subjects were instructed to find changing targets within flickering naturalistic scenes. Search efficiency (search time, number of fixations, fixation duration) and scanpath similarity (repetition) were compared across age groups for novel (V1) and repeated (V2) search events. Younger adults outperformed older adults on all efficiency measures at both V1 and V2, while the search time benefit for repeated viewing (V1–V2) did not differ by age. Fixation-binned scanpath similarity analyses revealed repetition of initial and final (but not middle) V1 fixations at V2, with older adults repeating more initial V1 fixations than young adults. In young adults only, early scanpath similarity correlated negatively with search time at test, indicating increased efficiency, whereas the similarity of V2 fixations to middle V1 fixations predicted poor search performance. We conclude that scanpath compression mediates increased search efficiency by selectively recapitulating encoding fixations that provide goal-relevant input. Extending Scanpath Theory, results suggest that scanpath repetition varies as a function of time and memory integrity. PMID:27570471
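
    The fixation-binned similarity analysis described above can be approximated, in spirit, by splitting each viewing's fixation sequence into initial, middle and final bins and scoring how close the fixations in corresponding bins fall. The sketch below assumes fixations are simple (x, y) pixel coordinates and uses mean nearest-neighbour distance as a stand-in score; it is not the authors' metric.

```python
import numpy as np

def binned_scanpath_similarity(fix_v1, fix_v2, n_bins=3):
    """Compare two fixation sequences bin by bin (initial/middle/final thirds).

    fix_v1, fix_v2 : arrays of shape (n_fixations, 2) with x, y in pixels.
    Returns one score per bin: the negative mean distance from each V2 fixation
    to its nearest V1 fixation in the corresponding bin (higher = more similar).
    Illustrative only; not the published fixation-binned similarity measure.
    """
    scores = []
    bins_v1 = np.array_split(fix_v1, n_bins)
    bins_v2 = np.array_split(fix_v2, n_bins)
    for b1, b2 in zip(bins_v1, bins_v2):
        # pairwise distances between every V2 fixation and every V1 fixation
        d = np.linalg.norm(b2[:, None, :] - b1[None, :, :], axis=2)
        scores.append(-d.min(axis=1).mean())
    return scores

# toy usage: similar early fixations, divergent late fixations
v1 = np.array([[100, 100], [200, 150], [300, 300], [400, 420], [500, 500], [620, 480]])
v2 = np.array([[102, 98], [198, 152], [310, 290], [250, 100], [90, 400], [60, 60]])
print(binned_scanpath_similarity(v1, v2))
```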

  17. Effect of gravito-inertial cues on the coding of orientation in pre-attentive vision.

    PubMed

    Stivalet, P; Marendaz, C; Barraclough, L; Mourareau, C

    1995-01-01

    To see if the spatial reference frame used by pre-attentive vision is specified in a retino-centered frame or in a reference frame integrating visual and nonvisual information (vestibular and somatosensory), subjects were centrifuged in a non-pendular cabin and were asked to search for a target distinguishable from distractors by difference in orientation (Treisman's "pop-out" paradigm [1]). In a control condition, in which subjects were sitting immobilized but not centrifuged, this task gave an asymmetric search pattern: Search was rapid and pre-attentional except when the target was aligned with the horizontal retinal/head axis, in which case search was slow and attentional (2). Results using a centrifuge showed that slow/serial search patterns were obtained when the target was aligned with the subjective horizontal axis (and not with the horizontal retinal/head axis). These data suggest that a multisensory reference frame is used in pre-attentive vision. The results are interpreted in terms of Riccio and Stoffregen's "ecological theory" of orientation in which the vertical and horizontal axes constitute independent reference frames (3).

  18. Visual search performance among persons with schizophrenia as a function of target eccentricity.

    PubMed

    Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M

    2010-03-01

    The current study investigated one possible mechanism of impaired visual attention among patients with schizophrenia: a reduced visual span. Visual span is the region of the visual field from which one can extract information during a single eye fixation. This study hypothesized that schizophrenia-related visual search impairment is mediated, in part, by a smaller visual span. To test this hypothesis, 23 patients with schizophrenia and 22 healthy controls completed a visual search task where the target was pseudorandomly presented at different distances from the center of the display. Response times were analyzed as a function of search condition (feature vs. conjunctive), display size, and target eccentricity. Consistent with previous reports, patient search times were more adversely affected as the number of search items increased in the conjunctive search condition. Importantly, however, patients' conjunctive search times were also impacted to a greater degree by target eccentricity. Moreover, a significant impairment in patients' visual search performance was only evident when targets were more eccentric, and their performance was more similar to that of healthy controls when the target was located closer to the center of the search display. These results support the hypothesis that a narrower visual span may underlie impaired visual search performance among patients with schizophrenia. Copyright 2010 APA, all rights reserved

  19. Multiplicative processes in visual cognition

    NASA Astrophysics Data System (ADS)

    Credidio, H. F.; Teixeira, E. N.; Reis, S. D. S.; Moreira, A. A.; Andrade, J. S.

    2014-03-01

    The Central Limit Theorem (CLT) is certainly one of the most important results in the field of statistics. The simple fact that the addition of many random variables can generate the same probability curve elucidated the underlying process for a broad spectrum of natural systems, ranging from the statistical distribution of human heights to the distribution of measurement errors, to mention a few. An extension of the CLT can be applied to multiplicative processes, where a given measure is the result of the product of many random variables. The statistical signature of these processes is rather ubiquitous, appearing in a diverse range of natural phenomena, including the distributions of incomes, body weights, rainfall, and fragment sizes in a rock crushing process. Here we corroborate results from previous studies which indicate the presence of multiplicative processes in a particular type of visual cognition task, namely, the visual search for hidden objects. Precisely, our results from eye-tracking experiments show that the distribution of fixation times during visual search obeys a log-normal pattern, while the fixational radii of gyration follow a power-law behavior.
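
    The statistical claim here (products of many independent positive factors tend toward a log-normal form, because the logarithm of a product is a sum to which the ordinary CLT applies) is easy to verify numerically. The short simulation below is purely illustrative of that statistical point and is unrelated to the eye-tracking data themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Product of many positive random factors -> approximately log-normal,
# because log(product) = sum(logs) and the ordinary CLT applies to the sum.
n_vars, n_samples = 50, 100_000
factors = rng.uniform(0.5, 1.5, size=(n_samples, n_vars))
products = factors.prod(axis=1)

logs = np.log(products)
print("log of product: mean %.3f, std %.3f (approximately normal)" % (logs.mean(), logs.std()))

# Skewness of the raw products is strongly positive, as expected for a
# log-normal-like distribution; skewness of the logs is near zero.
def skew(x):
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

print("skew(products) = %.2f, skew(log products) = %.2f" % (skew(products), skew(logs)))
```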

  20. Increasing Navigation Speed at Endoluminal CT Colonography Reduces Colonic Visualization and Polyp Identification.

    PubMed

    Plumb, Andrew A; Phillips, Peter; Spence, Graeme; Mallett, Susan; Taylor, Stuart A; Halligan, Steve; Fanshawe, Thomas

    2017-08-01

    Purpose: To investigate the effect of increasing navigation speed on visual search and decision making during polyp identification at computed tomography (CT) colonography. Materials and Methods: Institutional review board permission was obtained to use deidentified CT colonography data for this prospective reader study. After obtaining informed consent from the readers, 12 CT colonography fly-through examinations that depicted eight polyps were presented at four different fixed navigation speeds to 23 radiologists. Speeds ranged from 1 cm/sec to 4.5 cm/sec. Gaze position was tracked by using an infrared eye tracker, and readers indicated that they saw a polyp by clicking a mouse. Patterns of searching and decision making by speed were investigated graphically and by multilevel modeling. Results: Readers identified polyps correctly in 56 of 77 (72.7%) viewings at the slowest speed but in only 137 of 225 (60.9%) viewings at the fastest speed (P = .004). They also identified fewer false-positive features at faster speeds (42 of 115 [36.5%] of videos at the slowest speed vs 89 of 345 [25.8%] at the fastest; P = .02). Gaze location was highly concentrated toward the central quarter of the screen area at faster speeds (mean proportion of gaze points at the slowest vs the fastest speed, 86% vs 97%, respectively). Conclusion: Faster navigation speed at endoluminal CT colonography led to progressive restriction of visual search patterns. Greater speed also reduced both true-positive and false-positive colorectal polyp identification. © RSNA, 2017. Online supplemental material is available for this article.

  1. Perceptual learning in visual search: fast, enduring, but non-specific.

    PubMed

    Sireteanu, R; Rettenbach, R

    1995-07-01

    Visual search has been suggested as a tool for isolating visual primitives. Elementary "features" were proposed to involve parallel search, while serial search is necessary for items without a "feature" status, or, in some cases, for conjunctions of "features". In this study, we investigated the role of practice in visual search tasks. We found that, under some circumstances, initially serial tasks can become parallel after a few hundred trials. Learning in visual search is far less specific than learning of visual discriminations and hyperacuity, suggesting that it takes place at another level in the central visual pathway, involving different neural circuits.

  2. Independent and additive repetition priming of motion direction and color in visual search.

    PubMed

    Kristjánsson, Arni

    2009-03-01

    Priming of visual search for Gabor patch stimuli, varying in color and local drift direction, was investigated. The task relevance of each feature varied between the different experimental conditions compared. When the target defining dimension was color, a large effect of color repetition was seen as well as a smaller effect of the repetition of motion direction. The opposite priming pattern was seen when motion direction defined the target--the effect of motion direction repetition was this time larger than that for color repetition. Finally, when neither was task relevant, and the target defining dimension was the spatial frequency of the Gabor patch, priming was seen for repetition of both color and motion direction, but the effects were smaller than in the previous two conditions. These results show that features do not necessarily have to be task relevant for priming to occur. There is little interaction between priming following repetition of color and priming following repetition of motion; the two features show independent and additive priming effects, most likely reflecting processing at separate sites in the nervous system, consistent with previous findings from neuropsychology and neurophysiology. The implications of the findings for theoretical accounts of priming in visual search are discussed.

  3. Fast ITTBC using pattern code on subband segmentation

    NASA Astrophysics Data System (ADS)

    Koh, Sung S.; Kim, Hanchil; Lee, Kooyoung; Kim, Hongbin; Jeong, Hun; Cho, Gangseok; Kim, Chunghwa

    2000-06-01

    Iterated Transformation Theory-Based Coding (ITTBC) suffers from very high computational complexity in the encoding phase, due to its exhaustive search. In this paper, our proposed image coding algorithm preprocesses the original image into a subband segmentation image by wavelet transform before coding, to reduce encoding complexity. Similar blocks are searched using 24 block pattern codes, which encode the edge information of image blocks, over the domain pool of the subband segmentation. As a result, numerical data show that the encoding time of the proposed coding method can be reduced to 98.82% of that of Jacquin's method, while the loss in quality relative to Jacquin's is about 0.28 dB in PSNR, which is visually negligible.
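
    The speed-up described above comes from restricting the exhaustive domain search of Jacquin-style fractal coding to domain blocks whose edge-based pattern code matches that of the range block. The sketch below illustrates only this pruning idea with a toy orientation code; the paper's 24 pattern codes and wavelet subband segmentation are not reproduced, and all names and thresholds are invented for illustration.

```python
import numpy as np

def pattern_code(block, n_orientations=4):
    """Assign a coarse edge-orientation code in 0..n_orientations, where the
    last code means 'flat'. The paper uses 24 edge-based pattern codes on a
    wavelet subband image; this is a simplified stand-in."""
    gy, gx = np.gradient(block.astype(float))
    if np.hypot(gx, gy).mean() < 1e-3:
        return n_orientations                       # flat block
    angle = np.degrees(np.arctan2(gy.mean(), gx.mean())) % 180.0
    return int((angle + 90.0 / n_orientations) % 180.0 // (180.0 / n_orientations))

def prune_domain_pool(range_block, domain_blocks):
    """Return only domain blocks whose pattern code matches the range block,
    shrinking the exhaustive search the abstract refers to."""
    code = pattern_code(range_block)
    return [d for d in domain_blocks if pattern_code(d) == code]

# toy usage with an 8x8 edge-bearing range block and a random domain pool
rng = np.random.default_rng(1)
rb = np.tile(np.r_[np.zeros(4), np.ones(4)], (8, 1))   # step edge block
pool = [rng.random((8, 8)) for _ in range(20)] + [rb.copy()]
print(len(pool), "->", len(prune_domain_pool(rb, pool)), "candidate domain blocks")
```

    The design trade-off is the one the abstract reports: matching on a cheap code discards most candidate domain blocks before the costly block comparison, buying a large encoding-time reduction at a small cost in PSNR.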

  4. Modeling guidance and recognition in categorical search: bridging human and computer object detection.

    PubMed

    Zelinsky, Gregory J; Peng, Yifan; Berg, Alexander C; Samaras, Dimitris

    2013-10-08

    Search is commonly described as a repeating cycle of guidance to target-like objects, followed by the recognition of these objects as targets or distractors. Are these indeed separate processes using different visual features? We addressed this question by comparing observer behavior to that of support vector machine (SVM) models trained on guidance and recognition tasks. Observers searched for a categorically defined teddy bear target in four-object arrays. Target-absent trials consisted of random category distractors rated in their visual similarity to teddy bears. Guidance, quantified as first-fixated objects during search, was strongest for targets, followed by target-similar, medium-similarity, and target-dissimilar distractors. False positive errors to first-fixated distractors also decreased with increasing dissimilarity to the target category. To model guidance, nine teddy bear detectors, using features ranging in biological plausibility, were trained on unblurred bears then tested on blurred versions of the same objects appearing in each search display. Guidance estimates were based on target probabilities obtained from these detectors. To model recognition, nine bear/nonbear classifiers, trained and tested on unblurred objects, were used to classify the object that would be fixated first (based on the detector estimates) as a teddy bear or a distractor. Patterns of categorical guidance and recognition accuracy were modeled almost perfectly by an HMAX model in combination with a color histogram feature. We conclude that guidance and recognition in the context of search are not separate processes mediated by different features, and that what the literature knows as guidance is really recognition performed on blurred objects viewed in the visual periphery.

  5. Modeling guidance and recognition in categorical search: Bridging human and computer object detection

    PubMed Central

    Zelinsky, Gregory J.; Peng, Yifan; Berg, Alexander C.; Samaras, Dimitris

    2013-01-01

    Search is commonly described as a repeating cycle of guidance to target-like objects, followed by the recognition of these objects as targets or distractors. Are these indeed separate processes using different visual features? We addressed this question by comparing observer behavior to that of support vector machine (SVM) models trained on guidance and recognition tasks. Observers searched for a categorically defined teddy bear target in four-object arrays. Target-absent trials consisted of random category distractors rated in their visual similarity to teddy bears. Guidance, quantified as first-fixated objects during search, was strongest for targets, followed by target-similar, medium-similarity, and target-dissimilar distractors. False positive errors to first-fixated distractors also decreased with increasing dissimilarity to the target category. To model guidance, nine teddy bear detectors, using features ranging in biological plausibility, were trained on unblurred bears then tested on blurred versions of the same objects appearing in each search display. Guidance estimates were based on target probabilities obtained from these detectors. To model recognition, nine bear/nonbear classifiers, trained and tested on unblurred objects, were used to classify the object that would be fixated first (based on the detector estimates) as a teddy bear or a distractor. Patterns of categorical guidance and recognition accuracy were modeled almost perfectly by an HMAX model in combination with a color histogram feature. We conclude that guidance and recognition in the context of search are not separate processes mediated by different features, and that what the literature knows as guidance is really recognition performed on blurred objects viewed in the visual periphery. PMID:24105460
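
    The modelling pipeline described above (detectors scored on blurred, peripherally viewed objects to predict the first fixation, then a classifier applied to the clearly viewed object) can be sketched generically with scikit-learn. Feature extraction (HMAX, colour histograms) is omitted and replaced with random stand-in arrays, so this shows only the shape of the pipeline, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in features: rows are objects, columns are visual features
# (the study used HMAX and colour-histogram features; these are random).
n_train, n_feat = 200, 32
X_train = rng.normal(size=(n_train, n_feat))
y_train = rng.integers(0, 2, n_train)          # 1 = teddy bear, 0 = distractor

# "Guidance" model: trained on clear objects, applied to blurred (peripheral) ones.
guidance = SVC(probability=True).fit(X_train, y_train)

# A four-object search display; blurring is crudely simulated by adding noise.
display_clear = rng.normal(size=(4, n_feat))
display_blurred = display_clear + rng.normal(scale=1.5, size=display_clear.shape)

p_target = guidance.predict_proba(display_blurred)[:, 1]
first_fixated = int(np.argmax(p_target))       # object with highest target probability

# "Recognition" model: a classifier applied to the (clear, foveated) first-fixated object.
recognition = SVC().fit(X_train, y_train)
decision = recognition.predict(display_clear[first_fixated:first_fixated + 1])[0]
print("first fixated object:", first_fixated, "classified as",
      "target" if decision else "distractor")
```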

  6. Context and competition in the capture of visual attention.

    PubMed

    Hickey, Clayton; Theeuwes, Jan

    2011-10-01

    Competition-based models of visual attention propose that perceptual ambiguity is resolved through inhibition, which is stronger when objects share a greater number of neural receptive fields (RFs). According to this theory, the misallocation of attention to a salient distractor--that is, the capture of attention--can be indexed in RF-scaled interference costs. We used this pattern to investigate distractor-related costs in visual search across several manipulations of temporal context. Distractor costs are generally larger under circumstances in which the distractor can be defined by features that have recently characterised the target, suggesting that capture occurs in these trials. However, our results show that search for a target in the presence of a salient distractor also produces RF-scaled costs when the features defining the target and distractor do not vary from trial to trial. Contextual differences in distractor costs appear to reflect something other than capture, perhaps a qualitative difference in the type of attentional mechanism deployed to the distractor.

  7. Gaze and visual search strategies of children with Asperger syndrome/high functioning autism viewing a magic trick.

    PubMed

    Joosten, Annette; Girdler, Sonya; Albrecht, Matthew A; Horlin, Chiara; Falkmer, Marita; Leung, Denise; Ordqvist, Anna; Fleischer, Håkan; Falkmer, Torbjörn

    2016-01-01

    The aim was to examine visual search patterns and strategies used by children with and without Asperger syndrome/high functioning autism (AS/HFA) while watching a magic trick. Limited responsivity to gaze cues is hypothesised to contribute to social deficits in children with AS/HFA. Twenty-one children with AS/HFA and 31 matched peers viewed a video of a gaze-cued magic trick twice. Between the viewings, they were informed about how the trick was performed. Participants' eye movements were recorded using a head-mounted eye-tracker. Children with AS/HFA looked less frequently, and had shorter fixations, on the magician's direct and averted gazes during both viewings, and looked more frequently at objects that were not gaze-cued and at areas outside the magician's face. After being informed of how the trick was conducted, both groups made fewer fixations on gaze-cued objects and direct gaze. Information may enhance effective visual strategies in children with and without AS/HFA.

  8. How does visual thinking work in the mind of a person with autism? A personal account.

    PubMed

    Grandin, Temple

    2009-05-27

    My mind is similar to an Internet search engine that searches for photographs. I use language to narrate the photo-realistic pictures that pop up in my imagination. When I design equipment for the cattle industry, I can test run it in my imagination similar to a virtual reality computer program. All my thinking is associative and not linear. To form concepts, I sort pictures into categories similar to computer files. To form the concept of orange, I see many different orange objects, such as oranges, pumpkins, orange juice and marmalade. I have observed that there are three different specialized autistic/Asperger cognitive types. They are: (i) visual thinkers such as I who are often poor at algebra, (ii) pattern thinkers such as Daniel Tammet who excel in math and music but may have problems with reading or writing composition, and (iii) verbal specialists who are good at talking and writing but they lack visual skills.

  9. How does visual thinking work in the mind of a person with autism? A personal account

    PubMed Central

    Grandin, Temple

    2009-01-01

    My mind is similar to an Internet search engine that searches for photographs. I use language to narrate the photo-realistic pictures that pop up in my imagination. When I design equipment for the cattle industry, I can test run it in my imagination similar to a virtual reality computer program. All my thinking is associative and not linear. To form concepts, I sort pictures into categories similar to computer files. To form the concept of orange, I see many different orange objects, such as oranges, pumpkins, orange juice and marmalade. I have observed that there are three different specialized autistic/Asperger cognitive types. They are: (i) visual thinkers such as I who are often poor at algebra, (ii) pattern thinkers such as Daniel Tammet who excel in math and music but may have problems with reading or writing composition, and (iii) verbal specialists who are good at talking and writing but they lack visual skills. PMID:19528028

  10. Spatial context learning survives interference from working memory load

    PubMed Central

    Vickery, Timothy J.; Sussman, Rachel S.; Jiang, Yuhong V.

    2010-01-01

    The human visual system is constantly confronted with an overwhelming amount of information, only a subset of which can be processed in complete detail. Attention and implicit learning are two important mechanisms that optimize vision. This study addresses the relationship between these two mechanisms. Specifically we ask: Is implicit learning of spatial context affected by the amount of working memory load devoted to an irrelevant task? We tested observers in visual search tasks where search displays occasionally repeated. Observers became faster searching repeated displays than unrepeated ones, showing contextual cueing. We found that the size of contextual cueing was unaffected by whether observers learned repeated displays under unitary attention or when their attention was divided using working memory manipulations. These results held when working memory was loaded by colors, dot patterns, individual dot locations, or multiple potential targets. We conclude that spatial context learning is robust to interference from manipulations that limit the availability of attention and working memory. PMID:20853996

  11. Effect of verbal instructions and image size on visual search strategies in basketball free throw shooting.

    PubMed

    Al-Abood, Saleh A; Bennett, Simon J; Hernandez, Francisco Moreno; Ashford, Derek; Davids, Keith

    2002-03-01

    We assessed the effects on basketball free throw performance of two types of verbal directions with an external attentional focus. Novices (n = 16) were pre-tested on free throw performance and assigned to two groups of similar ability (n = 8 in each). Both groups received verbal instructions with an external focus on either movement dynamics (movement form) or movement effects (e.g. ball trajectory relative to basket). The participants also observed a skilled model performing the task on either a small or large screen monitor, to ascertain the effects of visual presentation mode on task performance. After observation of six videotaped trials, all participants were given a post-test. Visual search patterns were monitored during observation and cross-referenced with performance on the pre- and post-test. Group effects were noted for verbal instructions and image size on visual search strategies and free throw performance. The 'movement effects' group saw a significant improvement in outcome scores between the pre-test and post-test. These results supported evidence that this group spent more viewing time on information outside the body than the 'movement dynamics' group. Image size affected both groups equally with more fixations of shorter duration when viewing the small screen. The results support the benefits of instructions when observing a model with an external focus on movement effects, not dynamics.

  12. RankExplorer: Visualization of Ranking Changes in Large Time Series Data.

    PubMed

    Shi, Conglei; Cui, Weiwei; Liu, Shixia; Xu, Panpan; Chen, Wei; Qu, Huamin

    2012-12-01

    For many applications involving time series data, people are often interested in the changes of item values over time as well as their ranking changes. For example, people search many words via search engines like Google and Bing every day. Analysts are interested in both the absolute searching number for each word as well as their relative rankings. Both sets of statistics may change over time. For very large time series data with thousands of items, how to visually present ranking changes is an interesting challenge. In this paper, we propose RankExplorer, a novel visualization method based on ThemeRiver to reveal the ranking changes. Our method consists of four major components: 1) a segmentation method which partitions a large set of time series curves into a manageable number of ranking categories; 2) an extended ThemeRiver view with embedded color bars and changing glyphs to show the evolution of aggregation values related to each ranking category over time as well as the content changes in each ranking category; 3) a trend curve to show the degree of ranking changes over time; 4) rich user interactions to support interactive exploration of ranking changes. We have applied our method to some real time series data and the case studies demonstrate that our method can reveal the underlying patterns related to ranking changes which might otherwise be obscured in traditional visualizations.
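
    Of the four components listed, the trend curve showing the degree of ranking change over time is the simplest to prototype: rank the items at each time step and measure how much the ranks move between consecutive steps. The sketch below uses the mean absolute rank shift as a stand-in trend value; RankExplorer's own definition may differ.

```python
import numpy as np

def ranking_change_trend(values):
    """values: array of shape (n_time_steps, n_items) of item values over time.

    Returns an array of length n_time_steps - 1 where each entry is the mean
    absolute change in rank between consecutive time steps (0 = stable ranking,
    larger = more churn). Illustrative only; not the RankExplorer implementation.
    """
    # rank items at every time step (rank 0 = largest value)
    ranks = np.argsort(np.argsort(-values, axis=1), axis=1)
    return np.abs(np.diff(ranks, axis=0)).mean(axis=1)

# toy usage: 5 time steps, 4 items, with some items overtaking others mid-series
values = np.array([[10, 8, 6, 4],
                   [10, 8, 6, 5],
                   [10, 8, 9, 6],
                   [10, 11, 9, 7],
                   [12, 11, 9, 7]])
print(ranking_change_trend(values))   # spikes where ranks swap, zero where stable
```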

  13. Modeling Efficient Serial Visual Search

    DTIC Science & Technology

    2012-08-01

    parafovea size) to explore the parameter space associated with serial search efficiency. Visual search as a paradigm has been studied meticulously for...continues (Over, Hooge, Vlaskamp, & Erkelens, 2007). Over et al. (2007) found that participants initially attended to general properties of the search environ...the efficiency of human serial visual search. There were three parameters that were manipulated in the modeling of the visual search process in this

  14. Age-related changes in conjunctive visual search in children with and without ASD.

    PubMed

    Iarocci, Grace; Armstrong, Kimberly

    2014-04-01

    Visual-spatial strengths observed among people with autism spectrum disorder (ASD) may be associated with increased efficiency of selective attention mechanisms such as visual search. In a series of studies, researchers examined the visual search of targets that share features with distractors in a visual array and concluded that people with ASD showed enhanced performance on visual search tasks. However, methodological limitations, the small sample sizes, and the lack of developmental analysis have tempered the interpretations of these results. In this study, we specifically addressed age-related changes in visual search. We examined conjunctive visual search in groups of children with (n = 34) and without ASD (n = 35) at 7-9 years of age when visual search performance is beginning to improve, and later, at 10-12 years, when performance has improved. The results were consistent with previous developmental findings; 10- to 12-year-old children were significantly faster visual searchers than their 7- to 9-year-old counterparts. However, we found no evidence of enhanced search performance among the children with ASD at either the younger or older ages. More research is needed to understand the development of visual search in both children with and without ASD. © 2014 International Society for Autism Research, Wiley Periodicals, Inc.

  15. Controlling the spotlight of attention: visual span size and flexibility in schizophrenia.

    PubMed

    Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M

    2011-10-01

    The current study investigated the size and flexible control of visual span among patients with schizophrenia during visual search performance. Visual span is the region of the visual field from which one extracts information during a single eye fixation, and a larger visual span size is linked to more efficient search performance. Therefore, a reduced visual span may explain patients' impaired performance on search tasks. The gaze-contingent moving window paradigm was used to estimate the visual span size of patients and healthy participants while they performed two different search tasks. In addition, changes in visual span size were measured as a function of two manipulations of task difficulty: target-distractor similarity and stimulus familiarity. Patients with schizophrenia searched more slowly across both tasks and conditions. Patients also demonstrated smaller visual span sizes on the easier search condition in each task. Moreover, healthy controls' visual span size increased as target discriminability or distractor familiarity increased. This modulation of visual span size, however, was reduced or not observed among patients. The implications of the present findings, with regard to previously reported visual search deficits, and other functional and structural abnormalities associated with schizophrenia, are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Visual Pattern Memory Requires "Foraging" Function in the Central Complex of "Drosophila"

    ERIC Educational Resources Information Center

    Wang, Zhipeng; Pan, Yufeng; Li, Weizhe; Jiang, Huoqing; Chatzimanolis, Lazaros; Chang, Jianhong; Gong, Zhefeng; Liu, Li

    2008-01-01

    The role of the "foraging" ("for") gene, which encodes a cyclic guanosine-3',5'-monophosphate (cGMP)-dependent protein kinase (PKG), in food-search behavior in "Drosophila" has been intensively studied. However, its functions in other complex behaviors have not been well-characterized. Here, we show experimentally in "Drosophila" that the "for"…

  17. Visual Scan Adaptation During Repeated Visual Search

    DTIC Science & Technology

    2010-01-01

    Junge, J. A. (2004). Searching for stimulus-driven shifts of attention. Psychonomic Bulletin & Review, 11, 876–881. Furst, C. J. (1971...search strategies cannot override attentional capture. Psychonomic Bulletin & Review, 11, 65–70. Wolfe, J. M. (1994). Guided search 2.0: A revised model...of visual search. Psychonomic Bulletin & Review, 1, 202–238. Wolfe, J. M. (1998a). Visual search. In H. Pashler (Ed.), Attention (pp. 13–73). East

  18. B-1 AFT Nacelle Flow Visualization Study

    NASA Technical Reports Server (NTRS)

    Celniker, Robert

    1975-01-01

    A 2-month program was conducted to perform engineering evaluation and design tasks to prepare for visualization and photography of the airflow along the aft portion of the B-1 nacelles and nozzles during flight test. Several methods of visualizing the flow were investigated and compared with respect to cost, impact of the device on the flow patterns, suitability for use in the flight environment, and operability throughout the flight. Data were based on a literature search and discussions with the test personnel. Tufts were selected as the flow visualization device in preference to several other devices studied. A tuft installation pattern has been prepared for the right-hand aft nacelle area of B-1 air vehicle No.2. Flight research programs to develop flow visualization devices other than tufts for use in future testing are recommended. A design study was conducted to select a suitable motion picture camera, to select the camera location, and to prepare engineering drawings sufficient to permit installation of the camera. Ten locations on the air vehicle were evaluated before the selection of the location in the horizontal stabilizer actuator fairing. The considerations included cost, camera angle, available volume, environmental control, flutter impact, and interference with antennas or other instrumentation.

  19. Visualizing a possible atmospheric teleconnection associated with UK floods in autumn 2000

    NASA Astrophysics Data System (ADS)

    Pall, P.; Bensema, K.; Stone, D.; Wehner, M. F.; Bethel, W.; Joy, K.

    2012-12-01

    Severe floods occurred across England and Wales during the record-wet autumn of the year 2000. Recently Pall et al. (2011) demonstrated that the risk of such floods occurring at that time substantially increased as a result of anthropogenic greenhouse gas emissions, and that the synoptic weather system associated with the floods was a common but anomalously strong 'Scandinavia' atmospheric circulation pattern (a Rossby-wave-like train of tropospheric anomalies in geopotential height, extending from the subtropical Atlantic across Eurasia, with a cyclone over the UK and a strong anticyclone over Scandinavia). Blackburn and Hoskins (2001) suggest that this pattern was itself catalyzed by an anomalous upper-tropospheric flow of air: originating with an ascent of air due to convection over warm sea surface temperatures in the western Tropical Pacific, and ending in a descent of air over the Amazon in the proposed source region of the Scandinavia pattern. However, evidence for this so-called 'teleconnection' is not entirely clear in the idealised climate models they used. Here we use visualization techniques to search for this teleconnection in the simulations generated with the more comprehensive seasonal-forecast-resolution climate model of Pall et al. (2011) -- by identifying anomalous streamflow patterns and using the UV-CDAT software developed at Berkeley Lab to do so. Furthermore, since several thousand simulations were generated (in order to capture the rare flood event), totaling hundreds of GB in size, we use parallelisation techniques to perform this search efficiently.

  20. Visual search in divided areas: dividers initially interfere with and later facilitate visual search.

    PubMed

    Nakashima, Ryoichi; Yokosawa, Kazuhiko

    2013-02-01

    A common search paradigm requires observers to search for a target among undivided spatial arrays of many items. Yet our visual environment is populated with items that are typically arranged within smaller (subdivided) spatial areas outlined by dividers (e.g., frames). It remains unclear how dividers impact visual search performance. In this study, we manipulated the presence and absence of frames and the number of frames subdividing search displays. Observers searched for a target O among Cs, a typically inefficient search task, and for a target C among Os, a typically efficient search. The results indicated that the presence of divider frames in a search display initially interferes with visual search tasks when targets are quickly detected (i.e., efficient search), leading to early interference; conversely, frames later facilitate visual search in tasks in which targets take longer to detect (i.e., inefficient search), leading to late facilitation. Such interference and facilitation appear only for conditions with a specific number of frames. Relative to previous studies of grouping (due to item proximity or similarity), these findings suggest that frame enclosures of multiple items may induce a grouping effect that influences search performance.

  1. How task demands influence scanpath similarity in a sequential number-search task.

    PubMed

    Dewhurst, Richard; Foulsham, Tom; Jarodzka, Halszka; Johansson, Roger; Holmqvist, Kenneth; Nyström, Marcus

    2018-06-07

    More and more researchers are considering the omnibus eye movement sequence (the scanpath) in their studies of visual and cognitive processing (e.g. Hayes, Petrov, & Sederberg, 2011; Madsen, Larson, Loschky, & Rebello, 2012; Ni et al., 2011; von der Malsburg & Vasishth, 2011). However, it remains unclear how recent methods for comparing scanpaths perform in experiments producing variable scanpaths, and whether these methods supplement more traditional analyses of individual oculomotor statistics. We address this problem for MultiMatch (Jarodzka et al., 2010; Dewhurst et al., 2012), evaluating its performance with a visual search-like task in which participants must fixate a series of target numbers in a prescribed order. This task should produce predictable sequences of fixations and thus provide a testing ground for scanpath measures. Task difficulty was manipulated by making the targets more or less visible through changes in font and the presence of distractors or visual noise. These changes in task demands led to slower search and more fixations. Importantly, they also resulted in a reduction in the between-subjects scanpath similarity, demonstrating that participants' gaze patterns became more heterogeneous in terms of saccade length and angle, and fixation position. This implies a divergent strategy or random component to eye-movement behaviour which increases as the task becomes more difficult. Interestingly, the duration of fixations along aligned vectors showed the opposite pattern, becoming more similar between observers in 2 of the 3 difficulty manipulations. This provides important information for vision scientists who may wish to use scanpath metrics to quantify variations in gaze across a spectrum of perceptual and cognitive tasks. Copyright © 2018 Elsevier Ltd. All rights reserved.
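
    MultiMatch-style comparison treats scanpaths as sequences of saccade vectors and scores them on several dimensions (length, direction, position, duration). The heavily simplified sketch below assumes two already-aligned fixation sequences of equal length and omits MultiMatch's scanpath simplification and alignment stages; the normalisation constants are arbitrary assumptions, not values from the method.

```python
import numpy as np

def simple_scanpath_similarity(fix_a, fix_b, screen_diag=2202.0):
    """Compare two aligned fixation sequences of equal length.

    fix_a, fix_b : arrays of shape (n, 3) holding x, y (pixels) and duration (ms).
    screen_diag  : normalisation constant (here the diagonal of a 1920x1080 screen).
    Returns similarity scores in [0, 1] for saccade length, saccade direction,
    fixation position, and fixation duration. This mimics the spirit of
    MultiMatch's per-dimension scores but not its alignment or exact formulas.
    """
    a, b = np.asarray(fix_a, float), np.asarray(fix_b, float)
    sac_a, sac_b = np.diff(a[:, :2], axis=0), np.diff(b[:, :2], axis=0)

    len_diff = np.abs(np.linalg.norm(sac_a, axis=1) - np.linalg.norm(sac_b, axis=1))
    ang_a, ang_b = np.arctan2(sac_a[:, 1], sac_a[:, 0]), np.arctan2(sac_b[:, 1], sac_b[:, 0])
    ang_diff = np.abs(np.angle(np.exp(1j * (ang_a - ang_b))))      # wrapped to [0, pi]
    pos_diff = np.linalg.norm(a[:, :2] - b[:, :2], axis=1)
    dur_diff = np.abs(a[:, 2] - b[:, 2]) / np.maximum(a[:, 2], b[:, 2])

    return {
        "length":    1 - (len_diff / screen_diag).mean(),
        "direction": 1 - (ang_diff / np.pi).mean(),
        "position":  1 - (pos_diff / screen_diag).mean(),
        "duration":  1 - dur_diff.mean(),
    }

# toy usage: nearly identical spatial paths, different fixation durations
a = [(100, 100, 200), (400, 300, 250), (800, 600, 300)]
b = [(110, 90, 180), (390, 310, 400), (820, 590, 150)]
print(simple_scanpath_similarity(a, b))
```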

  2. Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.

    PubMed

    Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min

    2013-12-01

    Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidature search results to support rapid interaction for active learning while minimizing watching videos, and visualizing aggregated information of the search results. We demonstrate the system for searching spatiotemporal attributes from sports video to identify key instances of the team and player performance.

  3. Beyond mind-reading: multi-voxel pattern analysis of fMRI data.

    PubMed

    Norman, Kenneth A; Polyn, Sean M; Detre, Greg J; Haxby, James V

    2006-09-01

    A key challenge for cognitive neuroscience is determining how mental representations map onto patterns of neural activity. Recently, researchers have started to address this question by applying sophisticated pattern-classification algorithms to distributed (multi-voxel) patterns of functional MRI data, with the goal of decoding the information that is represented in the subject's brain at a particular point in time. This multi-voxel pattern analysis (MVPA) approach has led to several impressive feats of mind reading. More importantly, MVPA methods constitute a useful new tool for advancing our understanding of neural information processing. We review how researchers are using MVPA methods to characterize neural coding and information processing in domains ranging from visual perception to memory search.
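
    The basic MVPA recipe (fit a pattern classifier to multi-voxel activity and estimate decoding accuracy with cross-validation) can be shown in a few lines on simulated data. The example below is generic and tied to no particular study or toolbox; the voxel counts and signal strength are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated data: 120 trials x 500 voxels, two conditions (e.g. faces vs houses).
# A small subset of voxels carries condition information.
n_trials, n_voxels = 120, 500
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5            # weak signal in 20 "informative" voxels

# Cross-validated decoding accuracy; chance is 0.5 for two balanced classes.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, patterns, labels, cv=5)
print("decoding accuracy per fold:", np.round(scores, 2), "mean:", scores.mean().round(2))
```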

  4. Attention Dysfunction Subtypes of Developmental Dyslexia

    PubMed Central

    Lewandowska, Monika; Milner, Rafał; Ganc, Małgorzata; Włodarczyk, Elżbieta; Skarżyński, Henryk

    2014-01-01

    Background: Previous studies indicate that many different aspects of attention are impaired in children diagnosed with developmental dyslexia (DD). The objective of the present study was to identify cognitive profiles of DD on the basis of attentional test performance. Material/Methods: 78 children with DD (30 girls, 48 boys, mean age of 12 years ±8 months) and 32 age- and sex-matched non-dyslexic children (14 girls, 18 boys) were examined using a battery of standardized tests of reading, phonological and attentional processes (alertness, covert shift of attention, divided attention, inhibition, flexibility, vigilance, and visual search). Cluster analysis was used to identify subtypes of DD. Results: Dyslexic children showed deficits in alertness, covert shift of attention, divided attention, flexibility, and visual search. Three different subtypes of DD were identified, each characterized by poorer performance on the reading, phonological awareness, and visual search tasks. Additionally, children in cluster no. 1 displayed deficits in flexibility and divided attention. In contrast to non-dyslexic children, cluster no. 2 performed worse in tasks involving alertness, covert shift of attention, divided attention, and vigilance. Cluster no. 3 showed impaired covert shift of attention. Conclusions: These results indicate different patterns of attentional impairments in dyslexic children. Remediation programs should address the individual child’s deficit profile. PMID:25387479
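
    Subtyping by cluster analysis over standardized attentional measures, as described above, can be prototyped with k-means. The abstract does not state which clustering algorithm or preprocessing the authors used, so the sketch below (random stand-in scores, z-scoring, three clusters) is illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in data: 78 children x 7 attentional measures (alertness, covert shift
# of attention, divided attention, inhibition, flexibility, vigilance, visual search).
scores = rng.normal(size=(78, 7))

# Standardize each measure, then look for three clusters as in the study.
z = StandardScaler().fit_transform(scores)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(z)

print("children per cluster:", np.bincount(km.labels_))
print("cluster centres (z-scores per attentional measure):")
print(np.round(km.cluster_centers_, 2))
```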

  5. Do People Take Stimulus Correlations into Account in Visual Search (Open Source)

    DTIC Science & Technology

    2016-03-10

    Do People Take Stimulus Correlations into Account in Visual Search? Manisha Bhardwaj, Ronald van den Berg, Wei Ji Ma...visual search experiments, distractors are often statistically independent of each other. However, stimuli in more naturalistic settings are often...contribute to bridging the gap between artificial and natural visual search tasks. Introduction: Visual target detection in displays consisting of multiple

  6. Disturbance of visual search by stimulating to posterior parietal cortex in the brain using transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

    In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspects of the functional processing of visual attention. Although the right posterior parietal cortex (PPC) is known to play a role in certain visual search tasks, little is known about the temporal aspects of this area's involvement. Three visual search tasks of differing difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal involvement of the PPC in visual search, we applied various stimulus onset asynchronies (SOAs) and measured visual search reaction times. Magnetic stimulation was applied to the right PPC or the left PPC with a figure-eight coil. The results show that reaction times in the hard feature task are longer than those in the easy feature task. At SOA = 150 ms, there was a significant increase in target-present reaction time when TMS pulses were applied, compared with the no-TMS condition. We infer that the right PPC is involved in visual search at about 150 ms after visual stimulus presentation: magnetic stimulation of the right PPC disturbed visual search processing, whereas stimulation of the left PPC had no effect on it.

  7. Optimal eye movement strategies: a comparison of neurosurgeons gaze patterns when using a surgical microscope.

    PubMed

    Eivazi, Shahram; Hafez, Ahmad; Fuhl, Wolfgang; Afkari, Hoorieh; Kasneci, Enkelejda; Lehecka, Martin; Bednarik, Roman

    2017-06-01

    Previous studies have consistently demonstrated gaze behaviour differences related to expertise during various surgical procedures. In micro-neurosurgery, however, there is a lack of evidence of empirically demonstrated individual differences associated with visual attention. It is unknown exactly how neurosurgeons see a stereoscopic magnified view in the context of micro-neurosurgery and what this implies for medical training. We report on an investigation of the eye movement patterns in micro-neurosurgery using a state-of-the-art eye tracker. We studied the eye movements of nine neurosurgeons while performing cutting and suturing tasks under a surgical microscope. Eye-movement characteristics, such as fixation (focus level) and saccade (visual search pattern), were analysed. The results show a strong relationship between the level of microsurgical skill and the gaze pattern, with greater expertise associated with greater eye control, stability, and focus in eye behaviour. For example, in the cutting task, well-trained surgeons' fixation durations on the operating field were roughly twice those of the novices (expert, 848 ms; novice, 402 ms). Maintaining steady visual attention on the target (fixation), as well as being able to quickly make eye jumps from one target to another (saccades), are two important elements for the success of neurosurgery. The captured gaze patterns can be used to improve medical education, as part of an assessment system or in a gaze-training application.
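
    Fixation and saccade measures such as the durations reported above are usually derived from raw gaze samples by an event-detection step. A minimal dispersion-threshold (I-DT style) detector is sketched below; the thresholds and sampling rate are assumptions for illustration and are not taken from the cited study.

```python
import numpy as np

def detect_fixations(gaze, max_dispersion=40.0, min_duration_ms=100, sample_rate_hz=250):
    """Very small I-DT-style fixation detector.

    gaze : array (n_samples, 2) of x, y in pixels sampled at sample_rate_hz.
    Groups consecutive samples whose bounding-box dispersion stays below
    max_dispersion pixels for at least min_duration_ms. Thresholds are
    illustrative, not those used in the cited study.
    Returns a list of (start_index, end_index, duration_ms, centroid_xy).
    """
    min_len = int(min_duration_ms * sample_rate_hz / 1000)
    fixations, start, n = [], 0, len(gaze)
    while start + min_len <= n:
        end = start + min_len
        if np.ptp(gaze[start:end], axis=0).sum() > max_dispersion:
            start += 1                      # window too spread out: slide forward
            continue
        # grow the window while dispersion stays under threshold
        while end < n and np.ptp(gaze[start:end + 1], axis=0).sum() <= max_dispersion:
            end += 1
        duration_ms = (end - start) * 1000 / sample_rate_hz
        fixations.append((start, end, duration_ms, gaze[start:end].mean(axis=0)))
        start = end
    return fixations

# toy usage: two stable gaze periods separated by a saccade-like jump
rng = np.random.default_rng(0)
gaze = np.vstack([rng.normal([400, 300], 2, (60, 2)),    # ~240 ms fixation
                  rng.normal([900, 500], 2, (50, 2))])   # ~200 ms fixation
for f in detect_fixations(gaze):
    print("fixation of %.0f ms at (%.0f, %.0f)" % (f[2], f[3][0], f[3][1]))
```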

  8. Visual search performance in the autism spectrum II: the radial frequency search task with additional segmentation cues.

    PubMed

    Almeida, Renita A; Dickinson, J Edwin; Maybery, Murray T; Badcock, Johanna C; Badcock, David R

    2010-12-01

    The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial frequency (RF) patterns with controllable amounts of target/distracter overlap on which high AQ participants showed more efficient search than low AQ observers. The current study extended the design of this search task by adding two lines which traverse the display on random paths sometimes intersecting target/distracters, other times passing between them. As with the EFT, these lines segment and group the display in ways that are task irrelevant. We tested two new groups of observers and found that while RF search was slowed by the addition of segmenting lines for both groups, the high AQ group retained a consistent search advantage (reflected in a shallower gradient for reaction time as a function of set size) over the low AQ group. Further, the high AQ group were significantly faster and more accurate on the EFT compared to the low AQ group. That is, the results from the present RF search task demonstrate that segmentation and grouping created by intersecting lines does not further differentiate the groups and is therefore unlikely to be a critical factor underlying the EFT performance difference. However, once again, we found that superior EFT performance was associated with shallower gradients on the RF search task. Copyright © 2010 Elsevier Ltd. All rights reserved.

  9. Optimal Achievable Encoding for Brain Machine Interface

    DTIC Science & Technology

    2017-12-22

    dictionary-based encoding approach to translate a visual image into sequential patterns of electrical stimulation in real time, in a manner that... networks, and by applying linear decoding to complete recorded populations of retinal ganglion cells for the first time. Third, we developed a greedy

  10. Investigating the Impact of Cognitive Style on Multimedia Learners' Understanding and Visual Search Patterns: An Eye-Tracking Approach

    ERIC Educational Resources Information Center

    Liu, Han-Chin

    2018-01-01

    Multimedia students' dependence on information from the outside world can have an impact on their ability to identify and locate information from multiple resources in learning environments and thereby affect the construction of mental models. Field dependence-independence has been used to assess the ability to extract essential information from…

  11. Priming and the guidance by visual and categorical templates in visual search.

    PubMed

    Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if priming is controlled for, categorical- and visual-based templates enhance search guidance to a similar degree.

  12. A process-based approach to characterizing the effect of acute alprazolam challenge on visual paired associate learning and memory in healthy older adults.

    PubMed

    Pietrzak, Robert H; Scott, James Cobb; Harel, Brian T; Lim, Yen Ying; Snyder, Peter J; Maruff, Paul

    2012-11-01

    Alprazolam is a benzodiazepine that, when administered acutely, results in impairments in several aspects of cognition, including attention, learning, and memory. However, the profile (i.e., component processes) that underlie alprazolam-related decrements in visual paired associate learning has not been fully explored. In this double-blind, placebo-controlled, randomized cross-over study of healthy older adults, we used a novel, "process-based" computerized measure of visual paired associate learning to examine the effect of a single, acute 1-mg dose of alprazolam on component processes of visual paired associate learning and memory. Acute alprazolam challenge was associated with a large magnitude reduction in visual paired associate learning and memory performance (d = 1.05). Process-based analyses revealed significant increases in distractor, exploratory, between-search, and within-search error types. Analyses of percentages of each error type suggested that, relative to placebo, alprazolam challenge resulted in a decrease in the percentage of exploratory errors and an increase in the percentage of distractor errors, both of which reflect memory processes. Results of this study suggest that acute alprazolam challenge decreases visual paired associate learning and memory performance by reducing the strength of the association between pattern and location, which may reflect a general breakdown in memory consolidation, with less evidence of reductions in executive processes (e.g., working memory) that facilitate visual paired associate learning and memory. Copyright © 2012 John Wiley & Sons, Ltd.

  13. Priming cases disturb visual search patterns in screening mammography

    NASA Astrophysics Data System (ADS)

    Lewis, Sarah J.; Reed, Warren M.; Tan, Alvin N. K.; Brennan, Patrick C.; Lee, Warwick; Mello-Thoms, Claudia

    2015-03-01

    Rationale and Objectives: To investigate the effect of inserting obvious cancers into a screening set of mammograms on the visual search of radiologists. Previous research presents conflicting evidence as to the impact of priming in scenarios where prevalence is naturally low, such as in screening mammography. Materials and Methods: An observer performance and eye position analysis study was performed. Four expert breast radiologists were asked to interpret two sets of 40 screening mammograms. The Control Set contained 36 normal and 4 malignant cases (located at cases #9, 14, 25 and 37). The Primed Set contained the same 34 normal and 4 malignant cases (in the same locations) plus 2 "primer" malignant cases replacing 2 normal cases (located at positions #20 and 34). Primer cases were defined as lower-difficulty cases containing salient malignant features inserted before cases of greater difficulty. Results: A Wilcoxon Signed Rank Test indicated no significant differences in sensitivity or specificity between the two sets (P > 0.05). The fixation count in the malignant cases (#25, 37) in the Primed Set after viewing the primer cases (#20, 34) decreased significantly (Z = -2.330, P = 0.020). False-negative errors were mostly due to sampling in the Primed Set (75%), in contrast to the Control Set (25%). Conclusion: The overall performance of radiologists is not affected by the inclusion of obvious cancer cases. However, changes in visual search behavior, as measured by eye-position recording, suggest a visual disturbance caused by the inclusion of priming cases in screening mammography.
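
    The paired comparison reported above (a Wilcoxon signed-rank test between the two reading conditions) can be reproduced in a few lines. The sketch below uses SciPy with invented per-reader, per-case fixation counts; it illustrates the test itself, not the study's actual data.

    ```python
    # Paired, non-parametric comparison of fixation counts between sets.
    # The counts below are invented stand-ins (4 readers x 2 malignant cases).
    from scipy.stats import wilcoxon

    control_fixations = [112, 98, 123, 105, 96, 131, 88, 119]
    primed_fixations  = [ 84, 90, 101,  88, 92, 110, 80,  97]

    stat, p = wilcoxon(control_fixations, primed_fixations)
    print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")
    ```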

  14. Learned suppression for multiple distractors in visual search.

    PubMed

    Won, Bo-Yeong; Geng, Joy J

    2018-05-07

    Visual search for a target object occurs rapidly if there are no distractors competing for attention, but this rarely happens in real-world environments. Distractors are almost always present and must be suppressed for target selection to succeed. Previous research suggests that one way this occurs is through the creation of a stimulus-specific distractor template. However, it remains unknown how information within such templates scales up with multiple distractors. Here we investigated the informational content of distractor templates created from repeated exposures to multiple distractors. We investigated this question using a visual search task in which participants searched for a gray square among colored squares. During "training," participants always saw the same set of colored distractors. During "testing," new distractor sets were interleaved with the trained distractors. The critical manipulation in each study was the distance (in color space) of the new test distractors from the trained distractors. We hypothesized that the pattern of distractor interference during testing would reveal the tuning of the suppression template: reaction times (RTs) should be commensurate with the degree to which distractor colors are encoded within the suppression template. Results from four experiments converged on the notion that the distractor template includes information about specific color values, but has broad "tuning," allowing suppression to generalize to new distractors. These results suggest that distractor templates, unlike target templates, encode multiple features and have broad representations, which have the advantage of generalizing suppression more easily to other potential distractors. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. Searching in clutter : visual attention strategies of expert pilots

    DOT National Transportation Integrated Search

    2012-10-22

    Clutter can slow visual search. However, experts may develop attention strategies that alleviate the effects of clutter on search performance. In the current study we examined the effects of global and local clutter on visual search performance and a...

  16. GeNemo: a search engine for web-based functional genomic data.

    PubMed

    Zhang, Yongqing; Cao, Xiaoyi; Zhong, Sheng

    2016-07-08

    A set of new data types emerged from functional genomic assays, including ChIP-seq, DNase-seq, FAIRE-seq and others. The results are typically stored as genome-wide intensities (WIG/bigWig files) or functional genomic regions (peak/BED files). These data types present new challenges to big data science. Here, we present GeNemo, a web-based search engine for functional genomic data. GeNemo searches user-input data against online functional genomic datasets, including the entire collection of ENCODE and mouse ENCODE datasets. Unlike text-based search engines, GeNemo's searches are based on pattern matching of functional genomic regions. This distinguishes GeNemo from text or DNA sequence searches. The user can input any complete or partial functional genomic dataset, for example, a binding intensity file (bigWig) or a peak file. GeNemo reports any genomic regions, ranging from hundreds of bases to hundreds of thousands of bases, from any of the online ENCODE datasets that share similar functional (binding, modification, accessibility) patterns. This is enabled by a Markov Chain Monte Carlo-based maximization process, executed on up to 24 parallel computing threads. By clicking on a search result, the user can visually compare her/his data with the found datasets and navigate the identified genomic regions. GeNemo is available at www.genemo.org. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
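
    GeNemo's matching operates on genomic regions rather than text. Its MCMC-based maximization is not detailed in the abstract, so the sketch below only illustrates the simpler underlying idea of region-based matching: computing what fraction of a query's BED-style peaks overlap any peak in a reference dataset. All coordinates are invented; this is not GeNemo's algorithm.

    ```python
    # Simplified illustration of region-based (rather than text-based) matching:
    # the fraction of query peaks that overlap any peak in a reference set.

    def overlaps(a, b):
        """a, b: (chrom, start, end) half-open genomic intervals."""
        return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

    def fraction_overlapping(query_peaks, reference_peaks):
        hits = sum(any(overlaps(q, r) for r in reference_peaks) for q in query_peaks)
        return hits / len(query_peaks) if query_peaks else 0.0

    query = [("chr1", 1000, 1500), ("chr1", 5000, 5200), ("chr2", 300, 800)]
    reference = [("chr1", 1400, 2000), ("chr2", 10000, 10500)]
    print(fraction_overlapping(query, reference))  # 1 of 3 peaks overlaps -> 0.33
    ```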

  17. Development of a Computerized Visual Search Test

    ERIC Educational Resources Information Center

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-01-01

    Visual attention and visual search are the features of visual perception, essential for attending and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information including the format of the test will be described. The test was designed…

  18. [Effect of object consistency in a spatial contextual cueing paradigm].

    PubMed

    Takeda, Yuji

    2008-04-01

    Previous studies demonstrated that attention can be quickly guided to a target location in a visual search task when the spatial configurations of search items and/or the object identities were repeated in the previous trials. This phenomenon is termed contextual cueing. Recently, it was reported that spatial configuration learning and object identity learning occurred independently, when novel contours were used as search items. The present study examined whether this learning occurred independently even when the search items were meaningful. The results showed that the contextual cueing effect was observed even if the relationships between the spatial locations and object identities were jumbled (Experiment 1). However, it disappeared when the search items were changed into geometric patterns (Experiment 2). These results suggest that the spatial configuration can be learned independent of the object identities; however, the use of the learned configuration is restricted by the learning situations.

  19. More efficient rejection of happy than of angry face distractors in visual search.

    PubMed

    Horstmann, Gernot; Scharlau, Ingrid; Ansorge, Ulrich

    2006-12-01

    In the present study, we examined whether the detection advantage for negative-face targets in crowds of positive-face distractors over positive-face targets in crowds of negative faces can be explained by differentially efficient distractor rejection. Search Condition A demonstrated more efficient distractor rejection with negative-face targets in positive-face crowds than vice versa. Search Condition B showed that target identity alone is not sufficient to account for this effect, because there was no difference in processing efficiency for positive- and negative-face targets within neutral crowds. Search Condition C showed differentially efficient processing with neutral-face targets among positive- or negative-face distractors. These results were obtained with both a within-participants (Experiment 1) and a between-participants (Experiment 2) design. The pattern of results is consistent with the assumption that efficient rejection of positive (more homogeneous) distractors is an important determinant of performance in search among (face) distractors.

  20. Temporal production and visuospatial processing.

    PubMed

    Benuzzi, Francesca; Basso, Gianpaolo; Nichelli, Paolo

    2005-12-01

    Current models of prospective timing hypothesize that estimated duration is influenced either by the attentional load or by the short-term memory requirements of a concurrent nontemporal task. In the present study, we addressed this issue with four dual-task experiments. In Exp. 1, the effect of memory load on both reaction time and temporal production was proportional to the number of items of a visuospatial pattern to hold in memory. In Exps. 2, 3, and 4, a temporal production task was combined with two visual search tasks involving either pre-attentive or attentional processing. Visual tasks interfered with temporal production: produced intervals were lengthened proportionally to the display size. In contrast, reaction times increased with display size only when a serial, effortful search was required. It appears that memory and perceptual set size, rather than nonspecific attentional or short-term memory load, can influence prospective timing.

  1. Stripes disrupt odour attractiveness to biting horseflies: battle between ammonia, CO₂, and colour pattern for dominance in the sensory systems of host-seeking tabanids.

    PubMed

    Blahó, Miklós; Egri, Adám; Száz, Dénes; Kriska, György; Akesson, Susanne; Horváth, Gábor

    2013-07-02

    As with mosquitoes, female tabanid flies search for mammalian hosts by visual and olfactory cues, because they require a blood meal before being able to produce and lay eggs. Polarotactic tabanid flies find striped or spotted patterns with intensity and/or polarisation modulation visually less attractive than homogeneous white, brown or black targets. Thus, this reduced optical attractiveness to tabanids can be one of the functions of striped or spotty coat patterns in ungulates. Ungulates emit CO2 via their breath, while ammonia originates from their decaying urine. As host-seeking female tabanids are strongly attracted to CO2 and ammonia, the question arises whether the poor visual attractiveness of stripes and spots to tabanids is or is not overcome by olfactory attractiveness. To answer this question we performed two field experiments in which the attractiveness to tabanid flies of homogeneous white, black and black-and-white striped three-dimensional targets (spheres and cylinders) and horse models provided with CO2 and ammonia was studied. Since tabanids are positively polarotactic, i.e. attracted to strongly and linearly polarised light, we measured the reflection-polarisation patterns of the test surfaces and demonstrated that these patterns were practically the same as those of real horses and zebras. We show here that striped targets are significantly less attractive to host-seeking female tabanids than homogeneous white or black targets, even when they emit tabanid-luring CO2 and ammonia. Although CO2 and ammonia increased the number of attracted tabanids, these chemicals did not overcome the weak visual attractiveness of stripes to host-seeking female tabanids. This result demonstrates the visual protection of striped coat patterns against attacks from blood-sucking dipterans, such as horseflies, known to transmit lethal diseases to ungulates. © 2013.

  2. Recovery of Visual Search following Moderate to Severe Traumatic Brain Injury

    PubMed Central

    Schmitter-Edgecombe, Maureen; Robertson, Kayela

    2015-01-01

    Introduction: Deficits in attentional abilities can significantly impact rehabilitation and recovery from traumatic brain injury (TBI). This study investigated the nature and recovery of pre-attentive (parallel) and attentive (serial) visual search abilities after TBI. Methods: Participants were 40 individuals with moderate to severe TBI who were tested following emergence from post-traumatic amnesia and approximately 8 months post-injury, as well as 40 age- and education-matched controls. Pre-attentive (automatic) and attentive (controlled) visual search situations were created by manipulating the saliency of the target item amongst distractor items in visual displays. The relationship between pre-attentive and attentive visual search rates and follow-up community integration was also explored. Results: The results revealed intact parallel (automatic) processing skills in the TBI group both post-acutely and at follow-up. In contrast, when attentional demands on visual search were increased by reducing the saliency of the target, the TBI group demonstrated poorer performances compared to the control group both post-acutely and 8 months post-injury. Neither pre-attentive nor attentive visual search slope values correlated with follow-up community integration. Conclusions: These results suggest that utilizing intact pre-attentive visual search skills during rehabilitation may help to reduce high mental workload situations, thereby improving the rehabilitation process. For example, making commonly used objects more salient in the environment should increase reliance on more automatic visual search processes and reduce visual search time for individuals with TBI. PMID:25671675

  3. Visual Search Elicits the Electrophysiological Marker of Visual Working Memory

    PubMed Central

    Emrich, Stephen M.; Al-Aidroos, Naseem; Pratt, Jay; Ferber, Susanne

    2009-01-01

    Background: Although limited in capacity, visual working memory (VWM) plays an important role in many aspects of visually-guided behavior. Recent experiments have demonstrated an electrophysiological marker of VWM encoding and maintenance, the contralateral delay activity (CDA), which has been shown in multiple tasks that have both explicit and implicit memory demands. Here, we investigate whether the CDA is evident during visual search, a thoroughly-researched task that is a hallmark of visual attention but has no explicit memory requirements. Methodology/Principal Findings: The results demonstrate that the CDA is present during a lateralized search task, and that it is similar in amplitude to the CDA observed in a change-detection task, but peaks slightly later. The changes in CDA amplitude during search were strongly correlated with VWM capacity, as well as with search efficiency. These results were paralleled by behavioral findings showing a strong correlation between VWM capacity and search efficiency. Conclusions/Significance: We conclude that the activity observed during visual search was generated by the same neural resources that subserve VWM, and that this activity reflects the maintenance of previously searched distractors. PMID:19956663

  4. Dorsal and ventral working memory-related brain areas support distinct processes in contextual cueing.

    PubMed

    Manginelli, Angela A; Baumgartner, Florian; Pollmann, Stefan

    2013-02-15

    Behavioral evidence suggests that the use of implicitly learned spatial contexts for improved visual search may depend on visual working memory resources. Working memory may be involved in contextual cueing in different ways: (1) for keeping implicitly learned working memory contents available during search or (2) for the capture of attention by contexts retrieved from memory. We mapped brain areas that were modulated by working memory capacity. Within these areas, activation was modulated by contextual cueing along the descending segment of the intraparietal sulcus, an area that has previously been related to maintenance of explicit memories. Increased activation for learned displays, but not modulated by the size of contextual cueing, was observed in the temporo-parietal junction area, previously associated with the capture of attention by explicitly retrieved memory items, and in the ventral visual cortex. This pattern of activation extends previous research on dorsal versus ventral stream functions in memory guidance of attention to the realm of attentional guidance by implicit memory. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Choosing colors for map display icons using models of visual search.

    PubMed

    Shive, Joshua; Francis, Gregory

    2013-04-01

    We show how to choose colors for icons on maps to minimize search time using predictions of a model of visual search. The model analyzes digital images of a search target (an icon on a map) and a search display (the map containing the icon) and predicts search time as a function of target-distractor color distinctiveness and target eccentricity. We parameterized the model using data from a visual search task and performed a series of optimization tasks to test the model's ability to choose colors for icons to minimize search time across icons. Map display designs made by this procedure were tested experimentally. In a follow-up experiment, we examined the model's flexibility to assign colors in novel search situations. The model fits human performance, performs well on the optimization tasks, and can choose colors for icons on maps with novel stimuli to minimize search time without requiring additional model parameter fitting. Models of visual search can suggest color choices that produce search time reductions for display icons. Designers should consider constructing visual search models as a low-cost method of evaluating color assignments.
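
    The paper's model predicts search time from target-distractor color distinctiveness and eccentricity, then optimizes color assignments. As a much-reduced stand-in for that pipeline, the sketch below simply picks, from a set of candidate icon colors, the one whose nearest map color is farthest away in CIELAB space. The candidate and map colors are invented, and the max-min distance heuristic is an assumption, not the authors' fitted search-time model.

    ```python
    # Hedged sketch: choose an icon color maximally distinct from the colors
    # already on the map, using minimum Euclidean distance in CIELAB space as
    # a stand-in for modeled target-distractor distinctiveness.
    import math

    def lab_distance(c1, c2):
        return math.dist(c1, c2)   # Euclidean distance between (L*, a*, b*) triples

    def pick_icon_color(candidates, distractor_colors):
        """Return the candidate whose *nearest* distractor color is farthest away."""
        return max(candidates,
                   key=lambda c: min(lab_distance(c, d) for d in distractor_colors))

    # Colors as (L*, a*, b*) triples (illustrative values only).
    candidates = [(55, 70, 50), (60, -50, 40), (75, 5, 80)]     # red, green, yellow
    map_colors = [(50, 60, 45), (85, 0, 5), (40, 10, -45)]      # terrain-like hues
    print(pick_icon_color(candidates, map_colors))
    ```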

  6. Searching while loaded: Visual working memory does not interfere with hybrid search efficiency but hybrid search uses working memory capacity.

    PubMed

    Drew, Trafton; Boettcher, Sage E P; Wolfe, Jeremy M

    2016-02-01

    In "hybrid search" tasks, such as finding items on a grocery list, one must search the scene for targets while also searching the list in memory. How is the representation of a visual item compared with the representations of items in the memory set? Predominant theories would propose a role for visual working memory (VWM) either as the site of the comparison or as a conduit between visual and memory systems. In seven experiments, we loaded VWM in different ways and found little or no effect on hybrid search performance. However, the presence of a hybrid search task did reduce the measured capacity of VWM by a constant amount regardless of the size of the memory or visual sets. These data are broadly consistent with an account in which VWM must dedicate a fixed amount of its capacity to passing visual representations to long-term memory for comparison to the items in the memory set. The data cast doubt on models in which the search template resides in VWM or where memory set item representations are moved from LTM through VWM to earlier areas for comparison to visual items.

  7. Visual perceptual load reduces auditory detection in typically developing individuals but not in individuals with autism spectrum disorders.

    PubMed

    Tillmann, Julian; Swettenham, John

    2017-02-01

    Previous studies examining selective attention in individuals with autism spectrum disorder (ASD) have yielded conflicting results, some suggesting superior focused attention (e.g., on visual search tasks), others demonstrating greater distractibility. This pattern could be accounted for by the proposal (derived by applying the Load theory of attention, e.g., Lavie, 2005) that ASD is characterized by an increased perceptual capacity (Remington, Swettenham, Campbell, & Coleman, 2009). Recent studies in the visual domain support this proposal. Here we hypothesize that ASD involves an enhanced perceptual capacity that also operates across sensory modalities, and test this prediction, for the first time using a signal detection paradigm. Seventeen neurotypical (NT) and 15 ASD adolescents performed a visual search task under varying levels of visual perceptual load while simultaneously detecting presence/absence of an auditory tone embedded in noise. Detection sensitivity (d') for the auditory stimulus was similarly high for both groups in the low visual perceptual load condition (e.g., 2 items: p = .391, d = 0.31, 95% confidence interval [CI] [-0.39, 1.00]). However, at a higher level of visual load, auditory d' reduced for the NT group but not the ASD group, leading to a group difference (p = .002, d = 1.2, 95% CI [0.44, 1.96]). As predicted, when visual perceptual load was highest, both groups then showed a similarly low auditory d' (p = .9, d = 0.05, 95% CI [-0.65, 0.74]). These findings demonstrate that increased perceptual capacity in ASD operates across modalities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
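
    Detection sensitivity d' in the task above follows the standard signal-detection definition, d' = z(hit rate) − z(false-alarm rate). The sketch below computes it with a log-linear correction for extreme rates; the trial counts are invented for illustration and are not the study's data.

    ```python
    # Standard signal-detection sensitivity index: d' = z(H) - z(FA).
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # Log-linear correction avoids infinite z-scores when a rate is 0 or 1.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Invented counts from 50 tone-present and 50 tone-absent trials.
    print(round(d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42), 2))
    ```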

  8. Visual search and attention: an overview.

    PubMed

    Davis, Elizabeth T; Palmer, John

    2004-01-01

    This special feature issue is devoted to attention and visual search. Attention is a central topic in psychology and visual search is both a versatile paradigm for the study of visual attention and a topic of study in itself. Visual search depends on sensory, perceptual, and cognitive processes. As a result, the search paradigm has been used to investigate a diverse range of phenomena. Manipulating the search task can vary the demands on attention. In turn, attention modulates visual search by selecting and limiting the information available at various levels of processing. Focusing on the intersection of attention and search provides a relatively structured window into the wide world of attentional phenomena. In particular, the effects of divided attention are illustrated by the effects of set size (the number of stimuli in a display) and the effects of selective attention are illustrated by cueing subsets of stimuli within the display. These two phenomena provide the starting point for the articles in this special issue. The articles are organized into four general topics to help structure the issues of attention and search.

  9. Intertrial Temporal Contextual Cuing: Association across Successive Visual Search Trials Guides Spatial Attention

    ERIC Educational Resources Information Center

    Ono, Fuminori; Jiang, Yuhong; Kawahara, Jun-ichiro

    2005-01-01

    Contextual cuing refers to the facilitation of performance in visual search due to the repetition of the same displays. Whereas previous studies have focused on contextual cuing within single-search trials, this study tested whether 1 trial facilitates visual search of the next trial. Participants searched for a T among Ls. In the training phase,…

  10. Visual search deficits in amblyopia.

    PubMed

    Tsirlin, Inna; Colpa, Linda; Goltz, Herbert C; Wong, Agnes M F

    2018-04-01

    Amblyopia is a neurodevelopmental disorder defined as a reduction in visual acuity that cannot be corrected by optical means. It has been associated with low-level deficits. However, research has demonstrated a link between amblyopia and visual attention deficits in counting, tracking, and identifying objects. Visual search is a useful tool for assessing visual attention but has not been well studied in amblyopia. Here, we assessed the extent of visual search deficits in amblyopia using feature and conjunction search tasks. We compared the performance of participants with amblyopia (n = 10) to those of controls (n = 12) on both feature and conjunction search tasks using Gabor patch stimuli, varying spatial bandwidth and orientation. To account for the low-level deficits inherent in amblyopia, we measured individual contrast and crowding thresholds and monitored eye movements. The display elements were then presented at suprathreshold levels to ensure that visibility was equalized across groups. There was no performance difference between groups on feature search, indicating that our experimental design controlled successfully for low-level amblyopia deficits. In contrast, during conjunction search, median reaction times and reaction time slopes were significantly larger in participants with amblyopia compared with controls. Amblyopia differentially affects performance on conjunction visual search, a more difficult task that requires feature binding and possibly the involvement of higher-level attention processes. Deficits in visual search may affect day-to-day functioning in people with amblyopia.

  11. Visual search in a forced-choice paradigm

    NASA Technical Reports Server (NTRS)

    Holmgren, J. E.

    1974-01-01

    The processing of visual information was investigated in the context of two visual search tasks. The first was a forced-choice task in which one of two alternative letters appeared in a visual display of from one to five letters. The second task included trials on which neither of the two alternatives was present in the display. Search rates were estimated from the slopes of best linear fits to response latencies plotted as a function of the number of items in the visual display. These rates were found to be much slower than those estimated in yes-no search tasks. This result was interpreted as indicating that the processes underlying visual search in yes-no and forced-choice tasks are not the same.
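
    Estimating a search rate from the slope of a best linear fit of response latency against display size, as described above, amounts to a one-line regression. The sketch below shows the idea with invented mean latencies; the slope is the search rate in milliseconds per additional display item.

    ```python
    # Search rate as the slope of mean RT against display size.
    # The latencies below are invented; slope units are ms per additional item.
    import numpy as np

    set_sizes = np.array([1, 2, 3, 4, 5])
    mean_rt_ms = np.array([520, 555, 588, 624, 660])   # hypothetical latencies

    slope, intercept = np.polyfit(set_sizes, mean_rt_ms, deg=1)
    print(f"search rate ≈ {slope:.1f} ms/item, intercept ≈ {intercept:.0f} ms")
    ```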

  12. Adding a visualization feature to web search engines: it's time.

    PubMed

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  13. Implicit Object Naming in Visual Search: Evidence from Phonological Competition

    PubMed Central

    Walenchok, Stephen C.; Hout, Michael C.; Goldinger, Stephen D.

    2016-01-01

    During visual search, people are distracted by objects that visually resemble search targets; search is impaired when targets and distractors share overlapping features. In this study, we examined whether a nonvisual form of similarity, overlapping object names, can also affect search performance. In three experiments, people searched for images of real-world objects (e.g., a beetle) among items whose names either all shared the same phonological onset (/bi/), or were phonologically varied. Participants either searched for one or three potential targets per trial, with search targets designated either visually or verbally. We examined standard visual search (Experiments 1 and 3) and a self-paced serial search task wherein participants manually rejected each distractor (Experiment 2). We hypothesized that people would maintain visual templates when searching for single targets, but would rely more on object names when searching for multiple items and when targets were verbally cued. This reliance on target names would make performance susceptible to interference from similar-sounding distractors. Experiments 1 and 2 showed the predicted interference effect in conditions with high memory load and verbal cues. In Experiment 3, eye-movement results showed that phonological interference resulted from small increases in dwell time to all distractors. The results suggest that distractor names are implicitly activated during search, slowing attention disengagement when targets and distractors share similar names. PMID:27531018

  14. Insect Detection of Small Targets Moving in Visual Clutter

    PubMed Central

    Barnett, Paul D; O'Carroll, David C

    2006-01-01

    Detection of targets that move within visual clutter is a common task for animals searching for prey or conspecifics, a task made even more difficult when a moving pursuer needs to analyze targets against the motion of background texture (clutter). Despite the limited optical acuity of the compound eye of insects, this challenging task seems to have been solved by their tiny visual system. Here we describe neurons found in the male hoverfly, Eristalis tenax, that respond selectively to small moving targets. Although many of these target neurons are inhibited by the motion of a background pattern, others respond to target motion within the receptive field under a surprisingly large range of background motion stimuli. Some neurons respond whether or not there is a speed differential between target and background. Analysis of responses to very small targets (smaller than the size of the visual field of single photoreceptors) or those targets with reduced contrast shows that these neurons have extraordinarily high contrast sensitivity. Our data suggest that rejection of background motion may result from extreme selectivity for small targets contrasting against local patches of the background, combined with this high sensitivity, such that background patterns rarely contain features that satisfactorily drive the neuron. PMID:16448249

  15. VisSearch: A Collaborative Web Searching Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2005-01-01

    VisSearch is a collaborative Web searching environment intended for sharing Web search results among people with similar interests, such as college students taking the same course. It facilitates students' Web searches by visualizing various Web searching processes. It also collects the visualized Web search results and applies an association rule…

  16. Association and dissociation between detection and discrimination of objects of expertise: Evidence from visual search.

    PubMed

    Golan, Tal; Bentin, Shlomo; DeGutis, Joseph M; Robertson, Lynn C; Harel, Assaf

    2014-02-01

    Expertise in face recognition is characterized by high proficiency in distinguishing between individual faces. However, faces also enjoy an advantage at the early stage of basic-level detection, as demonstrated by efficient visual search for faces among nonface objects. In the present study, we asked (1) whether the face advantage in detection is a unique signature of face expertise, or whether it generalizes to other objects of expertise, and (2) whether expertise in face detection is intrinsically linked to expertise in face individuation. We compared how groups with varying degrees of object and face expertise (typical adults, developmental prosopagnosics [DP], and car experts) search for objects within and outside their domains of expertise (faces, cars, airplanes, and butterflies) among a variable set of object distractors. Across all three groups, search efficiency (indexed by reaction time slopes) was higher for faces and airplanes than for cars and butterflies. Notably, the search slope for car targets was considerably shallower in the car experts than in nonexperts. Although the mean face slope was slightly steeper among the DPs than in the other two groups, most of the DPs' search slopes were well within the normative range. This pattern of results suggests that expertise in object detection is indeed associated with expertise at the subordinate level, that it is not specific to faces, and that the two types of expertise are distinct facilities. We discuss the potential role of experience in bridging between low-level discriminative features and high-level naturalistic categories.

  17. Searching for unity: Real-world versus item-based visual search in age-related eye disease.

    PubMed

    Crabb, David P; Taylor, Deanna J

    2017-01-01

    When studying visual search, item-based approaches using synthetic targets and distractors limit the real-world applicability of results. Everyday visual search can be impaired in patients with common eye diseases like glaucoma and age-related macular degeneration. We highlight some results in the literature that suggest assessment of real-word search tasks in these patients could be clinically useful.

  18. Visual Search in ASD: Instructed versus Spontaneous Local and Global Processing

    ERIC Educational Resources Information Center

    Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2016-01-01

    Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual…

  19. "Hot" Facilitation of "Cool" Processing: Emotional Distraction Can Enhance Priming of Visual Search

    ERIC Educational Resources Information Center

    Kristjansson, Arni; Oladottir, Berglind; Most, Steven B.

    2013-01-01

    Emotional stimuli often capture attention and disrupt effortful cognitive processing. However, cognitive processes vary in the degree to which they require effort. We investigated the impact of emotional pictures on visual search and on automatic priming of search. Observers performed visual search after task-irrelevant neutral or emotionally…

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alamudun, Folami T.; Yoon, Hong-Jun; Hudson, Kathy

    Purpose: The objective of this study was to assess the complexity of human visual search activity during mammographic screening using fractal analysis and to investigate its relationship with case and reader characteristics. Methods: The study was performed for the task of mammographic screening with simultaneous viewing of four coordinated breast views as typically done in clinical practice. Eye-tracking data and diagnostic decisions collected for 100 mammographic cases (25 normal, 25 benign, 50 malignant) and 10 readers (three board-certified radiologists and seven radiology residents) formed the corpus data for this study. The fractal dimension of the readers' visual scanning patterns was computed with the Minkowski–Bouligand box-counting method and used as a measure of gaze complexity. Individual factor and group-based interaction ANOVA analysis was performed to study the association between fractal dimension, case pathology, breast density, and reader experience level. The consistency of the observed trends depending on gaze data representation was also examined. Results: Case pathology, breast density, reader experience level, and individual reader differences are all independent predictors of the visual scanning pattern complexity when screening for breast cancer. No higher order effects were found to be significant. Conclusions: Fractal characterization of visual search behavior during mammographic screening is dependent on case properties and image reader characteristics.
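
    The Minkowski–Bouligand (box-counting) dimension used above is estimated by covering the gaze pattern with boxes of decreasing size and regressing log box counts against log inverse box size. The sketch below applies box counting to gaze points rasterized onto a square grid; the grid size, box sizes, and random example points are assumptions rather than the study's actual preprocessing.

    ```python
    # Minimal Minkowski-Bouligand (box-counting) sketch for a set of gaze points.
    import numpy as np

    def box_count_dimension(points, grid=512, box_sizes=(4, 8, 16, 32, 64)):
        """points: (N, 2) array of x, y coordinates scaled to [0, 1)."""
        img = np.zeros((grid, grid), dtype=bool)
        idx = np.clip((np.asarray(points) * grid).astype(int), 0, grid - 1)
        img[idx[:, 0], idx[:, 1]] = True

        counts = []
        for s in box_sizes:
            # Number of s-by-s boxes containing at least one gaze point.
            blocks = img.reshape(grid // s, s, grid // s, s).any(axis=(1, 3))
            counts.append(blocks.sum())
        # Slope of log(count) vs log(1/box size) estimates the fractal dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(0)
    scanpath = rng.random((2000, 2))          # stand-in for fixation coordinates
    print(round(box_count_dimension(scanpath), 2))
    ```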

  1. LoopX: A Graphical User Interface-Based Database for Comprehensive Analysis and Comparative Evaluation of Loops from Protein Structures.

    PubMed

    Kadumuri, Rajashekar Varma; Vadrevu, Ramakrishna

    2017-10-01

    Due to their crucial role in function, folding, and stability, protein loops are being targeted for grafting/designing to create novel or alter existing functionality and improve stability and foldability. With a view to facilitating a thorough analysis and effective search options for extracting and comparing loops for sequence and structural compatibility, we developed LoopX, a comprehensively compiled library of sequence and conformational features of ∼700,000 loops from protein structures. The database, equipped with a graphical user interface, is empowered with diverse query tools and search algorithms, with various rendering options to visualize the sequence- and structural-level information along with hydrogen bonding patterns and backbone φ, ψ dihedral angles of both the target and candidate loops. Two new features, (i) conservation of the polar/nonpolar environment and (ii) conservation of sequence and conformation of specific residues within the loops, have also been incorporated in the search and retrieval of compatible loops for a chosen target loop. Thus, the LoopX server not only serves as a database and visualization tool for sequence and structural analysis of protein loops but also aids in extracting and comparing candidate loops for a given target loop based on user-defined search options.
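
    Comparing backbone φ, ψ dihedral angles between a target and candidate loop, as described above, rests on the standard torsion-angle calculation from four consecutive backbone atom positions. The sketch below implements that general formula; it is independent of LoopX's own code, and the coordinates are toy values.

    ```python
    # Standard torsion (dihedral) angle from four atom positions, as used when
    # comparing backbone phi/psi angles between loops. Coordinates are invented.
    import numpy as np

    def dihedral(p0, p1, p2, p3):
        """Return the dihedral angle in degrees defined by points p0-p1-p2-p3."""
        b0, b1, b2 = p1 - p0, p2 - p1, p3 - p2
        b1 /= np.linalg.norm(b1)
        # Components of b0 and b2 perpendicular to the central bond b1.
        v = b0 - np.dot(b0, b1) * b1
        w = b2 - np.dot(b2, b1) * b1
        x = np.dot(v, w)
        y = np.dot(np.cross(b1, v), w)
        return np.degrees(np.arctan2(y, x))

    p = [np.array(c, dtype=float) for c in
         [(1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 1.0, 1.0)]]
    print(round(dihedral(*p), 1))   # 90.0 degrees for this toy geometry
    ```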

  2. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing

    PubMed Central

    Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias

    2018-01-01

    Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings, in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Thereby, a facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent, as well as a spatially non-informative, auditory cue resulted in lateral asymmetries. Thereby, search times were increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637

  3. Investigating the role of visual and auditory search in reading and developmental dyslexia

    PubMed Central

    Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane

    2013-01-01

    It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements is linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a “serial” search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d′) strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggests that some visual mechanisms engaged in “serial” search contribute to reading and orthographic knowledge via VA Span skills regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills. PMID:24093014

  4. Investigating the role of visual and auditory search in reading and developmental dyslexia.

    PubMed

    Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane

    2013-01-01

    It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements is linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a "serial" search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d') strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggests that some visual mechanisms engaged in "serial" search contribute to reading and orthographic knowledge via VA Span skills regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills.

  5. The downside of choice: Having a choice benefits enjoyment, but at a cost to efficiency and time in visual search.

    PubMed

    Kunar, Melina A; Ariyabandu, Surani; Jami, Zaffran

    2016-04-01

    The efficiency of how people search for an item in visual search has, traditionally, been thought to depend on bottom-up or top-down guidance cues. However, recent research has shown that the rate at which people visually search through a display is also affected by cognitive strategies. In this study, we investigated the role of choice in visual search, by asking whether giving people a choice alters both preference for a cognitively neutral task and search behavior. Two visual search conditions were examined: one in which participants were given a choice of visual search task (the choice condition), and one in which participants did not have a choice (the no-choice condition). The results showed that the participants in the choice condition rated the task as both more enjoyable and likeable than did the participants in the no-choice condition. However, despite their preferences, actual search performance was slower and less efficient in the choice condition than in the no-choice condition (Exp. 1). Experiment 2 showed that the difference in search performance between the choice and no-choice conditions disappeared when central executive processes became occupied with a task-switching task. These data concur with a choice-impaired hypothesis of search, in which having a choice leads to more motivated, active search involving executive processes.

  6. Search for Patterns of Functional Specificity in the Brain: A Nonparametric Hierarchical Bayesian Model for Group fMRI Data

    PubMed Central

    Sridharan, Ramesh; Vul, Edward; Hsieh, Po-Jang; Kanwisher, Nancy; Golland, Polina

    2012-01-01

    Functional MRI studies have uncovered a number of brain areas that demonstrate highly specific functional patterns. In the case of visual object recognition, small, focal regions have been characterized with selectivity for visual categories such as human faces. In this paper, we develop an algorithm that automatically learns patterns of functional specificity from fMRI data in a group of subjects. The method does not require spatial alignment of functional images from different subjects. The algorithm is based on a generative model that comprises two main layers. At the lower level, we express the functional brain response to each stimulus as a binary activation variable. At the next level, we define a prior over sets of activation variables in all subjects. We use a Hierarchical Dirichlet Process as the prior in order to learn the patterns of functional specificity shared across the group, which we call functional systems, and estimate the number of these systems. Inference based on our model enables automatic discovery and characterization of dominant and consistent functional systems. We apply the method to data from a visual fMRI study comprising 69 distinct stimulus images. The discovered system activation profiles correspond to selectivity for a number of image categories such as faces, bodies, and scenes. Among systems found by our method, we identify new areas that are deactivated by face stimuli. In empirical comparisons with previously proposed exploratory methods, our results appear superior in capturing the structure in the space of visual categories of stimuli. PMID:21884803
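
    The Hierarchical Dirichlet Process prior mentioned above lets the number of functional systems be learned rather than fixed in advance. The sketch below shows only the single-level building block, a truncated stick-breaking construction of Dirichlet-process mixture weights, to illustrate how such a prior spreads probability over an unbounded number of components; the concentration parameter and truncation level are assumptions, and this is not the paper's full hierarchical model or inference procedure.

    ```python
    # Truncated stick-breaking construction for a Dirichlet process prior --
    # the single-level building block of a Hierarchical Dirichlet Process.
    import numpy as np

    def stick_breaking_weights(alpha=1.0, truncation=50, rng=None):
        rng = rng or np.random.default_rng()
        betas = rng.beta(1.0, alpha, size=truncation)
        # w_k = beta_k * prod_{j<k} (1 - beta_j): each break takes a fraction of
        # whatever stick length remains.
        remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
        return betas * remaining

    weights = stick_breaking_weights(alpha=2.0, truncation=50,
                                     rng=np.random.default_rng(0))
    print(f"components with weight > 1%: {(weights > 0.01).sum()}")
    ```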

  7. Dementia alters standing postural adaptation during a visual search task in older adult men.

    PubMed

    Jor'dan, Azizah J; McCarten, J Riley; Rottunda, Susan; Stoffregen, Thomas A; Manor, Brad; Wade, Michael G

    2015-04-23

    This study investigated the effects of dementia on standing postural adaptation during performance of a visual search task. We recruited 16 older adults with dementia and 15 without dementia. Postural sway was assessed by recording medial-lateral (ML) and anterior-posterior (AP) center-of-pressure when standing with and without a visual search task, i.e., counting target letter frequency within a block of displayed randomized letters. ML sway variability was significantly higher in those with dementia during visual search as compared to those without dementia, and compared to both groups during the control condition. AP sway variability was significantly greater in those with dementia as compared to those without dementia, irrespective of task condition. In the ML direction, the absolute and percent change in sway variability between the control condition and visual search (i.e., postural adaptation) was greater in those with dementia as compared to those without. In contrast, postural adaptation to visual search was similar between groups in the AP direction. As compared to those without dementia, those with dementia identified fewer letters on the visual task. In the non-dementia group only, greater increases in postural adaptation in both the ML and AP directions correlated with lower performance on the visual task. The observed relationship between postural adaptation during the visual search task and visual search task performance (in the non-dementia group only) suggests a critical link between perception and action. Dementia reduces the capacity to perform a visual-based task while standing and thus appears to disrupt this perception-action synergy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
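
    Sway "variability" in studies like this one is commonly quantified as the standard deviation of the center-of-pressure trace in each direction. The sketch below computes that measure from invented ML and AP traces; it is a generic illustration, not the authors' processing pipeline.

    ```python
    # Sway variability as the standard deviation of each center-of-pressure trace.
    # The traces below are invented stand-ins for force-platform recordings.
    import numpy as np

    rng = np.random.default_rng(1)
    cop_ml = rng.normal(0.0, 0.3, 3000)   # medial-lateral COP samples, cm
    cop_ap = rng.normal(0.0, 0.5, 3000)   # anterior-posterior COP samples, cm

    print(f"ML sway variability: {cop_ml.std(ddof=1):.2f} cm")
    print(f"AP sway variability: {cop_ap.std(ddof=1):.2f} cm")
    ```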

  8. Impaired visual search in rats reveals cholinergic contributions to feature binding in visuospatial attention.

    PubMed

    Botly, Leigh C P; De Rosa, Eve

    2012-10-01

    The visual search task established the feature integration theory of attention in humans and measures visuospatial attentional contributions to feature binding. We recently demonstrated that the neuromodulator acetylcholine (ACh), from the nucleus basalis magnocellularis (NBM), supports the attentional processes required for feature binding using a rat digging-based task. Additional research has demonstrated cholinergic contributions from the NBM to visuospatial attention in rats. Here, we combined these lines of evidence and employed visual search in rats to examine whether cortical cholinergic input supports visuospatial attention specifically for feature binding. We trained 18 male Long-Evans rats to perform visual search using touch screen-equipped operant chambers. Sessions comprised Feature Search (no feature binding required) and Conjunctive Search (feature binding required) trials using multiple stimulus set sizes. Following acquisition of visual search, 8 rats received bilateral NBM lesions using 192 IgG-saporin to selectively reduce cholinergic afferentation of the neocortex, which we hypothesized would selectively disrupt the visuospatial attentional processes needed for efficient conjunctive visual search. As expected, relative to sham-lesioned rats, ACh-NBM-lesioned rats took significantly longer to locate the target stimulus on Conjunctive Search, but not Feature Search trials, thus demonstrating that cholinergic contributions to visuospatial attention are important for feature binding in rats.

  9. Hiding and finding: the relationship between visual concealment and visual search.

    PubMed

    Smilek, Daniel; Weinheimer, Laura; Kwan, Donna; Reynolds, Mike; Kingstone, Alan

    2009-11-01

    As an initial step toward developing a theory of visual concealment, we assessed whether people would use factors known to influence visual search difficulty when the degree of concealment of objects among distractors was varied. In Experiment 1, participants arranged search objects (shapes, emotional faces, and graphemes) to create displays in which the targets were in plain sight but were either easy or hard to find. Analyses of easy and hard displays created during Experiment 1 revealed that the participants reliably used factors known to influence search difficulty (e.g., eccentricity, target-distractor similarity, presence/absence of a feature) to vary the difficulty of search across displays. In Experiment 2, a new participant group searched for the targets in the displays created by the participants in Experiment 1. Results indicated that search was more difficult in the hard than in the easy condition. In Experiments 3 and 4, participants used presence versus absence of a feature to vary search difficulty with several novel stimulus sets. Taken together, the results reveal a close link between the factors that govern concealment and the factors known to influence search difficulty, suggesting that a visual search theory can be extended to form the basis of a theory of visual concealment.

  10. The effect of mood state on visual search times for detecting a target in noise: An application of smartphone technology

    PubMed Central

    Maekawa, Toru; de Brecht, Matthew; Yamagishi, Noriko

    2018-01-01

    The study of visual perception has largely been conducted without regard to the influence that an individual’s emotional status may have on their performance in visual tasks. However, there is a growing body of evidence to suggest that mood may affect not only creative abilities and interpersonal skills but also the capacity to perform low-level cognitive tasks. Here, we sought to determine whether rudimentary visual search processes are similarly affected by emotion. Specifically, we examined whether an individual’s perceived happiness level affects their ability to detect a target in noise. To do so, we employed pop-out and serial visual search paradigms, implemented using a novel smartphone application that allowed search times and self-rated levels of happiness to be recorded throughout each twenty-four-hour period for two weeks. This experience sampling protocol circumvented the need to alter mood artificially with laboratory-based induction methods. Using our smartphone application, we were able to replicate the classic visual search findings, whereby pop-out search times remained largely unaffected by the number of distractors whereas serial search times increased with increasing number of distractors. While pop-out search times were unaffected by happiness level, serial search times with the maximum number of distractors (n = 30) were significantly faster for high happiness levels than for low happiness levels (p = 0.02). Our results demonstrate the utility of smartphone applications in assessing ecologically valid measures of human visual performance. We discuss the significance of our findings for the assessment of basic visual functions using search time measures, and for our ability to search effectively for targets in real world settings. PMID:29664952

  11. The effect of mood state on visual search times for detecting a target in noise: An application of smartphone technology.

    PubMed

    Maekawa, Toru; Anderson, Stephen J; de Brecht, Matthew; Yamagishi, Noriko

    2018-01-01

    The study of visual perception has largely been conducted without regard to the influence that an individual's emotional status may have on their performance in visual tasks. However, there is a growing body of evidence to suggest that mood may affect not only creative abilities and interpersonal skills but also the capacity to perform low-level cognitive tasks. Here, we sought to determine whether rudimentary visual search processes are similarly affected by emotion. Specifically, we examined whether an individual's perceived happiness level affects their ability to detect a target in noise. To do so, we employed pop-out and serial visual search paradigms, implemented using a novel smartphone application that allowed search times and self-rated levels of happiness to be recorded throughout each twenty-four-hour period for two weeks. This experience sampling protocol circumvented the need to alter mood artificially with laboratory-based induction methods. Using our smartphone application, we were able to replicate the classic visual search findings, whereby pop-out search times remained largely unaffected by the number of distractors whereas serial search times increased with increasing number of distractors. While pop-out search times were unaffected by happiness level, serial search times with the maximum number of distractors (n = 30) were significantly faster for high happiness levels than for low happiness levels (p = 0.02). Our results demonstrate the utility of smartphone applications in assessing ecologically valid measures of human visual performance. We discuss the significance of our findings for the assessment of basic visual functions using search time measures, and for our ability to search effectively for targets in real world settings.

  12. Aurally aided visual search performance in a dynamic environment

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.

    2008-04-01

    Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.

  13. The effects of visual search efficiency on object-based attention

    PubMed Central

    Rosen, Maya; Cutrone, Elizabeth; Behrmann, Marlene

    2017-01-01

    The attentional prioritization hypothesis of object-based attention (Shomstein & Yantis in Perception & Psychophysics, 64, 41–51, 2002) suggests a two-stage selection process comprising an automatic spatial gradient and flexible strategic (prioritization) selection. The combined attentional priorities of these two stages of object-based selection determine the order in which participants will search the display for the presence of a target. The strategic process has often been likened to a prioritized visual search. By modifying the double-rectangle cueing paradigm (Egly, Driver, & Rafal in Journal of Experimental Psychology: General, 123, 161–177, 1994) and placing it in the context of a larger-scale visual search, we examined how the prioritization search is affected by search efficiency. By probing both targets located on the cued object and targets external to the cued object, we found that the attentional priority surrounding a selected object is strongly modulated by search mode. However, the ordering of the prioritization search is unaffected by search mode. The data also provide evidence that standard spatial visual search and object-based prioritization search may rely on distinct mechanisms. These results provide insight into the interactions between the mode of visual search and object-based selection, and help define the modulatory consequences of search efficiency for object-based attention. PMID:25832192

  14. Visual search asymmetries within color-coded and intensity-coded displays.

    PubMed

    Yamani, Yusuke; McCarley, Jason S

    2010-06-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information. The design of symbology to produce search asymmetries (Treisman & Souther, 1985) offers a potential technique for doing this, but it is not obvious from existing models of search that an asymmetry observed in the absence of extraneous visual stimuli will persist within a complex color- or intensity-coded display. To address this issue, in the current study we measured the strength of a visual search asymmetry within displays containing color- or intensity-coded extraneous items. The asymmetry persisted strongly in the presence of extraneous items that were drawn in a different color (Experiment 1) or a lower contrast (Experiment 2) than the search-relevant items, with the targets favored by the search asymmetry producing highly efficient search. The asymmetry was attenuated but not eliminated when extraneous items were drawn in a higher contrast than search-relevant items (Experiment 3). Results imply that the coding of symbology to exploit visual search asymmetries can facilitate visual search for high-priority items even within color- or intensity-coded displays. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  15. Top-down contextual knowledge guides visual attention in infancy.

    PubMed

    Tummeltshammer, Kristen; Amso, Dima

    2017-10-26

    The visual context in which an object or face resides can provide useful top-down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye-tracking experiment, 6- and 10-month-old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6- and 10-month-olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top-down information to facilitate orienting during memory-guided visual search. © 2017 John Wiley & Sons Ltd.

  16. The effect of search condition and advertising type on visual attention to Internet advertising.

    PubMed

    Kim, Gho; Lee, Jang-Han

    2011-05-01

    This research was conducted to examine the level of consumers' visual attention to Internet advertising. It was predicted that consumers' search type would influence visual attention to advertising. Specifically, it was predicted that more attention to advertising would be attracted in the exploratory search condition than in the goal-directed search condition. It was also predicted that there would be a difference in visual attention depending on the advertisement type (advertising type: text vs. pictorial advertising). An eye tracker was used for measurement. Results revealed that search condition and advertising type influenced advertising effectiveness.

  17. Searching for differences in race: is there evidence for preferential detection of other-race faces?

    PubMed

    Lipp, Ottmar V; Terry, Deborah J; Smith, Joanne R; Tellegen, Cassandra L; Kuebbeler, Jennifer; Newey, Mareka

    2009-06-01

    Previous research has suggested that, like animal and social fear-relevant stimuli, other-race faces (African American) are detected preferentially in visual search. Three experiments using Chinese or Indonesian faces as other-race faces yielded the opposite pattern of results: faster detection of same-race faces among other-race faces. This apparently inconsistent pattern of results was resolved by showing that Asian and African American faces are detected preferentially in tasks that have small stimulus sets and employ fixed target searches. Asian and African American other-race faces are found more slowly among Caucasian face backgrounds if larger stimulus sets are used in tasks with a variable mapping of stimulus to background or target. Thus, preferential detection of other-race faces was not found under task conditions in which preferential detection of animal and social fear-relevant stimuli is evident. Although consistent with the view that same-race faces are processed in more detail than other-race faces, the current findings suggest that other-race faces do not draw attention preferentially.

  18. Interrupted Visual Searches Reveal Volatile Search Memory

    ERIC Educational Resources Information Center

    Shen, Y. Jeremy; Jiang, Yuhong V.

    2006-01-01

    This study investigated memory from interrupted visual searches. Participants conducted a change detection search task on polygons overlaid on scenes. Search was interrupted by various disruptions, including unfilled delay, passive viewing of other scenes, and additional search on new displays. Results showed that performance was unaffected by…

  19. Eye movements and attention in reading, scene perception, and visual search.

    PubMed

    Rayner, Keith

    2009-08-01

    Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

  20. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    PubMed

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. Performance in a Visual Search Task Uniquely Predicts Reading Abilities in Third-Grade Hong Kong Chinese Children

    ERIC Educational Resources Information Center

    Liu, Duo; Chen, Xi; Chung, Kevin K. H.

    2015-01-01

    This study examined the relation between the performance in a visual search task and reading ability in 92 third-grade Hong Kong Chinese children. The visual search task, which is considered a measure of visual-spatial attention, accounted for unique variance in Chinese character reading after controlling for age, nonverbal intelligence,…

  2. What Kind of Memory Supports Visual Marking?

    ERIC Educational Resources Information Center

    Jiang, Yuhong; Wang, Stephanie W.

    2004-01-01

    In visual search tasks, if a set of items is presented for 1 s before another set of new items (containing the target) is added, search can be restricted to the new set. The process that eliminates old items from search is visual marking. This study investigates the kind of memory that distinguishes the old items from the new items during search.…

  3. Influence of social presence on eye movements in visual search tasks.

    PubMed

    Liu, Na; Yu, Ruifeng

    2017-12-01

    This study employed an eye-tracking technique to investigate the influence of social presence on eye movements in visual search tasks. A total of 20 male subjects performed visual search tasks in a 2 (target presence: present vs. absent) × 2 (task complexity: complex vs. simple) × 2 (social presence: alone vs. a human audience) within-subject experiment. Results indicated that the presence of an audience could evoke a social facilitation effect on response time in visual search tasks. Compared with working alone, the participants made fewer and shorter fixations, larger saccades and shorter scan path in simple search tasks and more and longer fixations, smaller saccades and longer scan path in complex search tasks when working with an audience. The saccade velocity and pupil diameter in the audience-present condition were larger than those in the working-alone condition. No significant change in target fixation number was observed between two social presence conditions. Practitioner Summary: This study employed an eye-tracking technique to examine the influence of social presence on eye movements in visual search tasks. Results clarified the variation mechanism and characteristics of oculomotor scanning induced by social presence in visual search.

  4. Does constraining memory maintenance reduce visual search efficiency?

    PubMed

    Buttaccio, Daniel R; Lange, Nicholas D; Thomas, Rick P; Dougherty, Michael R

    2018-03-01

    We examine whether constraining memory retrieval processes affects performance in a cued recall visual search task. In the visual search task, participants are first presented with a memory prompt followed by a search array. The memory prompt provides diagnostic information regarding a critical aspect of the target (its colour). We assume that upon the presentation of the memory prompt, participants retrieve and maintain hypotheses (i.e., potential target characteristics) in working memory in order to improve their search efficiency. By constraining retrieval through the manipulation of time pressure (Experiments 1A and 1B) or a concurrent working memory task (Experiments 2A, 2B, and 2C), we directly test the involvement of working memory in visual search. We find some evidence that visual search is less efficient under conditions in which participants were likely to be maintaining fewer hypotheses in working memory (Experiments 1A, 2A, and 2C), suggesting that the retrieval of representations from long-term memory into working memory can improve visual search. However, these results should be interpreted with caution, as the data from two experiments (Experiments 1B and 2B) did not lend support for this conclusion.

  5. PANTHER. Pattern ANalytics To support High-performance Exploitation and Reasoning.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czuchlewski, Kristina Rodriguez; Hart, William E.

    Sandia has approached the analysis of big datasets with an integrated methodology that uses computer science, image processing, and human factors to exploit critical patterns and relationships in large datasets despite the variety and rapidity of information. The work is part of a three-year LDRD Grand Challenge called PANTHER (Pattern ANalytics To support High-performance Exploitation and Reasoning). To maximize data analysis capability, Sandia pursued scientific advances across three key technical domains: (1) geospatial-temporal feature extraction via image segmentation and classification; (2) geospatial-temporal analysis capabilities tailored to identify and process new signatures more efficiently; and (3) domain-relevant models of human perception and cognition informing the design of analytic systems. Our integrated results include advances in geographical information systems (GIS) in which we discover activity patterns in noisy, spatial-temporal datasets using geospatial-temporal semantic graphs. We employed computational geometry and machine learning to allow us to extract and predict spatial-temporal patterns and outliers from large aircraft and maritime trajectory datasets. We automatically extracted static and ephemeral features from real, noisy synthetic aperture radar imagery for ingestion into a geospatial-temporal semantic graph. We worked with analysts and investigated analytic workflows to (1) determine how experiential knowledge evolves and is deployed in high-demand, high-throughput visual search workflows, and (2) better understand visual search performance and attention. Through PANTHER, Sandia's fundamental rethinking of key aspects of geospatial data analysis permits the extraction of much richer information from large amounts of data. The project results enable analysts to examine mountains of historical and current data that would otherwise go untouched, while also gaining meaningful, measurable, and defensible insights into overlooked relationships and patterns. The capability is directly relevant to the nation's nonproliferation remote-sensing activities and has broad national security applications for military and intelligence-gathering organizations.

  6. Functional Connectivity Between Superior Parietal Lobule and Primary Visual Cortex "at Rest" Predicts Visual Search Efficiency.

    PubMed

    Bueichekú, Elisenda; Ventura-Campos, Noelia; Palomar-García, María-Ángeles; Miró-Padilla, Anna; Parcet, María-Antonia; Ávila, César

    2015-10-01

    Spatiotemporal activity that emerges spontaneously "at rest" has been proposed to reflect individual a priori biases in cognitive processing. This research focused on testing neurocognitive models of visual attention by studying the functional connectivity (FC) of the superior parietal lobule (SPL), given its central role in establishing priority maps during visual search tasks. Twenty-three human participants completed a functional magnetic resonance imaging session that featured a resting-state scan, followed by a visual search task based on the alphanumeric category effect. As expected, the behavioral results showed longer reaction times and more errors for the within-category (i.e., searching a target letter among letters) than the between-category search (i.e., searching a target letter among numbers). The within-category condition was related to greater activation of the superior and inferior parietal lobules, occipital cortex, inferior frontal cortex, dorsal anterior cingulate cortex, and the superior colliculus than the between-category search. The resting-state FC analysis of the SPL revealed a broad network that included connections with the inferotemporal cortex, dorsolateral prefrontal cortex, and dorsal frontal areas like the supplementary motor area and frontal eye field. Notably, the regression analysis revealed that participants who were more efficient in the visual search showed stronger FC between the SPL and areas of primary visual cortex (V1) related to the search task. We shed some light on how the SPL establishes a priority map of the environment during visual attention tasks and how FC is a valuable tool for assessing individual differences while performing cognitive tasks.

  7. The influence of visual ability on learning and memory performance in 13 strains of mice.

    PubMed

    Brown, Richard E; Wong, Aimée A

    2007-03-01

    We assessed visual ability in 13 strains of mice (129SI/Sv1mJ, A/J, AKR/J, BALB/cByJ, C3H/HeJ, C57BL/6J, CAST/EiJ, DBA/2J, FVB/NJ, MOLF/EiJ, SJL/J, SM/J, and SPRET/EiJ) on tests of visual detection, pattern discrimination, and visual acuity, and tested these and other mice of the same strains in a behavioral test battery that evaluated visuo-spatial learning and memory, conditioned odor preference, and motor learning. Strain differences in visual acuity accounted for a significant proportion of the variance between strains in measures of learning and memory in the Morris water maze. Strain differences in motor learning performance were not influenced by visual ability. Conditioned odor preference was enhanced in mice with visual defects. These results indicate that visual ability must be accounted for when testing for strain differences in learning and memory in mice because differences in performance in many tasks may be due to visual deficits rather than differences in higher order cognitive functions. These results have significant implications for the search for the neural and genetic basis of learning and memory in mice.

  8. Search time critically depends on irrelevant subset size in visual search.

    PubMed

    Benjamins, Jeroen S; Hooge, Ignace T C; van Elst, Jacco C; Wertheim, Alexander H; Verstraten, Frans A J

    2009-02-01

    In order for our visual system to deal with the massive amount of sensory input, some of this input is discarded, while other parts are processed [Wolfe, J. M. (1994). Guided search 2.0: a revised model of visual search. Psychonomic Bulletin and Review, 1, 202-238]. From the visual search literature it is unclear how well one set of items that differs from the target in only one feature (a 1F set) can be selected, while another set of items that differs from the target in two features (a 2F set) is ignored. We systematically varied the percentage of 2F non-targets to determine the contribution of these non-targets to search behaviour. Increasing the percentage of to-be-ignored 2F non-targets was expected to result in increasingly faster search, since it decreases the size of the 1F set that has to be searched. Observers searched large displays for a target in the 1F set with a variable percentage of 2F non-targets. Interestingly, when the search displays contained 5% 2F non-targets, search times were longer than in the other conditions. This effect of 2F non-targets on performance was independent of set size. An inspection of the saccades revealed that saccade target selection did not contribute to the longer search times in displays with 5% 2F non-targets. The occurrence of longer search times in displays containing 5% 2F non-targets might instead be attributed to covert processes related to visual analysis of the fixated part of the display. Apparently, visual search performance critically depends on the percentage of irrelevant 2F non-targets.

  9. The influence of clutter on real-world scene search: evidence from search efficiency and eye movements.

    PubMed

    Henderson, John M; Chanceaux, Myriam; Smith, Tim J

    2009-01-23

    We investigated the relationship between visual clutter and visual search in real-world scenes. Specifically, we investigated whether visual clutter, indexed by feature congestion, sub-band entropy, and edge density, correlates with search performance as assessed both by traditional behavioral measures (response time and error rate) and by eye movements. Our results demonstrate that clutter is related to search performance. These results hold for both traditional search measures and for eye movements. The results suggest that clutter may serve as an image-based proxy for search set size in real-world scenes.

  10. Accelerating object detection via a visual-feature-directed search cascade: algorithm and field programmable gate array implementation

    NASA Astrophysics Data System (ADS)

    Kyrkou, Christos; Theocharides, Theocharis

    2016-07-01

    Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced impressive results, both in the accuracy of the detection algorithms and in their throughput in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We show how a visual-feature-directed search cascade composed of motion detection, depth computation, and edge detection can significantly reduce the data that needs to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan 6 FPGA platform for face detection indicate a data search reduction of up to 95%, which enables the system to process up to 50 1024×768-pixel images per second with a significantly reduced number of false positives.
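
    The data-reduction idea in this record is easy to see in code: cheap per-window tests (motion, then edge density) discard most of the image, and only the surviving windows reach the expensive classification engine. The sketch below is a software analogue in plain NumPy, not the paper's FPGA datapath; the depth-computation stage is omitted, and the window size, thresholds, and the classify_window callback are hypothetical placeholders.

      # Hedged software sketch of a visual-feature-directed search cascade (assumptions above).
      import numpy as np

      def candidate_windows(prev_frame, frame, win=32, motion_thr=12.0, edge_thr=0.05):
          """Yield (row, col) corners of windows that pass the motion and edge stages.

          Frames are 2-D grayscale arrays of equal shape.
          """
          h, w = frame.shape
          motion = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
          gy, gx = np.gradient(frame.astype(np.float32))       # crude gradient-based edge map
          edges = np.hypot(gx, gy) > 30.0
          for r in range(0, h - win + 1, win):
              for c in range(0, w - win + 1, win):
                  if motion[r:r+win, c:c+win].mean() < motion_thr:
                      continue                                  # stage 1: static window, discard
                  if edges[r:r+win, c:c+win].mean() < edge_thr:
                      continue                                  # stage 2: featureless window, discard
                  yield r, c                                    # survivor: hand off to classifier

      def detect(prev_frame, frame, classify_window, win=32):
          # Only the small fraction of windows passing both cheap stages is classified.
          return [(r, c) for r, c in candidate_windows(prev_frame, frame, win)
                  if classify_window(frame[r:r+win, c:c+win])]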

  11. Making Temporal Search More Central in Spatial Data Infrastructures

    NASA Astrophysics Data System (ADS)

    Corti, P.; Lewis, B.

    2017-10-01

    A temporally enabled Spatial Data Infrastructure (SDI) is a framework of geospatial data, metadata, users, and tools intended to provide an efficient and flexible way to use spatial information which includes the historical dimension. One of the key software components of an SDI is the catalogue service which is needed to discover, query, and manage the metadata. A search engine is a software system capable of supporting fast and reliable search, which may use any means necessary to get users to the resources they need quickly and efficiently. These techniques may include features such as full text search, natural language processing, weighted results, temporal search based on enrichment, visualization of patterns in distributions of results in time and space using temporal and spatial faceting, and many others. In this paper we focus on the temporal aspects of search, which include temporal enrichment using a time miner (a software component able to find date expressions within a larger block of text), the storage of time ranges in the search engine, the handling of historical dates, and the use of temporal histograms in the user interface to display the temporal distribution of search results.
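
    The time-mining step described above can be illustrated in a few lines: extract date expressions from free-text metadata and collapse them into a [start, end] range that the catalogue service can index and facet on. The snippet below is a minimal sketch under strong assumptions (four-digit Gregorian years only, no month or day granularity, no historical calendars); it is not the authors' implementation.

      # Minimal "time miner" sketch; assumptions noted above.
      import re
      from typing import Optional, Tuple

      YEAR = re.compile(r"\b(1[5-9]\d{2}|20\d{2})\b")    # four-digit years 1500-2099

      def mine_time_range(text: str) -> Optional[Tuple[int, int]]:
          """Collapse all year mentions in free text into an indexable [start, end] range."""
          years = [int(y) for y in YEAR.findall(text)]
          if not years:
              return None
          return min(years), max(years)

      # The resulting range would be stored with the metadata record so the search engine
      # can filter by time and build the temporal-facet histograms mentioned above.
      print(mine_time_range("Survey flown in 1972 and re-digitised in 2009."))   # (1972, 2009)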

  12. Impact of Glaucoma and Dry Eye on Text-Based Searching.

    PubMed

    Sun, Michelle J; Rubin, Gary S; Akpek, Esen K; Ramulu, Pradeep Y

    2017-06-01

    We determine if visual field loss from glaucoma and/or measures of dry eye severity are associated with difficulty searching, as judged by slower search times on a text-based search task. Glaucoma patients with bilateral visual field (VF) loss, patients with clinically significant dry eye, and normally-sighted controls were enrolled from the Wilmer Eye Institute clinics. Subjects searched three Yellow Pages excerpts for a specific phone number, and search time was recorded. A total of 50 glaucoma subjects, 40 dry eye subjects, and 45 controls completed study procedures. On average, glaucoma patients exhibited 57% longer search times compared to controls (95% confidence interval [CI], 26%-96%, P < 0.001), and longer search times were noted among subjects with greater VF loss (P < 0.001), worse contrast sensitivity (P < 0.001), and worse visual acuity (P = 0.026). Dry eye subjects demonstrated similar search times compared to controls, though worse Ocular Surface Disease Index (OSDI) vision-related subscores were associated with longer search times (P < 0.01). Search times showed no association with OSDI symptom subscores (P = 0.20) or objective measures of dry eye (P > 0.08 for Schirmer's testing without anesthesia, corneal fluorescein staining, and tear film breakup time). Text-based visual search is slower for glaucoma patients with greater levels of VF loss and dry eye patients with greater self-reported visual difficulty, and these difficulties may contribute to decreased quality of life in these groups. Visual search is impaired in glaucoma and dry eye groups compared to controls, highlighting the need for compensatory strategies and tools to assist individuals in overcoming their deficiencies.

  13. Repetition Is the Feature Behind the Attentional Bias for Recognizing Threatening Patterns.

    PubMed

    Shabbir, Maryam; Zon, Adelynn M Y; Thuppil, Vivek

    2018-01-01

    Animals attend to what is relevant in order to behave in an effective manner and succeed in their environments. In several nonhuman species, there is an evolved bias for attending to patterns indicative of threats in the natural environment such as dangerous animals. Because skins of many dangerous animals are typically repetitive, we propose that repetition is the key feature enabling recognition of evolutionarily important threats. The current study consists of two experiments where we measured participants' reactions to pictures of male and female models wearing clothing of various repeating (leopard skin, snakeskin, and floral print) and nonrepeating (camouflage, shiny, and plain) patterns. In Experiment 1, when models wearing patterns were presented side by side with total fixation duration as the measure, the repeating floral pattern was the most provocative, with total fixation duration significantly longer than all other patterns. Leopard and snakeskin patterns had total fixation durations that were significantly longer than the plain pattern. In Experiment 2, we employed a visual-search task where participants were required to find models wearing the various patterns in a setting of a crowded airport terminal. Participants detected leopard skin pattern and repetitive floral pattern significantly faster than two of the nonpatterned clothing styles. Our experimental findings support the hypothesis that repetition of specific visual features might facilitate target detection, especially those characterizing evolutionary important threats. Our findings that intricate, but nonthreatening repeating patterns can have similar attention-grabbing properties to animal skin patterns have important implications for the fashion industry and wildlife trade.

  14. Adaptation of video game UVW mapping to 3D visualization of gene expression patterns

    NASA Astrophysics Data System (ADS)

    Vize, Peter D.; Gerth, Victor E.

    2007-01-01

    Analysis of gene expression patterns within an organism plays a critical role in associating genes with biological processes in both health and disease. During embryonic development the analysis and comparison of different gene expression patterns allows biologists to identify candidate genes that may regulate the formation of normal tissues and organs and to search for genes associated with congenital diseases. No two individual embryos, or organs, are exactly the same shape or size so comparing spatial gene expression in one embryo to that in another is difficult. We will present our efforts in comparing gene expression data collected using both volumetric and projection approaches. Volumetric data is highly accurate but difficult to process and compare. Projection methods use UV mapping to align texture maps to standardized spatial frameworks. This approach is less accurate but is very rapid and requires very little processing. We have built a database of over 180 3D models depicting gene expression patterns mapped onto the surface of spline based embryo models. Gene expression data in different models can easily be compared to determine common regions of activity. Visualization software, both Java and OpenGL optimized for viewing 3D gene expression data will also be demonstrated.

  15. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures

    PubMed Central

    2010-01-01

    Background: Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description: RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA structures is provided. RNA FRABASE 2.0 is freely available at http://rnafrabase.cs.put.poznan.pl. Conclusions: RNA FRABASE 2.0 provides a novel database and powerful search engine which is equipped with new data and functionalities that are unavailable elsewhere. Our intention is that this advanced version of the RNA FRABASE will be of interest to all researchers working in the RNA field. PMID:20459631

  16. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures.

    PubMed

    Popenda, Mariusz; Szachniuk, Marta; Blazewicz, Marek; Wasik, Szymon; Burke, Edmund K; Blazewicz, Jacek; Adamiak, Ryszard W

    2010-05-06

    Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA structures is provided. RNA FRABASE 2.0 is freely available at http://rnafrabase.cs.put.poznan.pl. RNA FRABASE 2.0 provides a novel database and powerful search engine which is equipped with new data and functionalities that are unavailable elsewhere. Our intention is that this advanced version of the RNA FRABASE will be of interest to all researchers working in the RNA field.
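
    Both FRABASE records above describe retrieving 3D fragments via a user-defined secondary-structure pattern written in dot-bracket notation. As a rough illustration of that style of query, the toy matcher below scans a small in-memory "library" of dot-bracket strings for a pattern. The single-character wildcard '*', the two made-up structures, and the flat string matching are assumptions for this sketch; the real FRABASE query language (multi-stranded structures, missing residues, conformational filters) is far richer.

      # Toy dot-bracket pattern matcher; assumptions noted above.
      def matches_at(structure: str, pattern: str, i: int) -> bool:
          window = structure[i:i + len(pattern)]
          return all(p == '*' or p == s for p, s in zip(pattern, window))

      def search_library(library: dict, pattern: str):
          """Return (id, offset) pairs where the dot-bracket pattern occurs."""
          hits = []
          for rna_id, structure in library.items():
              for i in range(len(structure) - len(pattern) + 1):
                  if matches_at(structure, pattern, i):
                      hits.append((rna_id, i))
          return hits

      library = {
          "toy_00": "(((((((..((((........)))).(((((.......))))).....",
          "toy_01": "..(((...)))..((((....))))..",
      }
      print(search_library(library, "(((*..)))"))   # -> [('toy_01', 2)], a hairpin-like motif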

  17. Guidance of visual search by memory and knowledge.

    PubMed

    Hollingworth, Andrew

    2012-01-01

    To behave intelligently in the world, humans must be able to find objects efficiently within the complex environments they inhabit. A growing proportion of the literature on visual search is devoted to understanding this type of natural search. In the present chapter, I review the literature on visual search through natural scenes, focusing on the role of memory and knowledge in guiding attention to task-relevant objects.

  18. Survival Processing Enhances Visual Search Efficiency.

    PubMed

    Cho, Kit W

    2018-05-01

    Words rated for their survival relevance are remembered better than words rated using other well-known memory mnemonics. This finding, which is known as the survival advantage effect and has been replicated in many studies, suggests that our memory systems are molded by natural selection pressures. In two experiments, the present study used a visual search task to examine whether there is likewise a survival advantage for our visual systems. Participants rated words for their survival relevance or for their pleasantness before locating that object's picture in a search array with 8 or 16 objects. Although there was no difference in search times between the two rating scenarios when set size was 8, survival processing reduced visual search times when set size was 16. These findings reflect a search efficiency effect and suggest that, similar to our memory systems, our visual systems are also tuned toward self-preservation.

  19. Designing a Visual Interface for Online Searching.

    ERIC Educational Resources Information Center

    Lin, Xia

    1999-01-01

    "MedLine Search Assistant" is a new interface for MEDLINE searching that improves both search precision and recall by helping the user convert a free text search to a controlled vocabulary-based search in a visual environment. Features of the interface are described, followed by details of the conceptual design and the physical design of…

  20. Competing Distractors Facilitate Visual Search in Heterogeneous Displays.

    PubMed

    Kong, Garry; Alais, David; Van der Burg, Erik

    2016-01-01

    In the present study, we examine how observers search among complex displays. Participants were asked to search for a big red horizontal line among 119 distractor lines of various sizes, orientations and colours, leading to 36 different feature combinations. To understand how people search in such a heterogeneous display, we evolved the search display by using a genetic algorithm (Experiment 1). The best displays (i.e., displays corresponding to the fastest reaction times) were selected and combined to create new, evolved displays. Search times declined over generations. Results show that items sharing the same colour and orientation as the target disappeared over generations, implying they interfered with search, whereas items that shared the target's colour but differed from it by 12.5° in orientation interfered only if they were also the same size. Furthermore, and inconsistent with most dominant visual search theories, we found that non-red horizontal distractors increased over generations, indicating that these distractors facilitated visual search while participants were searching for a big red horizontally oriented target. In Experiments 2 and 3, we replicated these results using conventional, factorial experiments. Interestingly, in Experiment 4, we found that this facilitation effect was only present when the displays were very heterogeneous. While current models of visual search are able to successfully describe search in homogeneous displays, our results challenge the ability of these models to describe visual search in heterogeneous environments.
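
    The display-evolution procedure in Experiment 1 is a standard selection-and-recombination loop, sketched below. The feature coding (three sizes x four orientations x three colours = 36 combinations), population size, mutation rate, and the measure_rt callback are illustrative assumptions, not the authors' parameters; in the study itself the fitness signal was participants' measured reaction times.

      # Schematic of the display-evolution loop; illustrative parameters only (see note above).
      import random

      SIZES = ["small", "medium", "big"]
      ORIENTATIONS = [0, 12.5, 45, 90]           # degrees from horizontal
      COLOURS = ["red", "green", "blue"]
      FEATURES = [(s, o, c) for s in SIZES for o in ORIENTATIONS for c in COLOURS]   # 36 combos

      def random_display(n_items=119):
          return [random.choice(FEATURES) for _ in range(n_items)]

      def recombine(parent_a, parent_b, mutation=0.05):
          # A child inherits each distractor from one of two fast parents, with rare mutation.
          child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
          return [random.choice(FEATURES) if random.random() < mutation else f for f in child]

      def evolve(measure_rt, pop_size=20, generations=10, keep=10):
          # measure_rt(display) -> mean search time; here a stand-in for human data.
          population = [random_display() for _ in range(pop_size)]
          for _ in range(generations):
              parents = sorted(population, key=measure_rt)[:keep]    # fastest displays survive
              offspring = [recombine(random.choice(parents), random.choice(parents))
                           for _ in range(pop_size - keep)]
              population = parents + offspring
          return min(population, key=measure_rt)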

  1. Threat captures attention but does not affect learning of contextual regularities.

    PubMed

    Yamaguchi, Motonori; Harwood, Sarah L

    2017-04-01

    Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.

  2. Looking sharp: Becoming a search template boosts precision and stability in visual working memory.

    PubMed

    Rajsic, Jason; Ouslis, Natasha E; Wilson, Daryl E; Pratt, Jay

    2017-08-01

    Visual working memory (VWM) plays a central role in visual cognition, and current work suggests that there is a special state in VWM for items that are the goal of visual searches. However, whether the quality of memory for target templates differs from memory for other items in VWM is currently unknown. In this study, we measured the precision and stability of memory for search templates and accessory items to determine whether search templates receive representational priority in VWM. Memory for search templates exhibited increased precision and probability of recall, whereas accessory items were remembered less often. Additionally, while memory for Templates showed benefits when instances of the Template appeared in search, this benefit was not consistently observed for Accessory items when they appeared in search. Our results show that becoming a search template can substantially affect the quality of a representation in VWM.

  3. Aging and feature search: the effect of search area.

    PubMed

    Burton-Danner, K; Owsley, C; Jackson, G R

    2001-01-01

    The preattentive system involves the rapid parallel processing of visual information in the visual scene so that attention can be directed to meaningful objects and locations in the environment. This study used the feature search methodology to examine whether there are aging-related deficits in parallel-processing capabilities when older adults are required to visually search a large area of the visual field. Like young subjects, older subjects displayed flat, near-zero slopes for the Reaction Time x Set Size function when searching over a broad area (30 degrees radius) of the visual field, implying parallel processing of the visual display. These same older subjects exhibited impairment in another task, also dependent on parallel processing, performed over the same broad field area; this task, called the useful field of view test, has more complex task demands. Results imply that aging-related breakdowns of parallel processing over a large visual field area are not likely to emerge when required responses are simple, there is only one task to perform, and there is no limitation on visual inspection time.

  4. Collinearity Impairs Local Element Visual Search

    ERIC Educational Resources Information Center

    Jingling, Li; Tseng, Chia-Huei

    2013-01-01

    In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…

  5. When Do Individuals with Autism Spectrum Disorder Show Superiority in Visual Search?

    ERIC Educational Resources Information Center

    Shirama, Aya; Kato, Nobumasa; Kashino, Makio

    2017-01-01

    Although superior visual search skills have been repeatedly reported for individuals with autism spectrum disorder, the underlying mechanisms remain controversial. To specify the locus where individuals with autism spectrum disorder excel in visual search, we compared the performance of autism spectrum disorder adults and healthy controls in…

  6. Simulating the role of visual selective attention during the development of perceptual completion

    PubMed Central

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P.

    2014-01-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds’ performance on a second measure, the perceptual unity task. Two parameters in the model – corresponding to areas in the occipital and parietal cortices – were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3-month-olds on the unity perception task. Second, the model also helps to explain the improved performance of 2-month-olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3-month-olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill. PMID:23106728

  7. Simulating the role of visual selective attention during the development of perceptual completion.

    PubMed

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P

    2012-11-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds' performance on a second measure, the perceptual unity task. Two parameters in the model - corresponding to areas in the occipital and parietal cortices - were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3-month-olds on the unity perception task. Second, the model also helps to explain the improved performance of 2-month-olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3-month-olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill. © 2012 Blackwell Publishing Ltd.

  8. A main path domain map as digital library interface

    NASA Astrophysics Data System (ADS)

    Demaine, Jeffrey

    2009-01-01

    The shift to electronic publishing of scientific journals is an opportunity for the digital library to provide non-traditional ways of accessing the literature. One method is to use citation metadata drawn from a collection of electronic journals to generate maps of science. These maps visualize the communication patterns in the collection, giving the user an easy-to-grasp view of the semantic structure underlying the scientific literature. For this visualization to be understandable, the complexity of the citation network must be reduced through an algorithm. This paper describes the Citation Pathfinder application and its integration into a prototype digital library. This application generates small-scale citation networks that expand upon the search results of the digital library. These domain maps are linked to the collection, creating an interface that is based on the communication patterns in science. The Main Path Analysis technique is employed to simplify these networks into linear, sequential structures. By identifying patterns that characterize the evolution of the research field, Citation Pathfinder uses citations to give users a deeper understanding of the scientific literature.
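
    Main Path Analysis reduces a citation network to a linear backbone by weighting each citation arc by how many source-to-sink paths traverse it (the search path count, SPC) and then following the heaviest arcs. The sketch below illustrates the idea on a hypothetical six-paper DAG with a greedy traversal; the toy graph, the greedy rule, and the absence of tie-breaking refinements are simplifications, not a description of Citation Pathfinder's actual implementation.

      # Hedged sketch of SPC-weighted Main Path Analysis on a toy citation DAG (see note above).
      from functools import lru_cache

      # Adjacency list: cited (earlier) paper -> papers that cite it, so knowledge flows forward.
      GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"], "F": []}
      PARENTS = {n: [u for u, vs in GRAPH.items() if n in vs] for n in GRAPH}

      @lru_cache(maxsize=None)
      def paths_from_sources(n):        # number of distinct source-to-n paths
          return 1 if not PARENTS[n] else sum(paths_from_sources(p) for p in PARENTS[n])

      @lru_cache(maxsize=None)
      def paths_to_sinks(n):            # number of distinct n-to-sink paths
          return 1 if not GRAPH[n] else sum(paths_to_sinks(v) for v in GRAPH[n])

      def spc(u, v):
          # Search path count of arc (u, v): how many complete source-to-sink paths use it.
          return paths_from_sources(u) * paths_to_sinks(v)

      def main_path(start):
          # Greedy traversal: from a source, always follow the outgoing arc with the highest SPC.
          path = [start]
          while GRAPH[path[-1]]:
              path.append(max(GRAPH[path[-1]], key=lambda v: spc(path[-1], v)))
          return path

      print(main_path("A"))   # -> ['A', 'C', 'D', 'F'] on this toy network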

  9. Establishing the behavioural limits for countershaded camouflage.

    PubMed

    Penacchio, Olivier; Harris, Julie M; Lovell, P George

    2017-10-20

    Countershading is a ubiquitous patterning of animals whereby the side that typically faces the highest illumination is darker. When tuned to specific lighting conditions and body orientation with respect to the light field, countershading minimizes the gradient of light the body reflects by counterbalancing shadowing due to illumination, and has therefore classically been thought of as an adaptation for visual camouflage. However, whether and how crypsis degrades when body orientation with respect to the light field is non-optimal has never been studied. We tested the behavioural limits on body orientation for countershading to deliver effective visual camouflage. We asked human participants to detect a countershaded target in a simulated three-dimensional environment. The target was optimally coloured for crypsis in a reference orientation and was displayed at different orientations. Search performance dramatically improved for deviations beyond 15 degrees. Detection time was significantly shorter and accuracy significantly higher than when the target orientation matched the countershading pattern. This work demonstrates the importance of maintaining body orientation appropriate for the displayed camouflage pattern, suggesting a possible selective pressure for animals to orient themselves appropriately to enhance crypsis.

  10. Attention capture by eye of origin singletons even without awareness--a hallmark of a bottom-up saliency map in the primary visual cortex.

    PubMed

    Zhaoping, Li

    2008-05-07

    Human observers are typically unaware of the eye of origin of visual inputs. This study shows that an eye of origin or ocular singleton, e.g., an item in the left eye among background items in the right eye, can nevertheless attract attention automatically. Observers searched for a uniquely oriented bar, i.e., an orientation singleton, in a background of horizontal bars. Their reports of the tilt direction of the search target in a brief (200 ms) display were more accurate in a dichoptic congruent (DC) condition, when the target was also an ocular singleton, than in a monocular (M) condition, when all bars were presented to the same single eye, or a dichoptic incongruent (DI) condition, when an ocular singleton was a background bar. The better performance in DC did not depend on the ability of the observers to report the presence of an ocular singleton by making forced choices in the same stimuli (though without the orientation singleton). This suggests that the ocular singleton exogenously cued attention to its location, facilitating the identification of the tilt singleton in the DC condition. When the search display persisted without being masked, observers' reaction times (RTs) for reporting the location of the search target were shorter in the DC, and longer in the DI, than the M condition, regardless of whether the observers were aware that different conditions existed. In an analogous design, similar RT patterns were observed for the task of finding an orientation contrast texture border. These results suggest that in typical trials, attention was more quickly attracted to or initially distracted from the target in the DC or DI condition, respectively. Hence, an ocular singleton, though elusive to awareness, can effectively compete for attention with an orientation singleton (tilted 20 or 50 degrees from background bars in the current study). Similarly, it can also make a difficult visual search easier by diminishing the set size effect. Since monocular neurons with the eye of origin information are abundant in the primary visual cortex (V1) and scarce in other cortical areas, and since visual awareness is believed to be absent or weaker in V1 than in other cortical areas, our results provide a hallmark of the role of V1 in creating a bottom-up saliency map to guide attentional selection.

  11. Prospects and limitations of citizen science in invasive species management: A case study with Burmese pythons in Everglades National Park

    USGS Publications Warehouse

    Falk, Bryan; Snow, Raymond W.; Reed, Robert

    2016-01-01

    Citizen-science programs have the potential to contribute to the management of invasive species, including Python molurus bivittatus (Burmese Python) in Florida. We characterized citizen-science–generated Burmese Python information from Everglades National Park (ENP) to explore how citizen science may be useful in this effort. As an initial step, we compiled and summarized records of Burmese Python observations and removals collected by both professional and citizen scientists in ENP during 2000–2014 and found many patterns of possible significance, including changes in annual observations and in demographic composition after a cold event. These patterns are difficult to confidently interpret because the records lack search-effort information, however, and differences among years may result from differences in search effort. We began collecting search-effort information in 2014 by leveraging an ongoing citizen-science program in ENP. Program participation was generally low, with most authorized participants in 2014 not searching for the snakes at all. We discuss the possible explanations for low participation, especially how the low likelihood of observing pythons weakens incentives to search. The monthly rate of Burmese Python observations for 2014 averaged ~1 observation for every 8 h of searching, but during several months, the rate was 1 python per >40 h of searching. These low observation-rates are a natural outcome of the snakes’ low detectability—few Burmese Pythons are likely to be observed even if many are present. The general inaccessibility of the southern Florida landscape also severely limits the effectiveness of using visual searches to find and remove pythons for the purposes of population control. Instead, and despite the difficulties in incentivizing voluntary participation, the value of citizen-science efforts in the management of the Burmese Python population is in collecting search-effort information.

  12. The role of object categories in hybrid visual and memory search

    PubMed Central

    Cunningham, Corbin A.; Wolfe, Jeremy M.

    2014-01-01

    In hybrid search, observers (Os) search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RT) in such tasks increased linearly with increases in the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g. this apple in this pose). Typical real-world tasks involve more broadly defined sets of stimuli (e.g. any “apple” or, perhaps, “fruit”). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, Os searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli are drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PMID:24661054
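
    One simple way to summarize the pattern reported above is an additive model in which RT grows linearly with the number of display items and with the log of the memory set size, RT ≈ a + b·N_display + c·log2(N_memory). The sketch below illustrates that reading of the result; the coefficients are made-up illustrative values, not estimates from Wolfe (2012) or from these experiments.

        import math

        def hybrid_search_rt(n_display, n_memory, a=0.5, b=0.04, c=0.15):
            """Toy hybrid-search RT model (seconds).

            a: base time, b: cost per visual item (linear),
            c: cost per doubling of the memory set (logarithmic).
            All coefficients are illustrative, not fitted values.
            """
            return a + b * n_display + c * math.log2(n_memory)

        # RT grows slowly as the memory set doubles, for a fixed 12-item display
        for n_mem in (1, 2, 4, 8, 16):
            print(n_mem, round(hybrid_search_rt(n_display=12, n_memory=n_mem), 3))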

  13. Do Multielement Visual Tracking and Visual Search Draw Continuously on the Same Visual Attention Resources?

    ERIC Educational Resources Information Center

    Alvarez, George A.; Horowitz, Todd S.; Arsenio, Helga C.; DiMase, Jennifer S.; Wolfe, Jeremy M.

    2005-01-01

    Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers…

  14. Frontal–Occipital Connectivity During Visual Search

    PubMed Central

    Pantazatos, Spiro P.; Yanagihara, Ted K.; Zhang, Xian; Meitzler, Thomas

    2012-01-01

    Although expectation- and attention-related interactions between ventral and medial prefrontal cortex and stimulus category-selective visual regions have been identified during visual detection and discrimination, it is not known if similar neural mechanisms apply to other tasks such as visual search. The current work tested the hypothesis that high-level frontal regions, previously implicated in expectation and visual imagery of object categories, interact with visual regions associated with object recognition during visual search. Using functional magnetic resonance imaging, subjects searched for a specific object that varied in size and location within a complex natural scene. A model-free, spatial-independent component analysis isolated multiple task-related components, one of which included visual cortex, as well as a cluster within ventromedial prefrontal cortex (vmPFC), consistent with the engagement of both top-down and bottom-up processes. Analyses of psychophysiological interactions showed increased functional connectivity between vmPFC and object-sensitive lateral occipital cortex (LOC), and results from dynamic causal modeling and Bayesian Model Selection suggested bidirectional connections between vmPFC and LOC that were positively modulated by the task. Using image-guided diffusion-tensor imaging, functionally seeded, probabilistic white-matter tracts between vmPFC and LOC, which presumably underlie this effective interconnectivity, were also observed. These connectivity findings extend previous models of visual search processes to include specific frontal–occipital neuronal interactions during a natural and complex search task. PMID:22708993

  15. Eye guidance during real-world scene search: The role color plays in central and peripheral vision.

    PubMed

    Nuthmann, Antje; Malcolm, George L

    2016-01-01

    The visual system utilizes environmental features to direct gaze efficiently when locating objects. While previous research has isolated various features' contributions to gaze guidance, these studies generally used sparse displays and did not investigate how features facilitated search as a function of their location in the visual field. The current study investigated how features across the visual field--particularly color--facilitate gaze guidance during real-world search. A gaze-contingent window followed participants' eye movements, restricting color information to specified regions. Scene images were presented in one of four ways: in full color; with color in the periphery and gray in central vision; with gray in the periphery and color in central vision; or entirely in grayscale. Color conditions were crossed with a search cue manipulation, with the target cued either with a word label or an exact picture. Search times increased as color information in the scene decreased. A gaze-data based decomposition of search time revealed color-mediated effects on specific subprocesses of search. Color in peripheral vision facilitated target localization, whereas color in central vision facilitated target verification. Picture cues facilitated search, with the effects of cue specificity and scene color combining additively. When available, the visual system utilizes the environment's color information to facilitate different real-world visual search behaviors based on the location within the visual field.

  16. Impact of Glaucoma and Dry Eye on Text-Based Searching

    PubMed Central

    Sun, Michelle J.; Rubin, Gary S.; Akpek, Esen K.; Ramulu, Pradeep Y.

    2017-01-01

    Purpose We determine if visual field loss from glaucoma and/or measures of dry eye severity are associated with difficulty searching, as judged by slower search times on a text-based search task. Methods Glaucoma patients with bilateral visual field (VF) loss, patients with clinically significant dry eye, and normally-sighted controls were enrolled from the Wilmer Eye Institute clinics. Subjects searched three Yellow Pages excerpts for a specific phone number, and search time was recorded. Results A total of 50 glaucoma subjects, 40 dry eye subjects, and 45 controls completed study procedures. On average, glaucoma patients exhibited 57% longer search times compared to controls (95% confidence interval [CI], 26%–96%, P < 0.001), and longer search times were noted among subjects with greater VF loss (P < 0.001), worse contrast sensitivity (P < 0.001), and worse visual acuity (P = 0.026). Dry eye subjects demonstrated similar search times compared to controls, though worse Ocular Surface Disease Index (OSDI) vision-related subscores were associated with longer search times (P < 0.01). Search times showed no association with OSDI symptom subscores (P = 0.20) or objective measures of dry eye (P > 0.08 for Schirmer's testing without anesthesia, corneal fluorescein staining, and tear film breakup time). Conclusions Text-based visual search is slower for glaucoma patients with greater levels of VF loss and dry eye patients with greater self-reported visual difficulty, and these difficulties may contribute to decreased quality of life in these groups. Translational Relevance Visual search is impaired in glaucoma and dry eye groups compared to controls, highlighting the need for compensatory strategies and tools to assist individuals in overcoming their deficiencies. PMID:28670502

  17. Improving visual search in instruction manuals using pictograms.

    PubMed

    Kovačević, Dorotea; Brozović, Maja; Možina, Klementina

    2016-11-01

    Instruction manuals provide important messages about the proper use of a product. They should communicate in such a way that they facilitate users' searches for specific information. Despite the increasing research interest in visual search, there is a lack of empirical knowledge concerning the role of pictograms in search performance during the browsing of a manual's pages. This study investigates how the inclusion of pictograms improves the search for the target information. Furthermore, it examines whether this search process is influenced by the visual similarity between the pictograms and the searched for information. On the basis of eye-tracking measurements, as objective indicators of the participants' visual attention, it was found that pictograms can be a useful element of search strategy. Another interesting finding was that boldface highlighting is a more effective method for improving user experience in information seeking, rather than the similarity between the pictorial and adjacent textual information. Implications for designing effective user manuals are discussed. Practitioner Summary: Users often view instruction manuals with the aim of finding specific information. We used eye-tracking technology to examine different manual pages in order to improve the user's visual search for target information. The results indicate that the use of pictograms and bold highlighting of relevant information facilitate the search process.

  18. Age-Related Differences in Vehicle Control and Eye Movement Patterns at Intersections: Older and Middle-Aged Drivers

    PubMed Central

    Yamani, Yusuke; Horrey, William J.; Liang, Yulan; Fisher, Donald L.

    2016-01-01

    Older drivers are at increased risk of intersection crashes. Previous work found that older drivers execute less frequent glances for detecting potential threats at intersections than middle-aged drivers. Yet, earlier work has also shown that an active training program doubled the frequency of these glances among older drivers, suggesting that these effects are not necessarily due to age-related functional declines. In light of these findings, the current study sought to explore the ability of older drivers to coordinate their head and eye movements while simultaneously steering the vehicle, as well as their glance behavior at intersections. In a driving simulator, older (M = 76 yrs) and middle-aged (M = 58 yrs) drivers completed different driving tasks: (1) travelling straight on a highway while scanning for peripheral information (a visual search task) and (2) navigating intersections with areas of potential hazard. The results replicate earlier findings that older drivers execute glances for potential threats to the sides less frequently than middle-aged drivers when turning at intersections. Furthermore, the results demonstrate costs of performing two concurrent tasks (highway driving and the visual search task on the side displays): the older drivers performed more poorly on the visual search task and needed to correct their steering positions more than their middle-aged counterparts. The findings are consistent with the predictions and are discussed in terms of a decoupling hypothesis, providing an account for the effects of the active training program. PMID:27736887

  19. Eye Movements Reveal How Task Difficulty Moulds Visual Search

    ERIC Educational Resources Information Center

    Young, Angela H.; Hulleman, Johan

    2013-01-01

    In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…

  20. Global Statistical Learning in a Visual Search Task

    ERIC Educational Resources Information Center

    Jones, John L.; Kaschak, Michael P.

    2012-01-01

    Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…

  1. Fractal fluctuations in gaze speed visual search.

    PubMed

    Stephen, Damian G; Anastas, Jason

    2011-04-01

    Visual search involves a subtle coordination of visual memory and lower-order perceptual mechanisms. Specifically, the fluctuations in gaze may provide support for visual search above and beyond what may be attributed to memory. Prior research indicates that gaze during search exhibits fractal fluctuations, which allow for a wide sampling of the field of view. Fractal fluctuations constitute a case of fast diffusion that may provide an advantage in exploration. We present reanalyses of eye-tracking data collected by Stephen and Mirman (Cognition, 115, 154-165, 2010) for single-feature and conjunction search tasks. Fluctuations in gaze during these search tasks were indeed fractal. Furthermore, the degree of fractality predicted decreases in reaction time on a trial-by-trial basis. We propose that fractality may play a key role in explaining the efficacy of perceptual exploration.
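
    Fractal scaling of this kind is commonly estimated with detrended fluctuation analysis (DFA): the gaze-speed series is integrated, detrended within windows of increasing size, and the slope of log fluctuation against log window size gives the scaling exponent. The sketch below is a minimal DFA implementation for illustration only; it is not the specific analysis pipeline used by Stephen and Anastas (2011), and the example input is synthetic.

        import numpy as np

        def dfa_exponent(series, scales=(8, 16, 32, 64, 128)):
            """Estimate the DFA scaling exponent of a 1-D series (e.g., gaze speed)."""
            x = np.asarray(series, dtype=float)
            profile = np.cumsum(x - x.mean())            # integrated series
            flucts = []
            for s in scales:
                n_seg = len(profile) // s
                rms = []
                for i in range(n_seg):
                    seg = profile[i * s:(i + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
                    rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
                flucts.append(np.mean(rms))
            # slope of log F(s) against log s is the scaling exponent alpha
            return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

        rng = np.random.default_rng(1)
        print(dfa_exponent(rng.normal(size=4096)))       # white noise -> alpha near 0.5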

  2. Prediction of shot success for basketball free throws: visual search strategy.

    PubMed

    Uchida, Yusuke; Mizuguchi, Nobuaki; Honda, Masaaki; Kanosue, Kazuyuki

    2014-01-01

    In ball games, players have to pay close attention to visual information in order to predict the movements of both the opponents and the ball. Previous studies have indicated that players primarily utilise cues concerning the ball and opponents' body motion. The information acquired must be effective for observing players to select the subsequent action. The present study evaluated the effects of changes in the video replay speed on the spatial visual search strategy and ability to predict free throw success. We compared eye movements made while observing a basketball free throw by novices and experienced basketball players. Correct response rates were close to chance (50%) at all video speeds for the novices. The correct response rate of experienced players was significantly above chance (and significantly above that of the novices) at the normal speed, but was not different from chance at both slow and fast speeds. Experienced players gazed more on the lower part of the player's body when viewing a normal speed video than the novices. The players likely detected critical visual information to predict shot success by properly moving their gaze according to the shooter's movements. This pattern did not change when the video speed was decreased, but changed when it was increased. These findings suggest that temporal information is important for predicting action outcomes and that such outcomes are sensitive to video speed.

  3. Hyperspace geography: visualizing fitness landscapes beyond 4D.

    PubMed

    Wiles, Janet; Tonkes, Bradley

    2006-01-01

    Human perception is finely tuned to extract structure about the 4D world of time and space as well as properties such as color and texture. Developing intuitions about spatial structure beyond 4D requires exploiting other perceptual and cognitive abilities. One of the most natural ways to explore complex spaces is for a user to actively navigate through them, using local explorations and global summaries to develop intuitions about structure, and then testing the developing ideas by further exploration. This article provides a brief overview of a technique for visualizing surfaces defined over moderate-dimensional binary spaces, by recursively unfolding them onto a 2D hypergraph. We briefly summarize the uses of a freely available Web-based visualization tool, Hyperspace Graph Paper (HSGP), for exploring fitness landscapes and search algorithms in evolutionary computation. HSGP provides a way for a user to actively explore a landscape, from simple tasks such as mapping the neighborhood structure of different points, to seeing global properties such as the size and distribution of basins of attraction or how different search algorithms interact with landscape structure. It has been most useful for exploring recursive and repetitive landscapes, and its strength is that it allows intuitions to be developed through active navigation by the user, and exploits the visual system's ability to detect pattern and texture. The technique is most effective when applied to continuous functions over Boolean variables using 4 to 16 dimensions.
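
    The recursive unfolding idea can be approximated with a Gray-code layout: split each bit string into row and column halves and order rows and columns by Gray code, so that horizontally or vertically adjacent cells differ in exactly one bit. The sketch below applies this simplified layout to a toy fitness function; it illustrates the general principle only and is not the exact unfolding used by Hyperspace Graph Paper.

        import numpy as np

        def inverse_gray(g: int) -> int:
            """Return the index whose Gray code equals g (i.e., gray(index) == g)."""
            n = 0
            while g:
                n ^= g
                g >>= 1
            return n

        def unfold(fitness, n_bits):
            """Lay an n-bit fitness function out on a 2-D grid.

            Row bits are the high half of the genotype, column bits the low half;
            Gray-code ordering makes every horizontal or vertical neighbour differ
            from its neighbours in exactly one bit.
            """
            n_row = n_bits // 2
            n_col = n_bits - n_row
            grid = np.empty((2 ** n_row, 2 ** n_col))
            for g in range(2 ** n_bits):
                row_bits, col_bits = g >> n_col, g & ((1 << n_col) - 1)
                grid[inverse_gray(row_bits), inverse_gray(col_bits)] = fitness(g)
            return grid

        # Example: a simple "onemax" landscape over 6 Boolean variables
        grid = unfold(lambda g: bin(g).count("1"), 6)
        print(grid)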

  4. Disease Monitoring and Health Campaign Evaluation Using Google Search Activities for HIV and AIDS, Stroke, Colorectal Cancer, and Marijuana Use in Canada: A Retrospective Observational Study.

    PubMed

    Ling, Rebecca; Lee, Joon

    2016-10-12

    Infodemiology can offer practical and feasible health research applications through the practice of studying information available on the Web. Google Trends provides publicly accessible information regarding search behaviors in a population, which may be studied and used for health campaign evaluation and disease monitoring. Additional studies examining the use and effectiveness of Google Trends for these purposes remain warranted. The objective of our study was to explore the use of infodemiology in the context of health campaign evaluation and chronic disease monitoring. It was hypothesized that following a launch of a campaign, there would be an increase in information seeking behavior on the Web. Second, increasing and decreasing disease patterns in a population would be associated with search activity patterns. This study examined 4 different diseases: human immunodeficiency virus (HIV) infection, stroke, colorectal cancer, and marijuana use. Using Google Trends, relative search volume data were collected throughout the period of February 2004 to January 2015. Campaign information and disease statistics were obtained from governmental publications. Search activity trends were graphed and assessed with disease trends and the campaign interval. Pearson product correlation statistics and joinpoint methodology analyses were used to determine significance. Disease patterns and online activity across all 4 diseases were significantly correlated: HIV infection (r=.36, P<.001), stroke (r=.40, P<.001), colorectal cancer (r= -.41, P<.001), and substance use (r=.64, P<.001). Visual inspection and the joinpoint analysis showed significant correlations for the campaigns on colorectal cancer and marijuana use in stimulating search activity. No significant correlations were observed for the campaigns on stroke and HIV regarding search activity. The use of infoveillance shows promise as an alternative and inexpensive solution to disease surveillance and health campaign evaluation. Further research is needed to understand Google Trends as a valid and reliable tool for health research.
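
    The core quantitative step in this analysis is a Pearson correlation between monthly relative search volume and the corresponding disease statistics. A minimal sketch of that step is shown below; the two series are hypothetical placeholders rather than the Canadian data used in the study, and the joinpoint analysis is not reproduced here.

        from scipy.stats import pearsonr

        # Hypothetical monthly series (same length, aligned by month)
        relative_search_volume = [42, 45, 51, 48, 55, 60, 58, 62, 65, 63, 70, 72]
        reported_cases         = [110, 112, 118, 117, 123, 130, 128, 131, 137, 135, 141, 144]

        r, p = pearsonr(relative_search_volume, reported_cases)
        print(f"r = {r:.2f}, p = {p:.3g}")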

  5. [Selective attention and schizophrenia before the administration of neuroleptics].

    PubMed

    Lussier, I; Stip, E

    1999-01-01

    In recent years, the presence of attention deficits has been recognized as a key feature of schizophrenia. Past studies reveal that selective attention, or the ability to select relevant information while simultaneously ignoring irrelevant information, is disturbed in schizophrenic patients. According to Treisman's feature-integration theory of selective attention, visual search for conjunctive targets (e.g., shape and color) requires controlled processes that demand attention and operate in a serial manner. Reaction times (RTs) are therefore a function of the number of stimuli in the display. When subjects are asked to detect the presence or absence of a target in an array of a variable number of stimuli, different performance patterns are expected for positive (target present) and negative (target absent) trials. For positive trials, a self-terminating search is triggered, that is, the search ends when the target is encountered. For negative trials, an exhaustive search strategy is required, in which each stimulus is examined before the search can end; the RT slope is thus roughly double that of the positive trials. To assess the integrity of these processes, thirteen drug-naive schizophrenic patients were compared to twenty normal control subjects. Neuroleptic-naive patients were chosen as subjects to avoid the potential influence of medication and chronicity-related factors on performance. The subjects had to indicate as quickly as possible the presence or absence of the target in a circular array containing a variable number of stimuli, which did or did not include the target. Results showed that the patients can use self-terminating search strategies as well as normal control subjects. However, their ability to trigger exhaustive search strategies is impaired. Not only were patients slower than controls, but their pattern of RT results was different. These results argue in favor of an early impairment in selective attention capacities in schizophrenia, which appears before the introduction of neuroleptics. Attention performance also showed some association with clinical symptoms.
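
    The self-terminating versus exhaustive prediction can be made concrete with a small simulation: under a serial search, a present target is found after examining, on average, about half of the items, whereas an absent response requires examining all of them, so the target-absent RT slope is roughly twice the target-present slope. The per-item and base times in the sketch below are arbitrary and purely illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def serial_search_rt(n_items, target_present, t_item=0.05, t_base=0.4):
            """Simulated RT (s) for one trial of a serial search.

            Self-terminating when the target is present (it is found at a random
            position); exhaustive when it is absent. Time constants are arbitrary.
            """
            checked = rng.integers(1, n_items + 1) if target_present else n_items
            return t_base + t_item * checked

        set_sizes = [4, 8, 16, 32]
        for present in (True, False):
            mean_rts = [np.mean([serial_search_rt(n, present) for _ in range(5000)])
                        for n in set_sizes]
            slope_ms = np.polyfit(set_sizes, mean_rts, 1)[0] * 1000
            label = "present" if present else "absent"
            print(f"target {label}: slope ~ {slope_ms:.0f} ms/item")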

  7. The course of visual searching to a target in a fixed location: electrophysiological evidence from an emotional flanker task.

    PubMed

    Dong, Guangheng; Yang, Lizhu; Shen, Yue

    2009-08-21

    The present study investigated the course of visual searching to a target in a fixed location, using an emotional flanker task. Event-related potentials (ERPs) were recorded while participants performed the task. Emotional facial expressions were used as emotion-eliciting triggers. The course of visual searching was analyzed through the emotional effects arising from these emotion-eliciting stimuli. The flanker stimuli showed effects at about 150-250 ms following stimulus onset, while the target stimuli showed effects at about 300-400 ms. The visual search sequence in an emotional flanker task thus moved from a whole overview to a specific target, even though the target always appeared at a known location. The processing sequence was "parallel" in this task. The results supported the feature integration theory of visual search.

  8. Evidence for an attentional component of inhibition of return in visual search.

    PubMed

    Pierce, Allison M; Crouse, Monique D; Green, Jessica J

    2017-11-01

    Inhibition of return (IOR) is typically described as an inhibitory bias against returning attention to a recently attended location as a means of promoting efficient visual search. Most studies examining IOR, however, either do not use visual search paradigms or do not effectively isolate attentional processes, making it difficult to conclusively link IOR to a bias in attention. Here, we recorded ERPs during a simple visual search task designed to isolate the attentional component of IOR to examine whether an inhibitory bias of attention is observed and, if so, how it influences visual search behavior. Across successive visual search displays, we found evidence of both a broad, hemisphere-wide inhibitory bias of attention along with a focal, target location-specific facilitation. When the target appeared in the same visual hemifield in successive searches, responses were slower and the N2pc component was reduced, reflecting a bias of attention away from the previously attended side of space. When the target occurred at the same location in successive searches, responses were facilitated and the P1 component was enhanced, likely reflecting spatial priming of the target. These two effects are combined in the response times, leading to a reduction in the IOR effect for repeated target locations. Using ERPs, however, these two opposing effects can be isolated in time, demonstrating that the inhibitory biasing of attention still occurs even when response-time slowing is ameliorated by spatial priming. © 2017 Society for Psychophysiological Research.

  9. The Role of Prediction In Perception: Evidence From Interrupted Visual Search

    PubMed Central

    Mereu, Stefania; Zacks, Jeffrey M.; Kurby, Christopher A.; Lleras, Alejandro

    2014-01-01

    Recent studies of rapid resumption—an observer’s ability to quickly resume a visual search after an interruption—suggest that predictions underlie visual perception. Previous studies showed that when the search display changes unpredictably after the interruption, rapid resumption disappears. This conclusion is at odds with our everyday experience, where the visual system seems to be quite efficient despite continuous changes of the visual scene; however, in the real world, changes can typically be anticipated based on previous knowledge. The present study aimed to evaluate whether changes to the visual display can be incorporated into the perceptual hypotheses, if observers are allowed to anticipate such changes. Results strongly suggest that an interrupted visual search can be rapidly resumed even when information in the display has changed after the interruption, so long as participants can not only anticipate those changes but are also aware that such changes might occur. PMID:24820440

  10. A novel visualization model for web search results.

    PubMed

    Nguyen, Tien N; Zhang, Jin

    2006-01-01

    This paper presents an interactive visualization system, named WebSearchViz, for visualizing Web search results and facilitating users' navigation and exploration. The metaphor in our model is the solar system with its planets and asteroids revolving around the sun. Location, color, movement, and spatial distance of objects in the visual space are used to represent the semantic relationships between a query and relevant Web pages. In particular, the movement of objects and their speeds add a new dimension to the visual space, illustrating the degree of relevance between a query and the Web search results in the context of users' subjects of interest. By interacting with the visual space, users are able to observe the semantic relevance between a query and a resulting Web page with respect to their subjects of interest, context information, or concern. Users' subjects of interest can be dynamically changed, redefined, added, or deleted from the visual space.

  11. VisGets: coordinated visualizations for web-based information exploration and discovery.

    PubMed

    Dörk, Marian; Carpendale, Sheelagh; Collins, Christopher; Williamson, Carey

    2008-01-01

    In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets--interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.

  12. Search guidance is proportional to the categorical specificity of a target cue.

    PubMed

    Schmidt, Joseph; Zelinsky, Gregory J

    2009-10-01

    Visual search studies typically assume the availability of precise target information to guide search, often a picture of the exact target. However, search targets in the real world are often defined categorically and with varying degrees of visual specificity. In five target preview conditions we manipulated the availability of target visual information in a search task for common real-world objects. Previews were: a picture of the target, an abstract textual description of the target, a precise textual description, an abstract + colour textual description, or a precise + colour textual description. Guidance generally increased as information was added to the target preview. We conclude that the information used for search guidance need not be limited to a picture of the target. Although generally less precise, to the extent that visual information can be extracted from a target label and loaded into working memory, this information too can be used to guide search.

  13. Typical visual search performance and atypical gaze behaviors in response to faces in Williams syndrome.

    PubMed

    Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho

    2016-01-01

    Evidence indicates that individuals with Williams syndrome (WS) exhibit atypical attentional characteristics when viewing faces. However, the dynamics of visual attention captured by faces remain unclear, especially when explicit attentional forces are present. To clarify this, we introduced a visual search paradigm and assessed how the relative strength of visual attention captured by a face and explicit attentional control changes as search progresses. Participants (WS and controls) searched for a target (butterfly) within an array of distractors, which sometimes contained an upright face. We analyzed reaction time and location of the first fixation-which reflect the attentional profile at the initial stage-and fixation durations. These features represent aspects of attention at later stages of visual search. The strength of visual attention captured by faces and explicit attentional control (toward the butterfly) was characterized by the frequency of first fixations on a face or butterfly and on the duration of face or butterfly fixations. Although reaction time was longer in all groups when faces were present, and visual attention was not dominated by faces in any group during the initial stages of the search, when faces were present, attention to faces dominated in the WS group during the later search stages. Furthermore, for the WS group, reaction time correlated with eye-movement measures at different stages of searching such that longer reaction times were associated with longer face-fixations, specifically at the initial stage of searching. Moreover, longer reaction times were associated with longer face-fixations at the later stages of searching, while shorter reaction times were associated with longer butterfly fixations. The relative strength of attention captured by faces in people with WS is not observed at the initial stage of searching but becomes dominant as the search progresses. Furthermore, although behavioral responses are associated with some aspects of eye movements, they are not as sensitive as eye-movement measurements themselves at detecting atypical attentional characteristics in people with WS.

  14. Visual search for features and conjunctions following declines in the useful field of view.

    PubMed

    Cosman, Joshua D; Lees, Monica N; Lee, John D; Rizzo, Matthew; Vecera, Shaun P

    2012-01-01

    BACKGROUND/STUDY CONTEXT: Typical measures for assessing the useful field of view (UFOV) involve many components of attention. The objective of the current experiment was to examine differences in visual search efficiency for older individuals with and without UFOV impairment. The authors used a computerized screening instrument to assess the useful field of view and to characterize participants as having an impaired or normal UFOV. Participants also performed two visual search tasks, a feature search (e.g., search for a green target among red distractors) or a conjunction search (e.g., a green target with a gap on its left or right side among red distractors with gaps on the left or right and green distractors with gaps on the top or bottom). Visual search performance did not differ between UFOV-impaired and unimpaired individuals when searching for a basic feature. However, search efficiency was lower for impaired individuals than for unimpaired individuals when searching for a conjunction of features. The results suggest that UFOV decline in normal aging is associated specifically with less efficient conjunction search, and that this decline may arise from an overall decline in attentional efficiency. Because the useful field of view is a reliable predictor of driving safety, the results suggest that decline in the everyday visual behavior of older adults might arise from attentional declines.

  15. History effects in visual search for monsters: search times, choice biases, and liking.

    PubMed

    Chetverikov, Andrey; Kristjansson, Árni

    2015-02-01

    Repeating targets and distractors on consecutive visual search trials facilitates search performance, whereas switching targets and distractors harms search. In addition, search repetition leads to biases in free choice tasks, in that previously attended targets are more likely to be chosen than distractors. Another line of research has shown that attended items receive high liking ratings, whereas ignored distractors are rated negatively. Potential relations between the three effects are unclear, however. Here we simultaneously measured repetition benefits and switching costs for search times, choice biases, and liking ratings in color singleton visual search for "monster" shapes. We showed that when expectations based on search repetition are violated, targets are liked less than when those expectations are met. Choice biases were, on the other hand, affected by distractor repetition, but not by target/distractor switches. Target repetition speeded search times but had little influence on choice or liking. Our findings suggest that choice biases reflect distractor inhibition, and liking reflects the conflict associated with attending to previously inhibited stimuli, while speeded search follows both target and distractor repetition. Our results support the newly proposed affective-feedback-of-hypothesis-testing account of cognition, and additionally shed new light on the priming of visual search.

  16. The involvement of central attention in visual search is determined by task demands.

    PubMed

    Han, Suk Won

    2017-04-01

    Attention, the mechanism by which a subset of sensory inputs is prioritized over others, operates at multiple processing stages. Specifically, attention enhances weak sensory signal at the perceptual stage, while it serves to select appropriate responses or consolidate sensory representations into short-term memory at the central stage. This study investigated the independence and interaction between perceptual and central attention. To do so, I used a dual-task paradigm, pairing a four-alternative choice task with a visual search task. The results showed that central attention for response selection was engaged in perceptual processing for visual search when the number of search items increased, thereby increasing the demand for serial allocation of focal attention. By contrast, central attention and perceptual attention remained independent as far as the demand for serial shifting of focal attention remained constant; decreasing stimulus contrast or increasing the set size of a parallel search did not evoke the involvement of central attention in visual search. These results suggest that the nature of concurrent visual search process plays a crucial role in the functional interaction between two different types of attention.

  17. Finding an emotional face in a crowd: emotional and perceptual stimulus factors influence visual search efficiency.

    PubMed

    Lundqvist, Daniel; Bruce, Neil; Öhman, Arne

    2015-01-01

    In this article, we examine how emotional and perceptual stimulus factors influence visual search efficiency. In an initial task, we ran a visual search task using a large number of target/distractor emotion combinations. In two subsequent tasks, we then assessed measures of perceptual (rated and computational distances) and emotional (rated valence, arousal and potency) stimulus properties. In a series of regression analyses, we then explored the degree to which target salience (the size of target/distractor dissimilarities) on these emotional and perceptual measures predicts search efficiency (response times and accuracy) in the visual search task. The results show that both emotional and perceptual stimulus salience contribute to visual search efficiency and that, among the emotional measures, salience on arousal measures was more influential than valence salience. The importance of the arousal factor may help explain the contradictory history of results within this field.
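
    The regression step described above amounts to predicting a search-efficiency measure (e.g., mean response time for a target/distractor pair) from that pair's perceptual and emotional salience scores. The sketch below shows one way to set up such a multiple regression with ordinary least squares; the predictor and outcome values are hypothetical, not the stimulus ratings from this study.

        import numpy as np

        # Hypothetical salience scores per target/distractor combination:
        # columns = perceptual distance, arousal difference, valence difference
        salience = np.array([
            [0.2, 0.1, 0.3],
            [0.5, 0.4, 0.2],
            [0.7, 0.6, 0.5],
            [0.9, 0.8, 0.4],
            [0.4, 0.3, 0.6],
            [0.6, 0.7, 0.1],
        ])
        mean_rt = np.array([0.92, 0.78, 0.66, 0.58, 0.81, 0.70])  # seconds, hypothetical

        X = np.column_stack([np.ones(len(salience)), salience])   # add intercept column
        coefs, *_ = np.linalg.lstsq(X, mean_rt, rcond=None)
        predicted = X @ coefs
        r2 = 1 - np.sum((mean_rt - predicted) ** 2) / np.sum((mean_rt - mean_rt.mean()) ** 2)
        print("intercept and weights:", np.round(coefs, 3), " R^2:", round(r2, 2))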

  18. Crowded visual search in children with normal vision and children with visual impairment.

    PubMed

    Huurneman, Bianca; Cox, Ralf F A; Vlaskamp, Björn N S; Boonstra, F Nienke

    2014-03-01

    This study investigates the influence of oculomotor control, crowding, and attentional factors on visual search in children with normal vision ([NV], n=11), children with visual impairment without nystagmus ([VI-nys], n=11), and children with VI with accompanying nystagmus ([VI+nys], n=26). Exclusion criteria for children with VI were: multiple impairments and visual acuity poorer than 20/400 or better than 20/50. Three search conditions were presented: a row with homogeneous distractors, a matrix with homogeneous distractors, and a matrix with heterogeneous distractors. Element spacing was manipulated in 5 steps from 2 to 32 minutes of arc. Symbols were sized 2 times the threshold acuity to guarantee visibility for the VI groups. During simple row and matrix search with homogeneous distractors children in the VI+nys group were less accurate than children with NV at smaller spacings. Group differences were even more pronounced during matrix search with heterogeneous distractors. Search times were longer in children with VI compared to children with NV. The more extended impairments during serial search reveal greater dependence on oculomotor control during serial compared to parallel search. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Is There a Limit to the Superiority of Individuals with ASD in Visual Search?

    ERIC Educational Resources Information Center

    Hessels, Roy S.; Hooge, Ignace T. C.; Snijders, Tineke M.; Kemner, Chantal

    2014-01-01

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an…

  20. Concurrent deployment of visual attention and response selection bottleneck in a dual-task: Electrophysiological and behavioural evidence.

    PubMed

    Reimer, Christina B; Strobach, Tilo; Schubert, Torsten

    2017-12-01

    Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features resulting in a serial search process around the items in the search display (i.e., set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP), which reflects lateralized visual attention processes. If the response selection processes in Task 1 influence the visual attention processes in Task 2, N2pc latency and amplitude would be delayed and attenuated at short SOA compared to long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was concurrently deployed to response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.

  1. Features in visual search combine linearly

    PubMed Central

    Pramod, R. T.; Arun, S. P.

    2014-01-01

    Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocals of reaction times) for their ability to predict multiple-feature searches. Multiple-feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable, i.e., that subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features—in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328
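
    In the co-activation model favoured here, the reciprocal of the multiple-feature search time is modelled as a weighted linear sum of the reciprocals of the corresponding single-feature search times, i.e. 1/RT_multi ≈ w1·(1/RT_intensity) + w2·(1/RT_length) + w3·(1/RT_orientation). The sketch below fits such weights by least squares on made-up reciprocal RTs; the numbers are placeholders, not data from Pramod and Arun (2014).

        import numpy as np

        # Hypothetical reciprocal search times (1/s) for several target-distractor pairs.
        # Columns: single-feature searches on intensity, length, orientation.
        single_feature = np.array([
            [1.2, 0.8, 1.0],
            [0.9, 1.1, 0.7],
            [1.5, 1.3, 1.2],
            [0.7, 0.6, 0.9],
            [1.1, 1.4, 1.3],
        ])
        multi_feature = np.array([1.6, 1.4, 2.1, 1.1, 2.0])  # observed 1/RT, hypothetical

        weights, *_ = np.linalg.lstsq(single_feature, multi_feature, rcond=None)
        predicted = single_feature @ weights
        r = np.corrcoef(predicted, multi_feature)[0, 1]
        print("weights:", np.round(weights, 2), " correlation with observed:", round(r, 2))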

  2. Working memory dependence of spatial contextual cueing for visual search.

    PubMed

    Pollmann, Stefan

    2018-05-10

    When spatial stimulus configurations repeat in visual search, a search facilitation, resulting in shorter search times, can be observed that is due to incidental learning. This contextual cueing effect appears to be rather implicit, uncorrelated with observers' explicit memory of display configurations. Nevertheless, as I review here, this search facilitation due to contextual cueing depends on visuospatial working memory resources, and it disappears when visuospatial working memory is loaded by a concurrent delayed match to sample task. However, the search facilitation immediately recovers for displays learnt under visuospatial working memory load when this load is removed in a subsequent test phase. Thus, latent learning of visuospatial configurations does not depend on visuospatial working memory, but the expression of learning, as memory-guided search in repeated displays, does. This working memory dependence has also consequences for visual search with foveal vision loss, where top-down controlled visual exploration strategies pose high demands on visuospatial working memory, in this way interfering with memory-guided search in repeated displays. Converging evidence for the contribution of working memory to contextual cueing comes from neuroimaging data demonstrating that distinct cortical areas along the intraparietal sulcus as well as more ventral parieto-occipital cortex are jointly activated by visual working memory and contextual cueing. © 2018 The British Psychological Society.

  3. Evaluating the Role of the Dorsolateral Prefrontal Cortex and Posterior Parietal Cortex in Memory-Guided Attention With Repetitive Transcranial Magnetic Stimulation.

    PubMed

    Wang, Min; Yang, Ping; Wan, Chaoyang; Jin, Zhenlan; Zhang, Junjun; Li, Ling

    2018-01-01

    The contents of working memory (WM) can affect the subsequent visual search performance, resulting in either beneficial or cost effects, when the visual search target is included in or spatially dissociated from the memorized contents, respectively. The right dorsolateral prefrontal cortex (rDLPFC) and the right posterior parietal cortex (rPPC) have been suggested to be associated with the congruence/incongruence effects of the WM content and the visual search target. Thus, in the present study, we investigated the role of the dorsolateral prefrontal cortex and the PPC in controlling the interaction between WM and attention during a visual search, using repetitive transcranial magnetic stimulation (rTMS). Subjects maintained a color in WM while performing a search task. The color cue contained the target (valid), the distractor (invalid) or did not reappear in the search display (neutral). Concurrent stimulation with the search onset showed that relative to rTMS over the vertex, rTMS over rPPC and rDLPFC further decreased the search reaction time, when the memory cue contained the search target. The results suggest that the rDLPFC and the rPPC are critical for controlling WM biases in human visual attention.

  4. Investigating the visual span in comparative search: the effects of task difficulty and divided attention.

    PubMed

    Pomplun, M; Reingold, E M; Shen, J

    2001-09-01

    In three experiments, participants' visual span was measured in a comparative visual search task in which they had to detect a local match or mismatch between two displays presented side by side. Experiment 1 manipulated the difficulty of the comparative visual search task by contrasting a mismatch detection task with a substantially more difficult match detection task. In Experiment 2, participants were tested in a single-task condition involving only the visual task and a dual-task condition in which they concurrently performed an auditory task. Finally, in Experiment 3, participants performed two dual-task conditions, which differed in the difficulty of the concurrent auditory task. Both the comparative search task difficulty (Experiment 1) and the divided attention manipulation (Experiments 2 and 3) produced strong effects on visual span size.

  5. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    PubMed

    Li, Yuanqing; Wang, Guangyi; Long, Jinyi; Yu, Zhuliang; Huang, Biao; Li, Xiaojian; Yu, Tianyou; Liang, Changhong; Li, Zheng; Sun, Pei

    2011-01-01

    One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility of brain patterns and the between-class discriminability of brain patterns, and thereby facilitated the neural representation of semantic categories or concepts. Furthermore, we analyzed the brain activity in superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
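
    As a rough illustration, a within-class reproducibility index can be computed as the mean pairwise correlation between voxel patterns of the same category, and discriminability as the accuracy of a simple leave-one-out nearest-centroid classifier. The sketch below implements both on synthetic patterns; the exact definitions and classifier used in the study may differ.

        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, n_voxels = 20, 100
        # Synthetic voxel patterns for two categories, each sharing a category "template"
        templates = rng.normal(size=(2, n_voxels))
        patterns = np.vstack([templates[c] + 0.8 * rng.normal(size=(n_trials, n_voxels))
                              for c in (0, 1)])
        labels = np.repeat([0, 1], n_trials)

        def reproducibility(pats):
            """Mean pairwise Pearson correlation within one category."""
            corr = np.corrcoef(pats)
            return corr[np.triu_indices_from(corr, k=1)].mean()

        def loo_nearest_centroid_accuracy(pats, labs):
            """Leave-one-out decoding with a correlation-based nearest-centroid rule."""
            hits = 0
            for i in range(len(labs)):
                mask = np.arange(len(labs)) != i
                cents = [pats[mask & (labs == c)].mean(axis=0) for c in (0, 1)]
                pred = int(np.corrcoef(pats[i], cents[1])[0, 1] >
                           np.corrcoef(pats[i], cents[0])[0, 1])
                hits += pred == labs[i]
            return hits / len(labs)

        print("reproducibility (class 0):", round(reproducibility(patterns[labels == 0]), 2))
        print("decoding accuracy:", loo_nearest_centroid_accuracy(patterns, labels))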

  7. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    ERIC Educational Resources Information Center

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  8. What Drives Memory-Driven Attentional Capture? The Effects of Memory Type, Display Type, and Search Type

    ERIC Educational Resources Information Center

    Olivers, Christian N. L.

    2009-01-01

    An important question is whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. Some past research has indicated that they do: Singleton distractors interfered more strongly with a visual search task when they…

  9. Quality metrics in high-dimensional data visualization: an overview and systematization.

    PubMed

    Bertini, Enrico; Tatu, Andrada; Keim, Daniel

    2011-12-01

    In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research. © 2010 IEEE
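
    As a concrete, greatly simplified illustration of the quality-metric idea surveyed here, the sketch below ranks candidate 2-D scatterplot views of a higher-dimensional dataset by a single metric; the choice of the silhouette score and the iris dataset are assumptions for the example, not metrics or data from the reviewed papers.

      # Sketch of the general approach (not a specific metric from the survey):
      # score every 2-D axis-pair view of a high-dimensional dataset with a
      # quality metric, then present the best-ranked views to the user first.
      from itertools import combinations
      from sklearn.datasets import load_iris
      from sklearn.metrics import silhouette_score

      X, y = load_iris(return_X_y=True)                # 4-D data with class labels
      views = []
      for i, j in combinations(range(X.shape[1]), 2):
          score = silhouette_score(X[:, [i, j]], y)    # class separation in view (i, j)
          views.append((score, i, j))

      for score, i, j in sorted(views, reverse=True)[:3]:
          print(f"axes ({i}, {j}): quality = {score:.3f}")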

  10. Inhibitory guidance in visual search: the case of movement-form conjunctions.

    PubMed

    Dent, Kevin; Allen, Harriet A; Braithwaite, Jason J; Humphreys, Glyn W

    2012-02-01

    We used a probe-dot procedure to examine the roles of excitatory attentional guidance and distractor suppression in search for movement-form conjunctions. Participants in Experiment 1 completed a conjunction (moving X amongst moving Os and static Xs) and two single-feature (moving X amongst moving Os, and static X amongst static Os) conditions. "Active" participants searched for the target, whereas "passive" participants viewed the displays without responding. Subsequently, both groups located (left or right) a probe dot appearing in either an occupied or an unoccupied location. In the conjunction condition, the active group located probes presented on static distractors more slowly than probes presented on moving distractors, reversing the direction of the difference found within the passive group. This disadvantage for probes on static items was much stronger in conjunction than in single-feature search. The same pattern of results was replicated in Experiment 2, which used a go/no-go procedure. Experiment 3 extended the go/no-go procedure to the case of search for a static target and revealed increased probe localisation times as a consequence of active search, primarily for probes on moving distractor items. The results demonstrated attentional guidance by inhibition of distractors in conjunction search.

  11. The impact of visual layout factors on performance in Web pages: a cross-language study.

    PubMed

    Parush, Avi; Shwarts, Yonit; Shtub, Avy; Chandra, M Jeya

    2005-01-01

    Visual layout has a strong impact on performance and is a critical factor in the design of graphical user interfaces (GUIs) and Web pages. Many design guidelines employed in Web page design were inherited from human performance literature and GUI design studies and practices. However, few studies have investigated the more specific patterns of performance with Web pages that may reflect some differences between Web page and GUI design. We investigated interactions among four visual layout factors in Web page design (quantity of links, alignment, grouping indications, and density) in two experiments: one with pages in Hebrew, entailing right-to-left reading, and the other with English pages, entailing left-to-right reading. Some performance patterns (measured by search times and eye movements) were similar between languages. Performance was particularly poor in pages with many links and variable densities, but it improved with the presence of uniform density. Alignment was not shown to be a performance-enhancing factor. The findings are discussed in terms of the similarities and differences in the impact of layout factors between GUIs and Web pages. Actual or potential applications of this research include specific guidelines for Web page design.

  12. INTERSPIA: a web application for exploring the dynamics of protein-protein interactions among multiple species.

    PubMed

    Kwon, Daehong; Lee, Daehwan; Kim, Juyeon; Lee, Jongin; Sim, Mikang; Kim, Jaebum

    2018-05-09

    Proteins perform biological functions through cascading interactions with each other by forming protein complexes. As a result, interactions among proteins, called protein-protein interactions (PPIs), are not completely free from selection constraint during evolution. Therefore, the identification and analysis of PPI changes during evolution can give us new insight into the evolution of functions. Although many algorithms, databases and websites have been developed to help the study of PPIs, most of them are limited to visualizing the structure and features of PPIs in a chosen single species, with limited functions from a visualization perspective. This leads to difficulties in the identification of different patterns of PPIs in different species and their functional consequences. To resolve these issues, we developed a web application, called INTER-Species Protein Interaction Analysis (INTERSPIA). Given a set of proteins of the user's interest, INTERSPIA first discovers additional proteins that are functionally associated with the input proteins and searches for different patterns of PPIs in multiple species through a server-side pipeline, and second visualizes the dynamics of PPIs in multiple species using an easy-to-use web interface. INTERSPIA is freely available at http://bioinfo.konkuk.ac.kr/INTERSPIA/.

  13. The Importance of the Eye Area in Face Identification Abilities and Visual Search Strategies in Persons with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Larsson, Matilda; Bjallmark, Anna; Falkmer, Torbjorn

    2010-01-01

    Partly claimed to explain social difficulties observed in people with Asperger syndrome, face identification and visual search strategies become important. Previous research findings are, however, disparate. In order to explore face identification abilities and visual search strategies, with special focus on the importance of the eye area, 24…

  14. Overcoming hurdles in translating visual search research between the lab and the field.

    PubMed

    Clark, Kait; Cain, Matthew S; Adamo, Stephen H; Mitroff, Stephen R

    2012-01-01

    Research in visual search can be vital to improving performance in careers such as radiology and airport security screening. In these applied, or "field," searches, accuracy is critical, and misses are potentially fatal; however, despite the importance of performing optimally, radiological and airport security searches are nevertheless flawed. Extensive basic research in visual search has revealed cognitive mechanisms responsible for successful visual search as well as a variety of factors that tend to inhibit or improve performance. Ideally, the knowledge gained from such laboratory-based research could be directly applied to field searches, but several obstacles stand in the way of straightforward translation; the tightly controlled visual searches performed in the lab can be drastically different from field searches. For example, they can differ in terms of the nature of the stimuli, the environment in which the search is taking place, and the experience and characteristics of the searchers themselves. The goal of this chapter is to discuss these differences and how they can present hurdles to translating lab-based research to field-based searches. Specifically, most search tasks in the lab entail searching for only one target per trial, and the targets occur relatively frequently, but field searches may contain an unknown and unlimited number of targets, and the occurrence of targets can be rare. Additionally, participants in lab-based search experiments often perform under neutral conditions and have no formal training or experience in search tasks; conversely, career searchers may be influenced by the motivation to perform well or anxiety about missing a target, and they have undergone formal training and accumulated significant experience searching. This chapter discusses recent work that has investigated the impacts of these differences to determine how each factor can influence search performance. Knowledge gained from the scientific exploration of search can be applied to field searches but only when considering and controlling for the differences between lab and field.

  15. Changes in search rate but not in the dynamics of exogenous attention in action videogame players.

    PubMed

    Hubert-Wallander, Bjorn; Green, C Shawn; Sugarman, Michael; Bavelier, Daphne

    2011-11-01

    Many previous studies have shown that the speed of processing in attentionally demanding tasks seems enhanced following habitual action videogame play. However, using one of the diagnostic tasks for efficiency of attentional processing, a visual search task, Castel and collaborators (Castel, Pratt, & Drummond, Acta Psychologica 119:217-230, 2005) reported no difference in visual search rates, instead proposing that action gaming may change response execution time rather than the efficiency of visual selective attention per se. Here we used two hard visual search tasks, one measuring reaction time and the other accuracy, to test whether visual search rate may be changed by action videogame play. We found greater search rates in the gamer group than in the nongamer controls, consistent with increased efficiency in visual selective attention. We then asked how general the change in attentional throughput noted so far in gamers might be by testing whether exogenous attentional cues would lead to a disproportional enhancement in throughput in gamers as compared to nongamers. Interestingly, exogenous cues were found to enhance throughput equivalently between gamers and nongamers, suggesting that not all mechanisms known to enhance throughput are similarly enhanced in action videogamers.
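
    "Search rate" in studies like this one is conventionally summarized as the slope of reaction time against set size; a minimal sketch of that computation (with made-up numbers, not data from the study) follows.

      # Illustrative only: estimate a search rate as the slope of mean correct
      # reaction time against display set size (ms per item).
      import numpy as np

      set_size = np.array([4, 8, 16, 32])              # items in the display
      rt_ms = np.array([620.0, 700.0, 845.0, 1150.0])  # toy mean correct RTs

      slope, intercept = np.polyfit(set_size, rt_ms, 1)
      print(f"search rate ~ {slope:.1f} ms/item, intercept ~ {intercept:.0f} ms")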

  16. ERP effects of spatial attention and display search with unilateral and bilateral stimulus displays.

    PubMed

    Lange, J J; Wijers, A A; Mulder, L J; Mulder, G

    1999-07-01

    Two experiments were performed in which the effects of selective spatial attention on the ERPs elicited by unilateral and bilateral stimulus arrays were compared. In Experiment 1, subjects received a series of grating patterns. In the unilateral condition these gratings were presented one at a time, randomly to the right or left of fixation. In the bilateral condition, gratings were presented in pairs, one to each side of fixation. In the unilateral condition standard ERP effects of visual spatial attention were observed. However, in the bilateral condition we failed to observe an attention related posterior contralateral positivity (overlapping the P1 and N1 components, latency interval about 100-250 ms), as reported in several previous studies. In Experiment 2, we investigated whether attention related ERP lateralizations are affected by the task requirement to search among multiple objects in the visual field. We employed a task paradigm identical to that used by Luck et al. (Luck, S.J., Heinze, H.J., Mangun, G.R., Hillyard, S.A., 1990. Visual event-related potentials index focused attention within bilateral stimulus arrays. II. Functional dissociation of P1 and N1 components. Electroencephalogr. Clin. Neurophysiol. 75, 528-542). Four letters were presented to a visual hemifield, simultaneously to both the attended and unattended hemifields in the bilateral conditions, and to one hemifield only in the unilateral conditions. In a focused attention condition, subjects searched for a target letter at a fixed position, whereas they searched for the target letter among all four letters in the divided attention condition (as in the experiment of Luck et al., 1990). In the bilateral focused attention condition, only the contralateral P1 was enhanced. In the bilateral divided attention condition a prolonged posterior positivity was observed over the hemisphere contralateral to the attended hemifield, comparable to the results of Luck et al. (1990). A comparison of the ERPs elicited in the focused and divided attention conditions revealed a prolonged 'search related negativity'. We discuss possible interactions between this negativity and attention related lateralizations. The display search negativity consisted of two phases, one phase comprised a midline occipital negativity, developing first over the ipsilateral scalp, while the second phase involved two symmetrical occipitotemporal negativities, strongly resembling the N1 in their topography. The display search effect could be modelled with a dipole in a medial occipital (possibly striate) region and two symmetrical dipoles in occipitotemporal brain areas. We hypothesize that this effect reflects a process of rechecking the decaying information of iconic memory in the occipitotemporal object recognition pathway.

  17. Visual Search for Faces with Emotional Expressions

    ERIC Educational Resources Information Center

    Frischen, Alexandra; Eastwood, John D.; Smilek, Daniel

    2008-01-01

    The goal of this review is to critically examine contradictory findings in the study of visual search for emotionally expressive faces. Several key issues are addressed: Can emotional faces be processed preattentively and guide attention? What properties of these faces influence search efficiency? Is search moderated by the emotional state of the…

  18. More insight into the interplay of response selection and visual attention in dual-tasks: masked visual search and response selection are performed in parallel.

    PubMed

    Reimer, Christina B; Schubert, Torsten

    2017-09-15

    Both response selection and visual attention are limited in capacity. According to the central bottleneck model, the response selection processes of two tasks in a dual-task situation are performed sequentially. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color and form), which results in a serial search process. Search time increases as items are added to the search display (i.e., set size effect). When the search display is masked, visual attention deployment is restricted to a brief period of time and target detection decreases as a function of set size. Here, we investigated whether response selection and visual attention (i.e., feature binding) rely on a common or on distinct capacity limitations. In four dual-task experiments, participants completed an auditory Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (Stimulus Onset Asynchrony, SOA). In Experiment 1, Task 1 was a two-choice discrimination task and the conjunction search display was not masked. In Experiment 2, the response selection difficulty in Task 1 was increased to a four-choice discrimination and the search task was the same as in Experiment 1. We applied the locus-of-slack method in both experiments to analyze conjunction search time, that is, we compared the set size effects across SOAs. Similar set size effects across SOAs (i.e., additive effects of SOA and set size) would indicate sequential processing of response selection and visual attention. However, a significantly smaller set size effect at short SOA compared to long SOA (i.e., underadditive interaction of SOA and set size) would indicate parallel processing of response selection and visual attention. In both experiments, we found underadditive interactions of SOA and set size. In Experiments 3 and 4, the conjunction search display in Task 2 was masked. Task 1 was the same as in Experiments 1 and 2, respectively. In both experiments, the d' analysis revealed that response selection did not affect target detection. Overall, Experiments 1-4 indicated that neither the response selection difficulty in the auditory Task 1 (i.e., two-choice vs. four-choice) nor the type of presentation of the search display in Task 2 (i.e., not masked vs. masked) impaired parallel processing of response selection and conjunction search. We concluded that in general, response selection and visual attention (i.e., feature binding) rely on distinct capacity limitations.
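
    The logic of the locus-of-slack comparison (additive versus underadditive effects of SOA and set size) can be sketched as follows; the per-subject slopes, the paired test, and the simulated numbers are a simplified stand-in for the authors' analyses, not a reproduction of them.

      # Hedged sketch of the locus-of-slack logic (not the authors' statistics):
      # estimate each participant's set-size slope at the short and long SOA and
      # test whether the slope shrinks at the short SOA, i.e. an underadditive
      # SOA x set-size interaction indicating parallel processing.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n_subjects = 20

      def simulated_slopes(mean_slope_ms_per_item):
          """Toy per-subject set-size slopes scattered around a given mean."""
          return mean_slope_ms_per_item + rng.normal(0, 5, n_subjects)

      slope_short_soa = simulated_slopes(20.0)   # smaller set-size effect at short SOA
      slope_long_soa = simulated_slopes(35.0)

      t, p = stats.ttest_rel(slope_short_soa, slope_long_soa)
      print(f"short SOA = {slope_short_soa.mean():.1f} ms/item, "
            f"long SOA = {slope_long_soa.mean():.1f} ms/item, t = {t:.2f}, p = {p:.4f}")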

  19. Visual search in scenes involves selective and non-selective pathways

    PubMed Central

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734

  20. Looking for ideas: Eye behavior during goal-directed internally focused cognition☆

    PubMed Central

    Walcher, Sonja; Körner, Christof; Benedek, Mathias

    2017-01-01

    Humans have a highly developed visual system, yet we spend a high proportion of our time awake ignoring the visual world and attending to our own thoughts. The present study examined eye movement characteristics of goal-directed internally focused cognition. Deliberate internally focused cognition was induced by an idea generation task. A letter-by-letter reading task served as external task. Idea generation (vs. reading) was associated with more and longer blinks and fewer microsaccades indicating an attenuation of visual input. Idea generation was further associated with more and shorter fixations, more saccades and saccades with higher amplitudes as well as heightened stimulus-independent variation of eye vergence. The latter results suggest a coupling of eye behavior to internally generated information and associated cognitive processes, i.e. searching for ideas. Our results support eye behavior patterns as indicators of goal-directed internally focused cognition through mechanisms of attenuation of visual input and coupling of eye behavior to internally generated information. PMID:28689088
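
    Eye-movement measures such as blink counts and durations are typically derived from the raw tracker signal; the sketch below shows one common, simplified convention (runs of missing pupil samples treated as blinks). The sampling rate and data layout are assumptions, not details from the study.

      # Minimal sketch, unrelated to the authors' software: count blinks and their
      # durations by treating runs of missing pupil samples (NaN) as blinks.
      import numpy as np

      FS = 500.0                                        # samples per second (assumed)
      pupil = np.array([3.1, 3.2, np.nan, np.nan, np.nan, 3.0, 3.1,
                        np.nan, np.nan, 3.2, 3.3])      # toy pupil-size trace

      missing = np.isnan(pupil).astype(int)
      edges = np.diff(np.concatenate(([0], missing, [0])))
      starts = np.where(edges == 1)[0]                  # blink onsets (sample index)
      ends = np.where(edges == -1)[0]                   # blink offsets
      durations_ms = (ends - starts) / FS * 1000.0

      print(f"{len(starts)} blinks, mean duration {durations_ms.mean():.1f} ms")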

  1. More than a memory: Confirmatory visual search is not caused by remembering a visual feature.

    PubMed

    Rajsic, Jason; Pratt, Jay

    2017-10-01

    Previous research has demonstrated a preference for positive over negative information in visual search; asking whether a target object is green biases search towards green objects, even when this entails more perceptual processing than searching non-green objects. The present study investigated whether this confirmatory search bias is due to the presence of one particular (e.g., green) color in memory during search. Across two experiments, we show that this is not the critical factor in generating a confirmation bias in search. Search slowed proportionally to the number of stimuli whose color matched the color held in memory only when the color was remembered as part of the search instructions. These results suggest that biased search for information is due to a particular attentional selection strategy, and not to memory-driven attentional biases. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Different predictors of multiple-target search accuracy between nonprofessional and professional visual searchers.

    PubMed

    Biggs, Adam T; Mitroff, Stephen R

    2014-01-01

    Visual search, locating target items among distractors, underlies daily activities ranging from critical tasks (e.g., looking for dangerous objects during security screening) to commonplace ones (e.g., finding your friends in a crowded bar). Both professional and nonprofessional individuals conduct visual searches, and the present investigation is aimed at understanding how they perform similarly and differently. We administered a multiple-target visual search task to both professional (airport security officers) and nonprofessional participants (members of the Duke University community) to determine how search abilities differ between these populations and what factors might predict accuracy. There were minimal overall accuracy differences, although the professionals were generally slower to respond. However, the factors that predicted accuracy varied drastically between groups; variability in search consistency-how similarly an individual searched from trial to trial in terms of speed-best explained accuracy for professional searchers (more consistent professionals were more accurate), whereas search speed-how long an individual took to complete a search when no targets were present-best explained accuracy for nonprofessional searchers (slower nonprofessionals were more accurate). These findings suggest that professional searchers may utilize different search strategies from those of nonprofessionals, and that search consistency, in particular, may provide a valuable tool for enhancing professional search accuracy.
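
    The "search consistency" predictor described above is, in essence, a measure of trial-to-trial variability in search times; a hedged sketch of how such a measure could be computed and related to accuracy is given below, using simulated values rather than the study's data.

      # Illustrative sketch (simulated data, not the authors' analysis): summarize
      # each searcher's trial-to-trial variability as a coefficient of variation of
      # response times and correlate it with overall accuracy.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      n_subj, n_trials = 30, 100
      rts = rng.gamma(shape=8.0, scale=300.0, size=(n_subj, n_trials))  # ms, toy data

      consistency = rts.std(axis=1) / rts.mean(axis=1)    # lower = more consistent
      accuracy = 0.9 - 0.3 * consistency + rng.normal(0, 0.03, n_subj)  # toy link

      r, p = stats.pearsonr(consistency, accuracy)
      print(f"consistency-accuracy correlation: r = {r:.2f}, p = {p:.4f}")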

  3. Visual Foraging With Fingers and Eye Gaze

    PubMed Central

    Thornton, Ian M.; Smith, Irene J.; Chetverikov, Andrey; Kristjánsson, Árni

    2016-01-01

    A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging where observers cancel a set of predesignated targets by tapping them, to gaze foraging where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features where observers tended to stick to one target type. The pattern was notably different during gaze foraging where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints. PMID:27433323

  4. Encouraging top-down attention in visual search: A developmental perspective.

    PubMed

    Lookadoo, Regan; Yang, Yingying; Merrill, Edward C

    2017-10-01

    Four experiments are reported in which 60 younger children (7-8 years old), 60 older children (10-11 years old), and 60 young adults (18-25 years old) performed a conjunctive visual search task (15 per group in each experiment). The number of distractors of each feature type was unbalanced across displays to evaluate participants' ability to restrict search to the smaller subset of features. The use of top-down attention processes to restrict search was encouraged by providing external aids for identifying and maintaining attention on the smaller set. In Experiment 1, no external assistance was provided. In Experiment 2, precues and instructions were provided to focus attention on that subset. In Experiment 3, trials in which the smaller subset was represented by the same feature were presented in alternating blocks to eliminate the need to switch attention between features from trial to trial. In Experiment 4, consecutive blocks of the same subset features were presented in the first or second half of the experiment, providing additional consistency. All groups benefited from external support of top-down attention, although the pattern of improvement varied across experiments. The younger children benefited most from precues and instruction, using the subset search strategy when instructed. Furthermore, younger children benefited from blocking trials only when blocks of the same features did not alternate. Older participants benefited from the blocking of trials in both Experiments 3 and 4, but not from precues and instructions. Hence, our results revealed both malleability and limits of children's top-down control of attention.

  5. Visual search accelerates during adolescence.

    PubMed

    Burggraaf, Rudolf; van der Geest, Jos N; Frens, Maarten A; Hooge, Ignace T C

    2018-05-01

    We studied changes in visual-search performance and behavior during adolescence. Search performance was analyzed in terms of reaction time and response accuracy. Search behavior was analyzed in terms of the objects fixated and the duration of these fixations. A large group of adolescents (N = 140; age: 12-19 years; 47% female, 53% male) participated in a visual-search experiment in which their eye movements were recorded with an eye tracker. The experiment consisted of 144 trials (50% with a target present), and participants had to decide whether a target was present. Each trial showed a search display with 36 Gabor patches placed on a hexagonal grid. The target was a vertically oriented element with a high spatial frequency. Nontargets differed from the target in spatial frequency, orientation, or both. Search performance and behavior changed during adolescence; with increasing age, fixation duration and reaction time decreased. Response accuracy, number of fixations, and selection of elements to fixate upon did not change with age. Thus, the speed of foveal discrimination increases with age, while the efficiency of peripheral selection does not change. We conclude that the way visual information is gathered does not change during adolescence, but the processing of visual information becomes faster.

  6. Combining local and global limitations of visual search.

    PubMed

    Põder, Endel

    2017-04-01

    There are different opinions about the roles of local interactions and central processing capacity in visual search. This study attempts to clarify the problem using a new version of relevant set cueing. A central precue indicates two symmetrical segments (that may contain a target object) within a circular array of objects presented briefly around the fixation point. The number of objects in the relevant segments, and density of objects in the array were varied independently. Three types of search experiments were run: (a) search for a simple visual feature (color, size, and orientation); (b) conjunctions of simple features; and (c) spatial configuration of simple features (rotated Ts). For spatial configuration stimuli, the results were consistent with a fixed global processing capacity and standard crowding zones. For simple features and their conjunctions, the results were different, dependent on the features involved. While color search exhibits virtually no capacity limits or crowding, search for an orientation target was limited by both. Results for conjunctions of features can be partly explained by the results from the respective features. This study shows that visual search is limited by both local interference and global capacity, and the limitations are different for different visual features.

  7. Modeling peripheral vision for moving target search and detection.

    PubMed

    Yang, Ji Hyun; Huston, Jesse; Day, Michael; Balogh, Imre

    2012-06-01

    Most target search and detection models focus on foveal vision. In reality, peripheral vision plays a significant role, especially in detecting moving objects. Twenty-three subjects participated in experiments simulating target detection tasks in urban and rural environments while their gaze parameters were tracked. Button responses associated with foveal object and peripheral object (PO) detection and recognition were recorded. In an urban scenario, pedestrians appearing in the periphery holding guns were threats and pedestrians with empty hands were non-threats. In a rural scenario, non-U.S. unmanned aerial vehicles (UAVs) were considered threats and U.S. UAVs non-threats. On average, subjects missed detecting 2.48 POs among 50 POs in the urban scenario and 5.39 POs in the rural scenario. Both saccade reaction time and button reaction time can be predicted by peripheral angle and entrance speed of POs. Fast moving objects were detected faster than slower objects and POs appearing at wider angles took longer to detect than those closer to the gaze center. A second-order mixed-effect model was applied to provide each subject's prediction model for peripheral target detection performance as a function of eccentricity angle and speed. About half the subjects used active search patterns while the other half used passive search patterns. An interactive 3-D visualization tool was developed to provide a representation of macro-scale head and gaze movement in the search and target detection task. An experimentally validated stochastic model of peripheral vision in realistic target detection scenarios was developed.
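
    To make the modelling approach concrete, here is a hedged sketch of a mixed-effects regression of detection reaction time on eccentricity and speed with a per-subject random intercept; the variable names, simulated data, and use of statsmodels are assumptions for illustration, not the authors' actual second-order model.

      # Hedged sketch (simulated data, not the study's model): reaction time as a
      # function of peripheral eccentricity and object speed, with a random
      # intercept per subject, fitted with a linear mixed-effects model.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n_subj, n_trials = 10, 40
      df = pd.DataFrame({
          "subject": np.repeat(np.arange(n_subj), n_trials),
          "eccentricity": rng.uniform(5, 60, n_subj * n_trials),   # degrees
          "speed": rng.uniform(1, 15, n_subj * n_trials),          # deg/s
      })
      subject_offset = rng.normal(0, 50, n_subj)[df["subject"]]    # random intercepts
      df["rt_ms"] = (400 + 6 * df["eccentricity"] - 8 * df["speed"]
                     + subject_offset + rng.normal(0, 60, len(df)))

      model = smf.mixedlm("rt_ms ~ eccentricity + speed", df, groups=df["subject"])
      print(model.fit().summary())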

  8. Cross-indexing of binary SIFT codes for large-scale image search.

    PubMed

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost for storage. Besides, it benefits the computational efficiency since the computation of similarity can be efficiently measured by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new searching strategy to find target features based on the cross-indexing in the binary SIFT space and original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
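
    The efficiency argument in this abstract rests on the fact that Hamming distances between binary codes can be computed with XOR and a population count; a small sketch of that comparison (with random toy codes, not the FSB algorithm itself) follows.

      # Illustration of why binary codes are cheap to compare (random toy codes,
      # not the FSB binarization): Hamming distance via XOR on packed bits.
      import numpy as np

      rng = np.random.default_rng(3)
      codes = rng.integers(0, 2, size=(1000, 256), dtype=np.uint8)   # database codes
      query = rng.integers(0, 2, size=256, dtype=np.uint8)           # query code

      packed_db = np.packbits(codes, axis=1)       # 256 bits -> 32 bytes per code
      packed_q = np.packbits(query)

      hamming = np.unpackbits(packed_db ^ packed_q, axis=1).sum(axis=1)
      print("nearest code index:", hamming.argmin(), "distance:", int(hamming.min()))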

  9. Visual search for feature and conjunction targets with an attention deficit.

    PubMed

    Arguin, M; Joanette, Y; Cavanagh, P

    1993-01-01

    Brain-damaged subjects who had previously been identified as suffering from a visual attention deficit for contralesional stimulation were tested on a series of visual search tasks. The experiments examined the hypothesis that the processing of single features is preattentive but that feature integration, necessary for the correct perception of conjunctions of features, requires attention (Treisman & Gelade, 1980; Treisman & Sato, 1990). Subjects searched for a feature target (orientation or color) or for a conjunction target (orientation and color) in unilateral displays in which the number of items presented was variable. Ocular fixation was controlled so that trials on which eye movements occurred were cancelled. While brain-damaged subjects with a visual attention disorder (VAD subjects) performed similarly to normal controls in feature search tasks, they showed a marked deficit in conjunction search. Specifically, VAD subjects exhibited an important reduction of their serial search rates for a conjunction target with contralesional displays. In support of Treisman's feature integration theory, a visual attention deficit leads to a marked impairment in feature integration whereas it does not appear to affect feature encoding.

  10. Comparing visual search and eye movements in bilinguals and monolinguals

    PubMed Central

    Hout, Michael C.; Walenchok, Stephen C.; Azuma, Tamiko; Goldinger, Stephen D.

    2017-01-01

    Recent research has suggested that bilinguals show advantages over monolinguals in visual search tasks, although these findings have been derived from global behavioral measures of accuracy and response times. In the present study we sought to explore the bilingual advantage by using more sensitive eyetracking techniques across three visual search experiments. These spatially and temporally fine-grained measures allowed us to carefully investigate any nuanced attentional differences between bilinguals and monolinguals. Bilingual and monolingual participants completed visual search tasks that varied in difficulty. The experiments required participants to make careful discriminations in order to detect target Landolt Cs among similar distractors. In Experiment 1, participants performed both feature and conjunction search. In Experiments 2 and 3, participants performed visual search while making different types of speeded discriminations, after either locating the target or mentally updating a constantly changing target. The results across all experiments revealed that bilinguals and monolinguals were equally efficient at guiding attention and generating responses. These findings suggest that the bilingual advantage does not reflect a general benefit in attentional guidance, but could reflect more efficient guidance only under specific task demands. PMID:28508116

  11. Visualizing a High Recall Search Strategy Output for Undergraduates in an Exploration Stage of Researching a Term Paper.

    ERIC Educational Resources Information Center

    Cole, Charles; Mandelblatt, Bertie; Stevenson, John

    2002-01-01

    Discusses high recall search strategies for undergraduates and how to overcome information overload that results. Highlights include word-based versus visual-based schemes; five summarization and visualization schemes for presenting information retrieval citation output; and results of a study that recommend visualization schemes geared toward…

  12. Visual Search Across the Life Span

    ERIC Educational Resources Information Center

    Hommel, Bernhard; Li, Karen Z. H.; Li, Shu-Chen

    2004-01-01

    Gains and losses in visual search were studied across the life span in a representative sample of 298 individuals from 6 to 89 years of age. Participants searched for single-feature and conjunction targets of high or low eccentricity. Search was substantially slowed early and late in life, age gradients were more pronounced in conjunction than in…

  13. White matter tract integrity predicts visual search performance in young and older adults.

    PubMed

    Bennett, Ilana J; Motes, Michael A; Rao, Neena K; Rypma, Bart

    2012-02-01

    Functional imaging research has identified frontoparietal attention networks involved in visual search, with mixed evidence regarding whether different networks are engaged when the search target differs from distracters by a single (elementary) versus multiple (conjunction) features. Neural correlates of visual search, and their potential dissociation, were examined here using integrity of white matter connecting the frontoparietal networks. The effect of aging on these brain-behavior relationships was also of interest. Younger and older adults performed a visual search task and underwent diffusion tensor imaging (DTI) to reconstruct 2 frontoparietal (superior and inferior longitudinal fasciculus; SLF and ILF) and 2 midline (genu, splenium) white matter tracts. As expected, results revealed age-related declines in conjunction, but not elementary, search performance; and in ILF and genu tract integrity. Importantly, integrity of the superior longitudinal fasciculus, ILF, and genu tracts predicted search performance (conjunction and elementary), with no significant age group differences in these relationships. Thus, integrity of white matter tracts connecting frontoparietal attention networks contributes to search performance in younger and older adults. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. White Matter Tract Integrity Predicts Visual Search Performance in Young and Older Adults

    PubMed Central

    Bennett, Ilana J.; Motes, Michael A.; Rao, Neena K.; Rypma, Bart

    2011-01-01

    Functional imaging research has identified fronto-parietal attention networks involved in visual search, with mixed evidence regarding whether different networks are engaged when the search target differs from distracters by a single (elementary) versus multiple (conjunction) features. Neural correlates of visual search, and their potential dissociation, were examined here using integrity of white matter connecting the fronto-parietal networks. The effect of aging on these brain-behavior relationships was also of interest. Younger and older adults performed a visual search task and underwent diffusion tensor imaging (DTI) to reconstruct two fronto-parietal (superior and inferior longitudinal fasciculus, SLF and ILF) and two midline (genu, splenium) white matter tracts. As expected, results revealed age-related declines in conjunction, but not elementary, search performance; and in ILF and genu tract integrity. Importantly, integrity of the SLF, ILF, and genu tracts predicted search performance (conjunction and elementary), with no significant age group differences in these relationships. Thus, integrity of white matter tracts connecting fronto-parietal attention networks contributes to search performance in younger and older adults. PMID:21402431

  15. How Attention Affects Spatial Resolution

    PubMed Central

    Carrasco, Marisa; Barbot, Antoine

    2015-01-01

    We summarize and discuss a series of psychophysical studies on the effects of spatial covert attention on spatial resolution, our ability to discriminate fine patterns. Heightened resolution is beneficial in most, but not all, visual tasks. We show how endogenous attention (voluntary, goal driven) and exogenous attention (involuntary, stimulus driven) affect performance on a variety of tasks mediated by spatial resolution, such as visual search, crowding, acuity, and texture segmentation. Exogenous attention is an automatic mechanism that increases resolution regardless of whether it helps or hinders performance. In contrast, endogenous attention flexibly adjusts resolution to optimize performance according to task demands. We illustrate how psychophysical studies can reveal the underlying mechanisms of these effects and allow us to draw linking hypotheses with known neurophysiological effects of attention. PMID:25948640

  16. Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2012-01-01

    Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…

  17. Motivation and short-term memory in visual search: Attention's accelerator revisited.

    PubMed

    Schneider, Daniel; Bonmassar, Claudia; Hickey, Clayton

    2018-05-01

    A cue indicating the possibility of cash reward will cause participants to perform memory-based visual search more efficiently. A recent study has suggested that this performance benefit might reflect the use of multiple memory systems: when needed, participants may maintain the to-be-remembered object in both long-term and short-term visual memory, with this redundancy benefitting target identification during search (Reinhart, McClenahan & Woodman, 2016). Here we test this compelling hypothesis. We had participants complete a memory-based visual search task involving a reward cue that either preceded presentation of the to-be-remembered target (pre-cue) or followed it (retro-cue). Following earlier work, we tracked memory representation using two components of the event-related potential (ERP): the contralateral delay activity (CDA), reflecting short-term visual memory, and the anterior P170, reflecting long-term storage. We additionally tracked attentional preparation and deployment in the contingent negative variation (CNV) and N2pc, respectively. Results show that only the reward pre-cue impacted our ERP indices of memory. However, both types of cue elicited a robust CNV, reflecting an influence on task preparation, both had equivalent impact on deployment of attention to the target, as indexed in the N2pc, and both had equivalent impact on visual search behavior. Reward prospect thus has an influence on memory-guided visual search, but this does not appear to be necessarily mediated by a change in the visual memory representations indexed by CDA. Our results demonstrate that the impact of motivation on search is not a simple product of improved memory for target templates. Copyright © 2017 Elsevier Ltd. All rights reserved.
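
    Lateralized ERP components such as the CDA and N2pc referenced here are conventionally quantified as contralateral-minus-ipsilateral difference waves averaged over a measurement window; the sketch below illustrates that computation on simulated data (the sampling rate, window, and amplitudes are assumptions, not values from the study).

      # Minimal sketch (simulated single-channel data, not the authors' pipeline):
      # compute a contralateral-minus-ipsilateral difference wave and average it
      # over a typical CDA measurement window.
      import numpy as np

      FS = 250                                      # Hz (assumed)
      t = np.arange(-0.2, 0.8, 1 / FS)              # seconds relative to array onset
      rng = np.random.default_rng(5)
      contra = rng.normal(0, 1, t.size) - 0.8 * ((t > 0.3) & (t < 0.7))  # toy microvolts
      ipsi = rng.normal(0, 1, t.size)

      diff_wave = contra - ipsi
      window = (t >= 0.4) & (t <= 0.7)              # assumed CDA window
      print(f"mean CDA amplitude = {diff_wave[window].mean():.2f} uV")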

  18. Investigating the role of the superior colliculus in active vision with the visual search paradigm.

    PubMed

    Shen, Kelly; Valero, Jerome; Day, Gregory S; Paré, Martin

    2011-06-01

    We review here both the evidence that the functional visuomotor organization of the optic tectum is conserved in the primate superior colliculus (SC) and the evidence for the linking proposition that SC discriminating activity instantiates saccade target selection. We also present new data in response to questions that arose from recent SC visual search studies. First, we observed that SC discriminating activity predicts saccade initiation when monkeys perform an unconstrained search for a target defined by either a single visual feature or a conjunction of two features. Quantitative differences between the results in these two search tasks suggest, however, that SC discriminating activity does not only reflect saccade programming. This finding concurs with visual search studies conducted in posterior parietal cortex and the idea that, during natural active vision, visual attention is shifted concomitantly with saccade programming. Second, the analysis of a large neuronal sample recorded during feature search revealed that visual neurons in the superficial layers do possess discriminating activity. In addition, the hypotheses that there are distinct types of SC neurons in the deeper layers and that they are differently involved in saccade target selection were not substantiated. Third, we found that the discriminating quality of single-neuron activity substantially surpasses the ability of the monkeys to discriminate the target from distracters, raising the possibility that saccade target selection is a noisy process. We discuss these new findings in light of the visual search literature and the view that the SC is a visual salience map for orienting eye movements. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  19. Monitoring Processes in Visual Search Enhanced by Professional Experience: The Case of Orange Quality-Control Workers

    PubMed Central

    Visalli, Antonino; Vallesi, Antonino

    2018-01-01

    Visual search tasks have often been used to investigate how cognitive processes change with expertise. Several studies have shown visual experts' advantages in detecting objects related to their expertise. Here, we tried to extend these findings by investigating whether professional search experience could boost top-down monitoring processes involved in visual search, independently of advantages specific to objects of expertise. To this aim, we recruited a group of quality-control workers employed in citrus farms. Given the specific features of this type of job, we expected that the extensive employment of monitoring mechanisms during orange selection could enhance these mechanisms even in search situations in which orange-related expertise is not suitable. To test this hypothesis, we compared the performance of our experimental group and of a well-matched control group on a computerized visual search task. In one block the target was an orange (expertise target) while in the other block the target was a Smurfette doll (neutral target). The a priori hypothesis was to find an advantage for quality-controllers in those situations in which monitoring was especially involved, that is, when deciding the presence/absence of the target required a more extensive inspection of the search array. Results were consistent with our hypothesis. Quality-controllers were faster in those conditions that extensively required monitoring processes, specifically, the Smurfette-present and both target-absent conditions. No differences emerged in the orange-present condition, which turned out to rely mainly on bottom-up processes. These results suggest that top-down processes in visual search can be enhanced through immersive real-life experience beyond visual expertise advantages. PMID:29497392

  20. Effect of attention on the detection and identification of masked spatial patterns.

    PubMed

    Põder, Endel

    2005-01-01

    The effect of attention on the detection and identification of vertically and horizontally oriented Gabor patterns in the condition of simultaneous masking with obliquely oriented Gabors was studied. Attention was manipulated by varying the set size in a visual-search experiment. In the first experiment, small target Gabors were presented on the background of larger masking Gabors. In the detection task, the effect of set size was as predicted by unlimited-capacity signal detection theory. In the orientation identification task, increasing the set size from 1 to 8 resulted in a much larger decline in performance. The results of the additional experiments suggest that attention can reduce the crowding effect of maskers.
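
    The "unlimited-capacity signal detection theory" prediction mentioned for the detection task can be written down explicitly with the textbook independent-channels max rule; the worked sketch below (with an arbitrary d' and criterion, not the paper's fitted values) shows how accuracy is expected to fall with set size even without any capacity limit.

      # Worked sketch of the textbook max-rule prediction (not the paper's fit):
      # each item yields an independent Gaussian sample, distractors ~ N(0, 1) and
      # the target ~ N(d', 1); the observer says "present" if the maximum sample
      # exceeds a criterion c, so hits and false alarms depend on set size n.
      from scipy.stats import norm

      d_prime, criterion = 2.0, 1.0                # arbitrary illustrative values
      for n in (1, 2, 4, 8):
          p_fa = 1 - norm.cdf(criterion) ** n
          p_hit = 1 - norm.cdf(criterion - d_prime) * norm.cdf(criterion) ** (n - 1)
          print(f"set size {n}: hit = {p_hit:.3f}, false alarm = {p_fa:.3f}")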

  1. Slowed Search in the Context of Unimpaired Grouping in Autism: Evidence from Multiple Conjunction Search.

    PubMed

    Keehn, Brandon; Joseph, Robert M

    2016-03-01

    In multiple conjunction search, the target is not known in advance but is defined only with respect to the distractors in a given search array, thus reducing the contributions of bottom-up and top-down attentional and perceptual processes during search. This study investigated whether the superior visual search skills typically demonstrated by individuals with autism spectrum disorder (ASD) would be evident in multiple conjunction search. Thirty-two children with ASD and 32 age- and nonverbal IQ-matched typically developing (TD) children were administered a multiple conjunction search task. Contrary to findings from the large majority of studies on visual search in ASD, response times of individuals with ASD were significantly slower than those of their TD peers. Evidence of slowed performance in ASD suggests that the mechanisms responsible for superior ASD performance in other visual search paradigms are not available in multiple conjunction search. Although the ASD group failed to exhibit superior performance, they showed efficient search and intertrial priming levels similar to the TD group. Efficient search indicates that ASD participants were able to group distractors into distinct subsets. In summary, while demonstrating grouping and priming effects comparable to those exhibited by their TD peers, children with ASD were slowed in their performance on a multiple conjunction search task, suggesting that their usual superior performance in visual search tasks is specifically dependent on top-down and/or bottom-up attentional and perceptual processes. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  2. Contextual Cueing Effect in Spatial Layout Defined by Binocular Disparity

    PubMed Central

    Zhao, Guang; Zhuang, Qian; Ma, Jie; Tu, Shen; Liu, Qiang; Sun, Hong-jin

    2017-01-01

    Repeated visual context induces higher search efficiency, revealing a contextual cueing effect, which depends on the association between the target and its visual context. In this study, participants performed a visual search task where search items were presented with depth information defined by binocular disparity. When the 3-dimensional (3D) configurations were repeated over blocks, the contextual cueing effect was obtained (Experiment 1). When depth information was in chaos over repeated configurations, visual search was not facilitated and the contextual cueing effect was largely crippled (Experiment 2). However, when we gave the search items a tiny random displacement in the 2-dimensional (2D) plane but maintained the depth information constant, the contextual cueing was preserved (Experiment 3). We concluded that the contextual cueing effect was robust in the context provided by 3D space with stereoscopic information, and more importantly, the visual system prioritized stereoscopic information in learning of spatial information when depth information was available. PMID:28912739

  3. Contextual Cueing Effect in Spatial Layout Defined by Binocular Disparity.

    PubMed

    Zhao, Guang; Zhuang, Qian; Ma, Jie; Tu, Shen; Liu, Qiang; Sun, Hong-Jin

    2017-01-01

    Repeated visual context induces higher search efficiency, revealing a contextual cueing effect, which depends on the association between the target and its visual context. In this study, participants performed a visual search task where search items were presented with depth information defined by binocular disparity. When the 3-dimensional (3D) configurations were repeated over blocks, the contextual cueing effect was obtained (Experiment 1). When depth information was in chaos over repeated configurations, visual search was not facilitated and the contextual cueing effect was largely crippled (Experiment 2). However, when we gave the search items a tiny random displacement in the 2-dimensional (2D) plane but maintained the depth information constant, the contextual cueing was preserved (Experiment 3). We concluded that the contextual cueing effect was robust in the context provided by 3D space with stereoscopic information, and more importantly, the visual system prioritized stereoscopic information in learning of spatial information when depth information was available.

  4. Contrasting vertical and horizontal representations of affect in emotional visual search.

    PubMed

    Damjanovic, Ljubica; Santiago, Julio

    2016-02-01

    Independent lines of evidence suggest that the representation of emotional evaluation recruits both vertical and horizontal spatial mappings. These two spatial mappings differ in their experiential origins and their productivity, and available data suggest that they differ in their saliency. Yet, no study has so far compared their relative strength in an attentional orienting reaction time task that affords the simultaneous manifestation of both types of mapping. Here, we investigated this question using a visual search task with emotional faces. We presented angry and happy face targets and neutral distracter faces in top, bottom, left, and right locations on the computer screen. Conceptual congruency effects were observed along the vertical dimension supporting the 'up = good' metaphor, but not along the horizontal dimension. This asymmetrical processing pattern was observed when faces were presented in a cropped (Experiment 1) and whole (Experiment 2) format. These findings suggest that the 'up = good' metaphor is more salient and readily activated than the 'right = good' metaphor, and that the former outcompetes the latter when the task context affords the simultaneous activation of both mappings.

  5. Cortical interactions in vision and awareness: hierarchies in reverse.

    PubMed

    Juan, Chi-Hung; Campana, Gianluca; Walsh, Vincent

    2004-01-01

    The anatomical connections between visual areas can be organized in 'feedforward', 'feedback' or 'horizontal' laminar patterns. We report here four experiments that test the function of some of the feedback projections in visual cortex. Projections from V5 to V1 have been suggested to be important in visual awareness, and in the first experiment we show this to be the case in the blindsight patient GY. This demonstration is replicated, in principle, in the second experiment and we also show the timing of the V5-V1 interaction to correspond to findings from single unit physiology. In the third experiment we show that V1 is important for stimulus detection in visual search arrays and that the timing of V1 interference with TMS is late (up to 240 ms after the onset of the visual array). Finally we report an experiment showing that the parietal cortex is not involved in visual motion priming, whereas V5 is, suggesting that the parietal cortex does not modulate V5 in this task. We interpret the data in terms of Bullier's recent physiological recordings and Ahissar and Hochstein's reverse hierarchy theory of vision.

  6. Neural Activity Associated with Visual Search for Line Drawings on AAC Displays: An Exploration of the Use of fMRI.

    PubMed

    Wilkinson, Krista M; Dennis, Nancy A; Webb, Christina E; Therrien, Mari; Stradtman, Megan; Farmer, Jacquelyn; Leach, Raevynn; Warrenfeltz, Megan; Zeuner, Courtney

    2015-01-01

    Visual aided augmentative and alternative communication (AAC) consists of books or technologies that contain visual symbols to supplement spoken language. A common observation concerning some forms of aided AAC is that message preparation can be frustratingly slow. We explored the uses of fMRI to examine the neural correlates of visual search for line drawings on AAC displays in 18 college students under two experimental conditions. Under one condition, the location of the icons remained stable and participants were able to learn the spatial layout of the display. Under the other condition, constant shuffling of the locations of the icons prevented participants from learning the layout, impeding rapid search. Brain activation was contrasted under these conditions. Rapid search in the stable display was associated with greater activation of cortical and subcortical regions associated with memory, motor learning, and dorsal visual pathways compared to the search in the unpredictable display. Rapid search for line drawings on stable AAC displays involves not just the conceptual knowledge of the symbol meaning but also the integration of motor, memory, and visual-spatial knowledge about the display layout. Further research must study individuals who use AAC, as well as the functional effect of interventions that promote knowledge about array layout.

  7. Predictive distractor context facilitates attentional selection of high, but not intermediate and low, salience targets.

    PubMed

    Töllner, Thomas; Conci, Markus; Müller, Hermann J

    2015-03-01

    It is well established that we can focally attend to a specific region in visual space without shifting our eyes, so as to extract action-relevant sensory information from covertly attended locations. The underlying mechanisms that determine how fast we engage our attentional spotlight in visual-search scenarios, however, remain controversial. One dominant view advocated by perceptual decision-making models holds that the times taken for focal-attentional selection are mediated by an internal template that biases perceptual coding and selection decisions exclusively through target-defining feature coding. This notion directly predicts that search times remain unaffected whether or not participants can anticipate the upcoming distractor context. Here we tested this hypothesis by employing an illusory-figure localization task that required participants to search for an invariant target amongst a variable distractor context, which gradually changed--either randomly or predictably--as a function of distractor-target similarity. We observed a graded decrease in internal focal-attentional selection times--correlated with external behavioral latencies--for distractor contexts of higher relative to lower similarity to the target. Critically, for low but not intermediate and high distractor-target similarity, these context-driven effects were cortically and behaviorally amplified when participants could reliably predict the type of distractors. This interactive pattern demonstrates that search guidance signals can integrate information about distractor, in addition to target, identities to optimize distractor-target competition for focal-attentional selection. © 2014 Wiley Periodicals, Inc.

  8. Does the thinking aloud condition affect the search for pulmonary nodules?

    NASA Astrophysics Data System (ADS)

    Littlefair, Stephen; Brennan, Patrick; Reed, Warren; Williams, Mark; Pietrzyk, Mariusz W.

    2012-02-01

    Aim: To measure the effect of thinking aloud on perceptual accuracy and visual search behavior during chest radiograph interpretation for pulmonary nodules. Background: Thinking Aloud (TA) is an empirical research method used by researchers in cognitive psychology and behavioural analysis. In this pilot study we wanted to examine whether TA had an effect on the perceptual accuracy and search patterns of subjects looking for pulmonary nodules on adult posteroanterior chest radiographs (PA CxR). Method: Seven academics within Medical Radiation Sciences at The University of Sydney participated in two reading sessions, with and without TA. Their task was to localize pulmonary nodules on 30 PA CxR using mouse clicks and to rank their confidence levels of nodule presence. Eye-tracking recordings were collected during both viewing sessions. Time to first fixation, duration of first fixation, number of fixations, cumulative time of fixation and total viewing time were analysed. In addition, ROC analysis was conducted on the collected outcomes using DBM methodology. Results: Time to first nodule fixation was significantly longer (p=0.001) and duration of first fixation was significantly shorter (p=0.043). No significant difference was observed in ROC AUC scores between the control and TA conditions. Conclusion: Our results confirm that TA has little effect on perceptual ability or performance, except for prolonging the task. However, there were significant differences in visual search behavior. Future researchers in radio-diagnosis could use the think-aloud condition rather than silence so as to more closely replicate the clinical scenario.
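
    The fixation measures listed above follow directly from a time-stamped fixation log. Below is a minimal sketch in Python, assuming a hypothetical list of fixation records (start time, duration, x, y) and a rectangular nodule region of interest; the data format and ROI coordinates are illustrative, not taken from the study.

    ```python
    # Minimal sketch (hypothetical data format): the eye-tracking measures named
    # above, computed for a rectangular region of interest (ROI) such as a nodule.
    from dataclasses import dataclass

    @dataclass
    class Fixation:
        start: float     # seconds from image onset
        duration: float  # seconds
        x: float         # gaze position, pixels
        y: float

    def roi_metrics(fixations, roi):
        """roi = (x_min, y_min, x_max, y_max) in pixels."""
        x0, y0, x1, y1 = roi
        hits = [f for f in fixations if x0 <= f.x <= x1 and y0 <= f.y <= y1]
        return {
            "time_to_first_fixation": hits[0].start if hits else None,
            "duration_of_first_fixation": hits[0].duration if hits else None,
            "number_of_fixations": len(hits),
            "cumulative_fixation_time": sum(f.duration for f in hits),
            "total_viewing_time": (fixations[-1].start + fixations[-1].duration
                                   if fixations else 0.0),
        }

    fx = [Fixation(0.20, 0.25, 512, 300), Fixation(0.50, 0.30, 640, 410)]
    print(roi_metrics(fx, roi=(600, 380, 700, 460)))
    ```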

  9. Mechanisms and neural basis of object and pattern recognition: a study with chess experts.

    PubMed

    Bilalić, Merim; Langner, Robert; Erb, Michael; Grodd, Wolfgang

    2010-11-01

    Comparing experts with novices offers unique insights into the functioning of cognition, based on the maximization of individual differences. Here we used this expertise approach to disentangle the mechanisms and neural basis behind two processes that contribute to everyday expertise: object and pattern recognition. We compared chess experts and novices performing chess-related and -unrelated (visual) search tasks. As expected, the superiority of experts was limited to the chess-specific task, as there were no differences in a control task that used the same chess stimuli but did not require chess-specific recognition. The analysis of eye movements showed that experts immediately and exclusively focused on the relevant aspects in the chess task, whereas novices also examined irrelevant aspects. With random chess positions, when pattern knowledge could not be used to guide perception, experts nevertheless maintained an advantage. Experts' superior domain-specific parafoveal vision, a consequence of their knowledge about individual domain-specific symbols, enabled improved object recognition. Functional magnetic resonance imaging corroborated this differentiation between object and pattern recognition and showed that chess-specific object recognition was accompanied by bilateral activation of the occipitotemporal junction, whereas chess-specific pattern recognition was related to bilateral activations in the middle part of the collateral sulci. Using the expertise approach together with carefully chosen controls and multiple dependent measures, we identified object and pattern recognition as two essential cognitive processes in expert visual cognition, which may also help to explain the mechanisms of everyday perception.

  10. Visual Working Memory Supports the Inhibition of Previously Processed Information: Evidence from Preview Search

    ERIC Educational Resources Information Center

    Al-Aidroos, Naseem; Emrich, Stephen M.; Ferber, Susanne; Pratt, Jay

    2012-01-01

    In four experiments we assessed whether visual working memory (VWM) maintains a record of previously processed visual information, allowing old information to be inhibited, and new information to be prioritized. Specifically, we evaluated whether VWM contributes to the inhibition (i.e., visual marking) of previewed distractors in a preview search.…

  11. Visual Search in Typically Developing Toddlers and Toddlers with Fragile X or Williams Syndrome

    ERIC Educational Resources Information Center

    Scerif, Gaia; Cornish, Kim; Wilding, John; Driver, Jon; Karmiloff-Smith, Annette

    2004-01-01

    Visual selective attention is the ability to attend to relevant visual information and ignore irrelevant stimuli. Little is known about its typical and atypical development in early childhood. Experiment 1 investigates typically developing toddlers' visual search for multiple targets on a touch-screen. Time to hit a target, distance between…

  12. Visual Search Deficits Are Independent of Magnocellular Deficits in Dyslexia

    ERIC Educational Resources Information Center

    Wright, Craig M.; Conlon, Elizabeth G.; Dyck, Murray

    2012-01-01

    The aim of this study was to investigate the theory that visual magnocellular deficits seen in groups with dyslexia are linked to reading via the mechanisms of visual attention. Visual attention was measured with a serial search task and magnocellular function with a coherent motion task. A large group of children with dyslexia (n = 70) had slower…

  13. Visual Search in ASD: Instructed Versus Spontaneous Local and Global Processing.

    PubMed

    Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2016-09-01

    Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual processing in ASD versus TD children. Experiment 1 revealed both groups to be equally sensitive to the absence or presence of a distractor, regardless of the type of target or type of distractor. Experiment 2 revealed a differential effect of task instruction for ASD compared to TD, regardless of the type of target. Taken together, these results stress the importance of task factors in the study of local-global visual processing in ASD.

  14. High or Low Target Prevalence Increases the Dual-Target Cost in Visual Search

    ERIC Educational Resources Information Center

    Menneer, Tamaryn; Donnelly, Nick; Godwin, Hayward J.; Cave, Kyle R.

    2010-01-01

    Previous studies have demonstrated a dual-target cost in visual search. In the current study, the relationship between search for one and search for two targets was investigated to examine the effects of target prevalence and practice. Color-shape conjunction stimuli were used with response time, accuracy and signal detection measures. Performance…

  15. Incidental Learning Speeds Visual Search by Lowering Response Thresholds, Not by Improving Efficiency: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Hout, Michael C.; Goldinger, Stephen D.

    2012-01-01

    When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no…

  16. Measuring Search Efficiency in Complex Visual Search Tasks: Global and Local Clutter

    ERIC Educational Resources Information Center

    Beck, Melissa R.; Lohrenz, Maura C.; Trafton, J. Gregory

    2010-01-01

    Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e.,…

  17. The effects of task difficulty on visual search strategy in virtual 3D displays.

    PubMed

    Pomplun, Marc; Garaas, Tyler W; Carrasco, Marisa

    2013-08-28

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios.
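
    Two of the measures proposed here, saccadic step size and x-y target distance, reduce to simple geometry on fixation coordinates. The sketch below shows one plausible operationalization (Euclidean distance between consecutive fixations, and distance from each fixation to the target in the horizontal-vertical plane); the gaze coordinates are made up and the study's exact definitions may differ.

    ```python
    # Illustrative sketch (assumed definitions): saccadic step size and x-y
    # target distance from fixation coordinates; gaze data are made up.
    import math

    def saccadic_step_sizes(fixations):
        """Euclidean distance between consecutive fixation positions (x, y)."""
        return [math.dist(a, b) for a, b in zip(fixations, fixations[1:])]

    def xy_target_distances(fixations, target):
        """Distance from each fixation to the target in the horizontal-vertical plane."""
        return [math.dist(f, target) for f in fixations]

    fixations = [(400, 300), (520, 310), (610, 260)]   # pixels
    target = (620, 250)
    print(saccadic_step_sizes(fixations))
    print(xy_target_distances(fixations, target))
    ```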

  18. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses

    PubMed Central

    Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli

    2015-01-01

    Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic “push–pull” pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing. PMID:26658858

  19. Parietal blood oxygenation level-dependent response evoked by covert visual search reflects set-size effect in monkeys.

    PubMed

    Atabaki, A; Marciniak, K; Dicke, P W; Karnath, H-O; Thier, P

    2014-03-01

    Distinguishing a target from distractors during visual search is crucial for goal-directed behaviour. The more distractors that are presented with the target, the larger is the subject's error rate. This observation defines the set-size effect in visual search. Neurons in areas related to attention and eye movements, like the lateral intraparietal area (LIP) and frontal eye field (FEF), diminish their firing rates when the number of distractors increases, in line with the behavioural set-size effect. Furthermore, human imaging studies that have tried to delineate cortical areas modulating their blood oxygenation level-dependent (BOLD) response with set size have yielded contradictory results. In order to test whether BOLD imaging of the rhesus monkey cortex yields results consistent with the electrophysiological findings and, moreover, to clarify whether cortical regions beyond the two hitherto implicated are involved in this process, we studied monkeys while they performed a covert visual search task. When varying the number of distractors in the search task, we observed a monotonic increase in error rates when search time was kept constant, as expected if the monkeys resorted to a serial search strategy. Visual search consistently evoked robust BOLD activity in the monkey FEF and a region in the intraparietal sulcus in its lateral and middle part, probably involving area LIP. Whereas the BOLD response in the FEF did not depend on set size, the LIP signal increased in parallel with set size. These results demonstrate the virtue of BOLD imaging in monkeys when trying to delineate cortical areas underlying a cognitive process like visual search. However, they also demonstrate the caution needed when inferring neural activity from BOLD activity. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. Does Central Vision Loss Impair Visual Search Performance of Adults More than Children?

    PubMed

    Satgunam, PremNandhini; Luo, Gang

    2018-05-01

    In general, young adults with normal vision show the best visual search performance when compared with children and older adults. Through our study, we show that this trend is not observed in individuals with vision impairment. An interaction effect of vision impairment with visual development and aging is observed. Performance in many visual tasks typically shows improvement with age until young adulthood and then declines with aging. Using a visual search task, this study investigated whether a similar age effect on performance is present in people with central vision loss. A total of 98 participants, 37 with normal sight (NS) and 61 with visual impairment (VI), searched for targets in 150 real-world digital images. Search performance was quantified by an integrated measure combining speed and accuracy. Participant ages ranged from 5 to 74 years, visual acuity from -0.14 (20/14.5) to 1.16 logMAR (20/290), and log contrast sensitivity (CS) from 0.48 to 2.0. Data analysis was performed with participants divided into three age groups: children (aged <14 years, n = 25), young adults (aged 14 to 45 years, n = 47), and older adults (aged >45 years, n = 26). Regression (r = 0.7) revealed that CS (P < .001) and age (P = .003) were significant predictors of search performance. Performance of VI participants was normalized to the age-matched average performance of the NS group. In the VI group, children's normalized performance (52%) was better than that of both young (39%, P = .05) and older (40%, P = .048) adults. Unlike NS participants, young adults in the VI group may not have search ability superior to children with VI, despite having the same level of visual function (quantified by visual acuity and CS). This could be because vision impairment limits the developmental acquisition of the age dividend for peak performance. Older adults in the VI group had the worst performance, indicating an interaction of vision impairment with aging.
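
    The age-matched normalization described above (each visually impaired participant's score expressed as a percentage of the mean score of normally sighted participants in the same age group) can be written in a few lines; the age boundaries follow the abstract, while the data structure and scores below are hypothetical.

    ```python
    # Sketch of the age-matched normalization described above; the age boundaries
    # follow the abstract, while the data and scores are hypothetical.
    from statistics import mean

    def age_group(age):
        if age < 14:
            return "children"
        return "young_adults" if age <= 45 else "older_adults"

    def normalize_to_ns(vi_scores, ns_scores):
        """vi_scores / ns_scores: lists of (age, integrated_performance) tuples."""
        by_group = {}
        for age, score in ns_scores:
            by_group.setdefault(age_group(age), []).append(score)
        ns_means = {group: mean(scores) for group, scores in by_group.items()}
        # Each VI score expressed as a percentage of the age-matched NS mean.
        return [(age, 100.0 * score / ns_means[age_group(age)]) for age, score in vi_scores]

    print(normalize_to_ns(vi_scores=[(9, 0.42), (30, 0.35)],
                          ns_scores=[(8, 0.80), (10, 0.82), (28, 0.90), (35, 0.88)]))
    ```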

  1. Visual selective attention in amnestic mild cognitive impairment.

    PubMed

    McLaughlin, Paula M; Anderson, Nicole D; Rich, Jill B; Chertkow, Howard; Murtha, Susan J E

    2014-11-01

    Subtle deficits in visual selective attention have been found in amnestic mild cognitive impairment (aMCI). However, few studies have explored performance on visual search paradigms or the Simon task, which are known to be sensitive to disease severity in Alzheimer's patients. Furthermore, there is limited research investigating how deficiencies can be ameliorated with exogenous support (auditory cues). Sixteen individuals with aMCI and 14 control participants completed 3 experimental tasks that varied in demand and cue availability: visual search-alerting, visual search-orienting, and Simon task. Visual selective attention was influenced by aMCI, auditory cues, and task characteristics. Visual search abilities were relatively consistent across groups. The aMCI participants were impaired on the Simon task when working memory was required, but conflict resolution was similar to controls. Spatially informative orienting cues improved response times, whereas spatially neutral alerting cues did not influence performance. Finally, spatially informative auditory cues benefited the aMCI group more than controls in the visual search task, specifically at the largest array size where orienting demands were greatest. These findings suggest that individuals with aMCI have working memory deficits and subtle deficiencies in orienting attention and rely on exogenous information to guide attention. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Abnormal early brain responses during visual search are evident in schizophrenia but not bipolar affective disorder.

    PubMed

    VanMeerten, Nicolaas J; Dubke, Rachel E; Stanwyck, John J; Kang, Seung Suk; Sponheim, Scott R

    2016-01-01

    People with schizophrenia show deficits in processing visual stimuli but neural abnormalities underlying the deficits are unclear and it is unknown whether such functional brain abnormalities are present in other severe mental disorders or in individuals who carry genetic liability for schizophrenia. To better characterize brain responses underlying visual search deficits and test their specificity to schizophrenia we gathered behavioral and electrophysiological responses during visual search (i.e., Span of Apprehension [SOA] task) from 38 people with schizophrenia, 31 people with bipolar disorder, 58 biological relatives of people with schizophrenia, 37 biological relatives of people with bipolar disorder, and 65 non-psychiatric control participants. Through subtracting neural responses associated with purely sensory aspects of the stimuli we found that people with schizophrenia exhibited reduced early posterior task-related neural responses (i.e., Span Endogenous Negativity [SEN]) while other groups showed normative responses. People with schizophrenia exhibited longer reaction times than controls during visual search but nearly identical accuracy. Those individuals with schizophrenia who had larger SENs performed more efficiently (i.e., shorter reaction times) on the SOA task suggesting that modulation of early visual cortical responses facilitated their visual search. People with schizophrenia also exhibited a diminished P300 response compared to other groups. Unaffected first-degree relatives of people with bipolar disorder and schizophrenia showed an amplified N1 response over posterior brain regions in comparison to other groups. Diminished early posterior brain responses are associated with impaired visual search in schizophrenia and appear to be specifically associated with the neuropathology of schizophrenia. Published by Elsevier B.V.

  3. Effects of contour enhancement on low-vision preference and visual search.

    PubMed

    Satgunam, Premnandhini; Woods, Russell L; Luo, Gang; Bronstad, P Matthew; Reynolds, Zachary; Ramachandra, Chaithanya; Mel, Bartlett W; Peli, Eli

    2012-09-01

    To determine whether image enhancement improves visual search performance and whether enhanced images were also preferred by subjects with vision impairment. Subjects (n = 24) with vision impairment (vision: 20/52 to 20/240) completed visual search and preference tasks for 150 static images that were enhanced to increase object contours' visual saliency. Subjects were divided into two groups and were shown three enhancement levels. Original and medium enhancements were shown to both groups. High enhancement was shown to group 1, and low enhancement was shown to group 2. For search, subjects pointed to an object that matched a search target displayed at the top left of the screen. An "integrated search performance" measure (area under the curve of cumulative correct response rate over search time) quantified performance. For preference, subjects indicated the preferred side when viewing the same image with different enhancement levels on side-by-side high-definition televisions. Contour enhancement did not improve performance in the visual search task. Group 1 subjects significantly (p < 0.001) rejected the High enhancement, and showed no preference for medium enhancement over the original images. Group 2 subjects significantly preferred (p < 0.001) both the medium and the low enhancement levels over original. Contrast sensitivity was correlated with both preference and performance; subjects with worse contrast sensitivity performed worse in the search task (ρ = 0.77, p < 0.001) and preferred more enhancement (ρ = -0.47, p = 0.02). No correlation between visual search performance and enhancement preference was found. However, a small group of subjects (n = 6) in a narrow range of mid-contrast sensitivity performed better with the enhancement, and most (n = 5) also preferred the enhancement. Preferences for image enhancement can be dissociated from search performance in people with vision impairment. Further investigations are needed to study the relationships between preference and performance for a narrow range of mid-contrast sensitivity where a beneficial effect of enhancement may exist.
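
    The "integrated search performance" measure is described here as the area under the curve of the cumulative correct-response rate over search time. A minimal sketch of that computation is shown below, assuming per-trial records of correctness and response time; the 20-second limit, grid resolution and example data are placeholders.

    ```python
    # Sketch: "integrated search performance" as the area under the cumulative
    # correct-response-rate curve over search time. Trials, the 20 s limit and
    # the grid resolution are placeholders, not values from the study.
    import numpy as np

    def integrated_search_performance(trials, t_max=20.0, n_steps=2001):
        """trials: list of (correct: bool, response_time_s: float) per image."""
        times = np.linspace(0.0, t_max, n_steps)
        correct_rts = np.array([rt for ok, rt in trials if ok])
        # Cumulative correct-response rate: fraction of all trials answered
        # correctly by each time point t.
        rate = np.array([(correct_rts <= t).sum() / len(trials) for t in times])
        # Trapezoidal area under the curve, normalized by t_max so that an
        # observer who is always correct and instant scores 1.0.
        area = np.sum((rate[1:] + rate[:-1]) / 2.0 * np.diff(times))
        return area / t_max

    trials = [(True, 2.1), (True, 5.4), (False, 8.0), (True, 12.3)]
    print(round(integrated_search_performance(trials), 3))  # about 0.5 for this toy data
    ```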

  4. Use of an augmented-vision device for visual search by patients with tunnel vision.

    PubMed

    Luo, Gang; Peli, Eli

    2006-09-01

    To study the effect of an augmented-vision device that superimposes minified contour images over natural vision on visual search performance of patients with tunnel vision. Twelve subjects with tunnel vision searched for targets presented outside their visual fields (VFs) on a blank background under three cue conditions (with contour cues provided by the device, with auditory cues, and without cues). Three subjects (VF 8 to 11 degrees wide) carried out the search over a 90 x 74 degree area, and nine subjects (VF 7 to 16 degrees wide) carried out the search over a 66 x 52 degree area. Eye and head movements were recorded for performance analyses that included directness of search path, search time, and gaze speed. Directness of the search path was greatly and significantly improved when the contour or auditory cues were provided, in both the larger and the smaller area searches. When using the device, a significant reduction in search time (28% to 74%) was demonstrated by all three subjects in the larger area search and by subjects with VFs wider than 10 degrees in the smaller area search (average, 22%). Directness and gaze speed accounted for 90% of the variability of search time. Although performance improvement with the device for the larger search area was obvious, whether it was helpful for the smaller search area depended on VF and gaze speed. Because improvement in directness was demonstrated, increased gaze speed, which could result from further training and adaptation to the device, might enable patients with small VFs to benefit from the device for visual search tasks.
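
    The abstract does not spell out how "directness of search path" was computed; one common operationalization is the ratio of the straight-line distance to the target to the length of the gaze path actually traveled, with gaze speed as path length over search time. The sketch below uses those assumed definitions on made-up gaze coordinates.

    ```python
    # Hedged sketch: one plausible way to compute directness of the search path
    # and gaze speed from a gaze-position trace (made-up coordinates in degrees);
    # the study's exact definitions are not given in the abstract.
    import math

    def path_length(points):
        return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

    def directness(gaze_points, target):
        """Straight-line distance from the starting gaze position to the target,
        divided by the length of the path actually traveled (1.0 = perfectly direct)."""
        ideal = math.dist(gaze_points[0], target)
        actual = path_length(gaze_points) or 1e-9   # avoid division by zero
        return ideal / actual

    def gaze_speed(gaze_points, search_time_s):
        return path_length(gaze_points) / search_time_s  # degrees per second

    gaze = [(0, 0), (15, 4), (30, -2), (42, 10)]
    print(directness(gaze, target=(45, 12)), gaze_speed(gaze, search_time_s=3.2))
    ```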

  5. Advanced Video Activity Analytics (AVAA): Human Factors Evaluation

    DTIC Science & Technology

    2015-05-01

    video, and 3) creating and saving annotations (Fig. 11). (The logging program was updated after the pilot to also capture search clicks.) Playing and... visual search task and the auditory task together and thus automatically focused on the visual task. Alternatively, the operator may have intentionally...affect performance on the primary task; however, in the current test there was no apparent effect on the operator’s performance in the visual search task

  6. A signal detection model predicts the effects of set size on visual search accuracy for feature, conjunction, triple conjunction, and disjunction displays

    NASA Technical Reports Server (NTRS)

    Eckstein, M. P.; Thomas, J. P.; Palmer, J.; Shimozaki, S. S.

    2000-01-01

    Recently, quantitative models based on signal detection theory have been successfully applied to the prediction of human accuracy in visual search for a target that differs from distractors along a single attribute (feature search). The present paper extends these models for visual search accuracy to multidimensional search displays in which the target differs from the distractors along more than one feature dimension (conjunction, disjunction, and triple conjunction displays). The model assumes that each element in the display elicits a noisy representation for each of the relevant feature dimensions. The observer combines the representations across feature dimensions to obtain a single decision variable, and the stimulus with the maximum value determines the response. The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation. The model accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
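
    The decision rule described above (noisy feature representations combined across the relevant dimensions, with the maximum value determining the response) can be illustrated with a short Monte Carlo sketch. The d' value, the equal-weight sum used as the combination rule, and the display compositions below are illustrative assumptions, not the paper's fitted parameters.

    ```python
    # Monte Carlo sketch of a maximum-rule signal detection model of search
    # accuracy. Assumptions (illustrative, not fitted): unit-variance Gaussian
    # noise per feature dimension and an equal-weight sum across dimensions.
    import numpy as np

    rng = np.random.default_rng(0)

    def accuracy(target_means, distractor_means, n_trials=20_000):
        """Proportion of trials on which the target (item 0) yields the largest
        combined decision variable. target_means: the target's mean strength on
        each relevant dimension; distractor_means: one row per distractor."""
        means = np.vstack([target_means, distractor_means])          # items x dims
        samples = means + rng.standard_normal((n_trials,) + means.shape)
        decision = samples.sum(axis=2)                                # combine dimensions
        return float(np.mean(decision.argmax(axis=1) == 0))           # max rule

    d = 2.0
    for n_distractors in (1, 3, 7, 15):
        # Feature display: distractors differ from the target on the one relevant dimension.
        feat = accuracy([d], np.zeros((n_distractors, 1)))
        # Conjunction display: each distractor shares one of the target's two features.
        half = n_distractors // 2
        conj = accuracy([d, d], np.array([[d, 0.0]] * half + [[0.0, d]] * (n_distractors - half)))
        print(f"set size {n_distractors + 1}: feature {feat:.3f}, conjunction {conj:.3f}")
    ```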

  7. Conjunctive visual search in individuals with and without mental retardation.

    PubMed

    Carlin, Michael; Chrysler, Christina; Sullivan, Kate

    2007-01-01

    A comprehensive understanding of the basic visual and cognitive abilities of individuals with mental retardation is critical for understanding the basis of mental retardation and for the design of remediation programs. We assessed visual search abilities in individuals with mild mental retardation and in MA- and CA-matched comparison groups. Our goal was to determine the effect of decreasing target-distracter disparities on visual search efficiency. Results showed that search rates for the group with mental retardation and the MA-matched comparisons were more negatively affected by decreasing disparities than were those of the CA-matched group. The group with mental retardation and the MA-matched group performed similarly on all tasks. Implications for theory and application are discussed.

  8. A randomized controlled trial comparing 2 interventions for visual field loss with standard occupational therapy during inpatient stroke rehabilitation.

    PubMed

    Mödden, Claudia; Behrens, Marion; Damke, Iris; Eilers, Norbert; Kastrup, Andreas; Hildebrandt, Helmut

    2012-06-01

    Compensatory and restorative treatments have been developed to improve visual field defects after stroke. However, no controlled trials have compared these interventions with standard occupational therapy (OT). A total of 45 stroke participants with visual field defect admitted for inpatient rehabilitation were randomized to restorative computerized training (RT) using computer-based stimulation of border areas of their visual field defects or to a computer-based compensatory therapy (CT) teaching a visual search strategy. OT, in which different compensation strategies were used to train for activities of daily living, served as standard treatment for the active control group. Each treatment group received 15 single sessions of 30 minutes distributed over 3 weeks. The primary outcome measures were visual field expansion for RT, visual search performance for CT, and reading performance for both treatments. Visual conjunction search, alertness, and the Barthel Index were secondary outcomes. Compared with OT, CT resulted in a better visual search performance, and RT did not result in a larger expansion of the visual field. Intragroup pre-post comparisons demonstrated that CT improved all defined outcome parameters and RT several, whereas OT only improved one. CT improved functional deficits after visual field loss compared with standard OT and may be the intervention of choice during inpatient rehabilitation. A larger trial that includes lesion location in the analysis is recommended.

  9. Experimental system for measurement of radiologists' performance by visual search task.

    PubMed

    Maeda, Eriko; Yoshikawa, Takeharu; Nakashima, Ryoichi; Kobayashi, Kazufumi; Yokosawa, Kazuhiko; Hayashi, Naoto; Masutani, Yoshitaka; Yoshioka, Naoki; Akahane, Masaaki; Ohtomo, Kuni

    2013-01-01

    The detection performance of radiologists for "obvious" targets should be evaluated with visual search tasks rather than ROC analysis, but visual search tasks have not been applied in radiology studies. The aim of this study was to set up an environment that allows visual search tasks in radiology, to evaluate its feasibility, and to preliminarily investigate the effect of career stage on performance. In a darkroom, ten radiologists were asked to indicate the type of lesion by pressing buttons while images containing no lesion, a bulla, a ground-glass nodule, or a solid nodule were randomly presented on a display. Differences in accuracy and reaction times depending on board certification were investigated. The visual search task was performed successfully and feasibly. Radiologists in both the non-board and board groups showed high sensitivity, specificity, positive predictive values and negative predictive values. Reaction time was under 1 second for all target types in both groups. Board-certified radiologists were significantly faster in answering for bullae, but there were no significant differences for the other targets and measures. We developed an experimental system that allows visual search experiments in radiology. Reaction time for the detection of bullae was shortened with experience.
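
    The sensitivity, specificity and predictive values reported above follow directly from the counts of true and false positives and negatives; a minimal sketch with made-up counts is given below.

    ```python
    # Sketch: diagnostic performance measures from a 2x2 table of decisions;
    # the counts below are made up, not the study's data.
    def diagnostic_measures(tp, fp, fn, tn):
        return {
            "sensitivity": tp / (tp + fn),   # true-positive rate
            "specificity": tn / (tn + fp),   # true-negative rate
            "ppv": tp / (tp + fp),           # positive predictive value
            "npv": tn / (tn + fn),           # negative predictive value
        }

    print(diagnostic_measures(tp=46, fp=3, fn=4, tn=47))
    ```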

  10. Learned face-voice pairings facilitate visual search.

    PubMed

    Zweig, L Jacob; Suzuki, Satoru; Grabowecky, Marcia

    2015-04-01

    Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information, and they facilitate localization of the sought-after individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face-voice associations and verified learning using a two-alternative forced choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent learned voice speeded visual search for the associated face. This result suggests that voices facilitate the visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training.

  11. Changing viewer perspectives reveals constraints to implicit visual statistical learning.

    PubMed

    Jiang, Yuhong V; Swallow, Khena M

    2014-10-07

    Statistical learning (learning environmental regularities to guide behavior) likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations. © 2014 ARVO.

  12. Visual selective attention and reading efficiency are related in children.

    PubMed

    Casco, C; Tressoldi, P E; Dellantonio, A

    1998-09-01

    We investigated the relationship between visual selective attention and linguistic performance. Subjects were classified into four categories according to their accuracy in a letter cancellation task involving selective attention. The task consisted of searching for a target letter among a set of background letters, and accuracy was measured as a function of set size. We found that children with the lowest performance in the cancellation task showed a significantly slower reading rate and a higher number of visual reading errors than children with the highest performance. Results also show that these groups of searchers differed significantly in a lexical search task, whereas their performance did not differ in the lexical decision and syllable control tasks. The relationship between letter search and reading, as well as the finding that poor readers-searchers also perform poorly on lexical search tasks involving selective attention, suggests that the relationship between letter search and reading difficulty may reflect a deficit in a visual selective attention mechanism that is involved in all these tasks. A deficit in visual attention can be linked to the problems that disabled readers show in the function of the magnocellular stream, which culminates in posterior parietal cortex, an area that plays an important role in guiding visual attention.

  13. Central and peripheral vision loss differentially affects contextual cueing in visual search.

    PubMed

    Geringswald, Franziska; Pollmann, Stefan

    2015-09-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental learning of contextual cues or the expression of learning, that is, the guidance of search by learned target-distractor configurations. Visual search with a central scotoma reduced contextual cueing both with respect to search times and gaze parameters. However, when the scotoma was subsequently removed, contextual cueing was observed in a comparable magnitude as for controls who had searched without scotoma simulation throughout the experiment. This indicated that search with a central scotoma did not prevent incidental context learning, but interfered with search guidance by learned contexts. We discuss the role of visuospatial working memory load as source of this interference. In contrast to central vision loss, peripheral vision loss was expected to prevent spatial configuration learning itself, because the restricted search window did not allow the integration of invariant local configurations with the global display layout. This expectation was confirmed in that visual search with a simulated peripheral scotoma eliminated contextual cueing not only in the initial learning phase with scotoma, but also in the subsequent test phase without scotoma. (c) 2015 APA, all rights reserved).

  14. Design and Implementation of Cancellation Tasks for Visual Search Strategies and Visual Attention in School Children

    ERIC Educational Resources Information Center

    Wang, Tsui-Ying; Huang, Ho-Chuan; Huang, Hsiu-Shuang

    2006-01-01

    We propose a computer-assisted cancellation test system (CACTS) to understand the visual attention performance and visual search strategies in school children. The main aim of this paper is to present our design and development of the CACTS and demonstrate some ways in which computer techniques can allow the educator not only to obtain more…

  15. Searching for Signs, Symbols, and Icons: Effects of Time of Day, Visual Complexity, and Grouping

    ERIC Educational Resources Information Center

    McDougall, Sine; Tyrer, Victoria; Folkard, Simon

    2006-01-01

    Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual…

  16. Overt Attention in Contextual Cuing of Visual Search Is Driven by the Attentional Set, but Not by the Predictiveness of Distractors

    ERIC Educational Resources Information Center

    Beesley, Tom; Hanafi, Gunadi; Vadillo, Miguel A.; Shanks, David R.; Livesey, Evan J.

    2018-01-01

    Two experiments examined biases in selective attention during contextual cuing of visual search. When participants were instructed to search for a target of a particular color, overt attention (as measured by the location of fixations) was biased strongly toward distractors presented in that same color. However, when participants searched for…

  17. Rapid Resumption of Interrupted Search Is Independent of Age-Related Improvements in Visual Search

    ERIC Educational Resources Information Center

    Lleras, Alejandro; Porporino, Mafalda; Burack, Jacob A.; Enns, James T.

    2011-01-01

    In this study, 7-19-year-olds performed an interrupted visual search task in two experiments. Our question was whether the tendency to respond within 500 ms after a second glimpse of a display (the "rapid resumption" effect ["Psychological Science", 16 (2005) 684-688]) would increase with age in the same way as overall search efficiency. The…

  18. Pop-out in visual search of moving targets in the archer fish.

    PubMed

    Ben-Tov, Mor; Donchin, Opher; Ben-Shahar, Ohad; Segev, Ronen

    2015-03-10

    Pop-out in visual search reflects the capacity of observers to rapidly detect visual targets independent of the number of distracting objects in the background. Although it may be beneficial to most animals, pop-out behaviour has been observed only in mammals, where neural correlates are found in primary visual cortex as contextually modulated neurons that encode aspects of saliency. Here we show that archer fish can also utilize this important search mechanism by exhibiting pop-out of moving targets. We explore neural correlates of this behaviour and report the presence of contextually modulated neurons in the optic tectum that may constitute the neural substrate for a saliency map. Furthermore, we find that both behaving fish and neural responses exhibit additive responses to multiple visual features. These findings suggest that similar neural computations underlie pop-out behaviour in mammals and fish, and that pop-out may be a universal search mechanism across all vertebrates.

  19. Distractor devaluation requires visual working memory.

    PubMed

    Goolsby, Brian A; Shapiro, Kimron L; Raymond, Jane E

    2009-02-01

    Visual stimuli seen previously as distractors in a visual search task are subsequently evaluated more negatively than those seen as targets. An attentional inhibition account for this distractor-devaluation effect posits that associative links between attentional inhibition and to-be-ignored stimuli are established during search, stored, and then later reinstantiated, implying that distractor devaluation may require visual working memory (WM) resources. To assess this, we measured distractor devaluation with and without a concurrent visual WM load. Participants viewed a memory array, performed a simple search task, evaluated one of the search items (or a novel item), and then viewed a memory test array. Although distractor devaluation was observed with low (and no) WM load, it was absent when WM load was increased. This result supports the notions that active association of current attentional states with stimuli requires WM and that memory for these associations plays a role in affective response.

  20. Faceted Visualization of Three Dimensional Neuroanatomy By Combining Ontology with Faceted Search

    PubMed Central

    Veeraraghavan, Harini; Miller, James V.

    2013-01-01

    In this work, we present a faceted-search based approach for visualization of anatomy by combining a three dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the relevant pieces of information (obtained by searching the ontology) for a user query. Hence, the user can produce visualizations starting with minimally specified queries. Furthermore, by automatically translating the user queries into the controlled terminology our approach eliminates the need for the user to use controlled terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the opensource 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, and geometric models specific to patients computed using the SPL brain tumor dataset. PMID:24006207

  1. Faceted visualization of three dimensional neuroanatomy by combining ontology with faceted search.

    PubMed

    Veeraraghavan, Harini; Miller, James V

    2014-04-01

    In this work, we present a faceted-search based approach for visualization of anatomy by combining a three dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the relevant pieces of information (obtained by searching the ontology) for a user query. Hence, the user can produce visualizations starting with minimally specified queries. Furthermore, by automatically translating the user queries into the controlled terminology our approach eliminates the need for the user to use controlled terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the opensource 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, and geometric models specific to patients computed using the SPL brain tumor dataset.

  2. Asymmetries in visual search for conjunctive targets.

    PubMed

    Cohen, A

    1993-08-01

    Asymmetry is demonstrated between conjunctive targets in visual search, with no detectable asymmetries between the individual features that compose these targets. Experiment 1 demonstrated this phenomenon for targets composed of color and shape. Experiments 2 and 4 demonstrate this asymmetry for targets composed of size and orientation and for targets composed of contrast level and orientation, respectively. Experiment 3 demonstrates that the search rate for individual features cannot predict the search rate for conjunctive targets. These results demonstrate the need for two levels of representation: one of features and one of conjunctions of features. A model related to the modified feature integration theory is proposed to account for these results. The proposed model and other models of visual search are discussed.

  3. FISH Oracle 2: a web server for integrative visualization of genomic data in cancer research.

    PubMed

    Mader, Malte; Simon, Ronald; Kurtz, Stefan

    2014-03-31

    A comprehensive view of all relevant genomic data is instrumental for understanding the complex patterns of molecular alterations typically found in cancer cells. One of the most effective ways to rapidly obtain an overview of genomic alterations in large amounts of genomic data is the integrative visualization of genomic events. We developed FISH Oracle 2, a web server for the interactive visualization of different kinds of downstream-processed genomics data typically available in cancer research. A powerful search interface and a fast visualization engine provide a highly interactive visualization of such data. High-quality image export enables life scientists to easily communicate their results. A comprehensive data administration component keeps track of the available data sets. We applied FISH Oracle 2 to published data and found evidence that, in colorectal cancer cells, the gene TTC28 may be inactivated in two different ways, a fact that has not been published before. The interactive nature of FISH Oracle 2 and the ability to store, select and visualize large amounts of downstream-processed data support life scientists in generating hypotheses. The export of high-quality images supports explanatory data visualization, simplifying the communication of new biological findings. A FISH Oracle 2 demo server and the software are available at http://www.zbh.uni-hamburg.de/fishoracle.

  4. Visually Exploring Transportation Schedules.

    PubMed

    Palomo, Cesar; Guo, Zhan; Silva, Cláudio T; Freire, Juliana

    2016-01-01

    Public transportation schedules are designed by agencies to optimize service quality under multiple constraints. However, real service usually deviates from the plan. Therefore, transportation analysts need to identify, compare and explain both eventual and systemic performance issues that must be addressed so that better timetables can be created. The purely statistical tools commonly used by analysts pose many difficulties due to the large number of attributes at trip- and station-level for planned and real service. Also challenging is the need for models at multiple scales to search for patterns at different times and stations, since analysts do not know exactly where or when relevant patterns might emerge and need to compute statistical summaries for multiple attributes at different granularities. To aid in this analysis, we worked in close collaboration with a transportation expert to design TR-EX, a visual exploration tool developed to identify, inspect and compare spatio-temporal patterns for planned and real transportation service. TR-EX combines two new visual encodings inspired by Marey's Train Schedule: Trips Explorer for trip-level analysis of frequency, deviation and speed; and Stops Explorer for station-level study of delay, wait time, reliability and performance deficiencies such as bunching. To tackle overplotting and to provide a robust representation for a large number of trips and stops at multiple scales, the system supports variable kernel bandwidths to achieve the level of detail required by users for different tasks. We justify our design decisions based on specific analysis needs of transportation analysts. We provide anecdotal evidence of the efficacy of TR-EX through a series of case studies that explore NYC subway service, which illustrate how TR-EX can be used to confirm hypotheses and derive new insights through visual exploration.

  5. Evolutionary relevance facilitates visual information processing.

    PubMed

    Jackson, Russell E; Calvillo, Dusti P

    2013-11-03

    Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and that evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component of the evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  6. When do I quit? The search termination problem in visual search.

    PubMed

    Wolfe, Jeremy M

    2012-01-01

    In visual search tasks, observers look for targets in displays or scenes containing distracting, non-target items. Most of the research on this topic has concerned the finding of those targets. Search termination is a less thoroughly studied topic. When is it time to abandon the current search? The answer is fairly straightforward when the one and only target has been found (There are my keys.). The problem is more vexed if nothing has been found (When is it time to stop looking for a weapon at the airport checkpoint?) or when the number of targets is unknown (Have we found all the tumors?). This chapter reviews the development of ideas about quitting time in visual search and offers an outline of our current theory.

  7. Strategic search from long-term memory: an examination of semantic and autobiographical recall.

    PubMed

    Unsworth, Nash; Brewer, Gene A; Spillers, Gregory J

    2014-01-01

    Searching long-term memory is theoretically driven by both directed (search strategies) and random components. In the current study we conducted four experiments evaluating strategic search in semantic and autobiographical memory. Participants were required to generate either exemplars from the category of animals or the names of their friends for several minutes. Self-reported strategies suggested that participants typically relied on visualization strategies for both tasks and were less likely to rely on ordered strategies (e.g., alphabetic search). When participants were instructed to use particular strategies, the visualization strategy resulted in the highest levels of performance and the most efficient search, whereas ordered strategies resulted in the lowest levels of performance and fairly inefficient search. These results are consistent with the notion that retrieval from long-term memory is driven, in part, by search strategies employed by the individual, and that one particularly efficient strategy is to visualize various situational contexts that one has experienced in the past in order to constrain the search and generate the desired information.

  8. "Multisensory brand search: How the meaning of sounds guides consumers' visual attention": Correction to Knoeferle et al. (2016).

    PubMed

    2017-03-01

    Reports an error in "Multisensory brand search: How the meaning of sounds guides consumers' visual attention" by Klemens M. Knoeferle, Pia Knoeferle, Carlos Velasco and Charles Spence ( Journal of Experimental Psychology: Applied , 2016[Jun], Vol 22[2], 196-210). In the article, under Experiment 2, Design and Stimuli, the set number of target products and visual distractors reported in the second paragraph should be 20 and 13, respectively: "On each trial, the 16 products shown in the display were randomly selected from a set of 20 products belonging to different categories. Out of the set of 20 products, seven were potential targets, whereas the other 13 were used as visual distractors only throughout the experiment (since they were not linked to specific usage or consumption sounds)." Consequently, Appendix A in the supplemental materials has been updated. (The following abstract of the original article appeared in record 2016-28876-002.) Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Incidental learning speeds visual search by lowering response thresholds, not by improving efficiency: evidence from eye movements.

    PubMed

    Hout, Michael C; Goldinger, Stephen D

    2012-02-01

    When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory). In the current study, we asked two questions: 1) under what conditions does such incidental learning occur? And 2) what does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated nontarget objects. Across conditions, the consistency of search sets and spatial layouts were manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent.

  10. A pilot randomized controlled trial comparing effectiveness of prism glasses, visual search training and standard care in hemianopia.

    PubMed

    Rowe, F J; Conroy, E J; Bedson, E; Cwiklinski, E; Drummond, A; García-Fiñana, M; Howard, C; Pollock, A; Shipman, T; Dodridge, C; MacIntosh, C; Johnson, S; Noonan, C; Barton, G; Sackley, C

    2017-10-01

    Pilot trial to compare prism therapy and visual search training, for homonymous hemianopia, to standard care (information only). Prospective, multicentre, parallel, single-blind, three-arm RCT across fifteen UK acute stroke units. Stroke survivors with homonymous hemianopia. Arm a (Fresnel prisms) for minimum 2 hours, 5 days per week over 6 weeks. Arm b (visual search training) for minimum 30 minutes, 5 days per week over 6 weeks. Arm c (standard care: information only). Adult stroke survivors (>18 years), stable hemianopia, visual acuity better than 0.5 logMAR, refractive error within ±5 dioptres, ability to read/understand English and provide consent. Primary outcomes were change in visual field area from baseline to 26 weeks and calculation of sample size for a definitive trial. Secondary measures included the Rivermead Mobility Index, Visual Function Questionnaire 25/10, Nottingham Extended Activities of Daily Living, EuroQol, Short Form-12 questionnaires and Radner reading ability. Measures were post-randomization at baseline and 6, 12 and 26 weeks. Randomization block lists stratified by site and partial/complete hemianopia. Allocations disclosed to patients. Primary outcome assessor blind to treatment allocation. Eighty-seven patients were recruited: 27 to Fresnel prisms, 30 to visual search training and 30 to standard care; 69% male; mean age 69 years (SD 12). At 26 weeks, full results for 24, 24 and 22 patients, respectively, were compared to baseline. The sample size for a definitive trial was determined as 269 participants per arm to detect a 200 degree² change in visual field area at 90% power. Non-significant relative change in visual field area was 5%, 8% and 3.5%, respectively, for the three groups. Visual Function Questionnaire responses improved significantly from baseline to 26 weeks with visual search training (60 [SD 19] to 68.4 [SD 20]) compared to Fresnel prisms (68.5 [SD 16.4] to 68.2 [SD 18.4]: 7% difference) and standard care (63.7 [SD 19.4] to 59.8 [SD 22.7]: 10% difference), P=.05. Related adverse events were common with Fresnel prisms (69.2%; typically headaches). No significant change in visual field area occurred across arms over follow-up. Visual search training produced significant improvement in vision-related quality of life. Prism therapy produced adverse events in 69%. The visual search training results warrant further investigation. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
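
    For readers curious how a per-arm figure like this is reached, the sketch below runs a standard two-sample sample-size calculation for a continuous outcome (normal approximation, two-sided test). The significance level and the assumed standard deviation are illustrative assumptions only; the abstract does not report the SD or the exact method used by the trial statisticians.

    ```python
    from scipy.stats import norm

    def n_per_arm(delta, sd, alpha=0.05, power=0.90):
        """Approximate participants per arm for a two-sample comparison of means."""
        z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for a two-sided alpha of 0.05
        z_beta = norm.ppf(power)            # ~1.28 for 90% power
        return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

    # Illustrative only: detect a 200 degree^2 difference, assuming an SD of 700 degree^2.
    print(round(n_per_arm(delta=200, sd=700)))   # ~257 per arm under these assumptions; the trial reports 269
    ```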

  11. The effects of task difficulty on visual search strategy in virtual 3D displays

    PubMed Central

    Pomplun, Marc; Garaas, Tyler W.; Carrasco, Marisa

    2013-01-01

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an “easy” conjunction search task and a “difficult” shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x−y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the “easy” task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the “difficult” task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend, in the easy task with the smallest displays, for initial saccades to be directed toward the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539

  12. Active visual search in non-stationary scenes: coping with temporal variability and uncertainty

    NASA Astrophysics Data System (ADS)

    Ušćumlić, Marija; Blankertz, Benjamin

    2016-02-01

    Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamics of visual content can increase the temporal uncertainty of cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings yielded promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamics of ocular behavior (i.e., dwell time and fixation duration) in an active search task. In addition, our method to improve single-trial detection performance in this adverse scenario is an important step toward making brain-computer interfacing technology available for human-computer interaction applications.

  13. The effects of link format and screen location on visual search of web pages.

    PubMed

    Ling, Jonathan; Van Schaik, Paul

    2004-06-22

    Navigation of web pages is of critical importance to the usability of web-based systems such as the World Wide Web and intranets. The primary means of navigation is through the use of hyperlinks. However, few studies have examined the impact of the presentation format of these links on visual search. The present study used a two-factor mixed measures design to investigate whether there was an effect of link format (plain text, underlined, bold, or bold and underlined) upon speed and accuracy of visual search and subjective measures in both the navigation and content areas of web pages. An effect of link format on speed of visual search for both hits and correct rejections was found. This effect was observed in the navigation and the content areas. Link format did not influence accuracy in either screen location. Participants showed highest preference for links that were in bold and underlined, regardless of screen area. These results are discussed in the context of visual search processes and design recommendations are given.

  14. Why are there eccentricity effects in visual search? Visual and attentional hypotheses.

    PubMed

    Wolfe, J M; O'Neill, P; Bennett, S C

    1998-01-01

    In standard visual search experiments, observers search for a target item among distracting items. The locations of target items are generally random within the display and ignored as a factor in data analysis. Previous work has shown that targets presented near fixation are, in fact, found more efficiently than are targets presented at more peripheral locations. This paper proposes that the primary cause of this "eccentricity effect" (Carrasco, Evert, Chang, & Katz, 1995) is an attentional bias that allocates attention preferentially to central items. The first four experiments dealt with the possibility that visual, and not attentional, factors underlie the eccentricity effect. They showed that the eccentricity effect cannot be accounted for by the peripheral reduction in visual sensitivity, peripheral crowding, or cortical magnification. Experiment 5 tested the attention allocation model and also showed that RT x set size effects can be independent of eccentricity effects. Experiment 6 showed that the effective set size in a search task depends, in part, on the eccentricity of the target because observers search from fixation outward.

  15. The Conceptual Grouping Effect: Categories Matter (and Named Categories Matter More)

    ERIC Educational Resources Information Center

    Lupyan, Gary

    2008-01-01

    Do conceptual categories affect basic visual processing? A conceptual grouping effect for familiar stimuli is reported using a visual search paradigm. Search through conceptually-homogeneous non-targets was faster and more efficient than search through conceptually-heterogeneous non-targets. This effect cannot be attributed to perceptual factors…

  16. Scene analysis for effective visual search in rough three-dimensional-modeling scenes

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Hu, Xiaopeng

    2016-11-01

    Visual search is a fundamental technology in the computer vision community. It is difficult to find an object in complex scenes when there exist similar distracters in the background. We propose a target search method in rough three-dimensional-modeling scenes based on a vision salience theory and camera imaging model. We give the definition of salience of objects (or features) and explain the way that salience measurements of objects are calculated. Also, we present one type of search path that guides to the target through salience objects. Along the search path, when the previous objects are localized, the search region of each subsequent object decreases, which is calculated through imaging model and an optimization method. The experimental results indicate that the proposed method is capable of resolving the ambiguities resulting from distracters containing similar visual features with the target, leading to an improvement of search speed by over 50%.

  17. Identifying a "default" visual search mode with operant conditioning.

    PubMed

    Kawahara, Jun-ichiro

    2010-09-01

    The presence of a singleton in a task-irrelevant domain can impair visual search. This impairment, known as attentional capture, depends on the attentional set of the participants. When narrowly searching for a specific feature (the feature search mode), only matching stimuli capture attention. When searching broadly (the singleton detection mode), any oddball captures attention. The present study examined which strategy represents the "default" mode using an operant conditioning approach in which participants were trained, in the absence of explicit instructions, to search for a target in an ambiguous context in which one of two modes was available. The results revealed that participants behaviorally adopted singleton detection as the default mode but reported using the feature search mode. Conscious strategies did not eliminate capture. These results challenge the view that a conscious set always modulates capture, suggesting that the visual system tends to rely on stimulus salience to deploy attention.

  18. What are the Shapes of Response Time Distributions in Visual Search?

    PubMed Central

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure reaction time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays in each of three classic search tasks: feature search, with the target defined by color; conjunction search, with the target defined by both color and orientation; and spatial configuration search for a 2 among distractor 5s. This large data set allows us to characterize the RT distributions in detail. We present the raw RT distributions and fit several psychologically motivated functions (ex-Gaussian, ex-Wald, Gamma, and Weibull) to the data. We analyze and interpret parameter trends from these four functions within the context of theories of visual search. PMID:21090905
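
    As a concrete illustration of the kind of fit reported above, the following sketch fits an ex-Gaussian to a vector of reaction times using SciPy's exponnorm distribution, mapping its parameters to the conventional mu, sigma and tau (mu = loc, sigma = scale, tau = K * scale). The simulated data are placeholders, not the authors' dataset or fitting pipeline.

    ```python
    import numpy as np
    from scipy.stats import exponnorm

    rng = np.random.default_rng(0)

    # Placeholder RTs (in seconds): a Gaussian stage plus an exponential tail.
    rts = rng.normal(0.45, 0.05, 2000) + rng.exponential(0.15, 2000)

    # Maximum-likelihood fit of the ex-Gaussian (exponentially modified Gaussian).
    K, loc, scale = exponnorm.fit(rts)
    mu, sigma, tau = loc, scale, K * scale   # map SciPy's parameters to mu / sigma / tau

    print(f"mu = {mu:.3f} s, sigma = {sigma:.3f} s, tau = {tau:.3f} s")
    ```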

  19. Temporal and peripheral extraction of contextual cues from scenes during visual search.

    PubMed

    Koehler, Kathryn; Eckstein, Miguel P

    2017-02-01

    Scene context is known to facilitate object recognition and guide visual search, but little work has focused on isolating image-based cues and evaluating their contributions to eye movement guidance and search performance. Here, we explore three types of contextual cues (a co-occurring object, the configuration of other objects, and the superordinate category of background elements) and assess their joint contributions to search performance in the framework of cue-combination and the temporal unfolding of their extraction. We also assess whether observers' ability to extract each contextual cue in the visual periphery is a bottleneck that determines the utilization and contribution of each cue to search guidance and decision accuracy. We find that during the first four fixations of a visual search task observers first utilize the configuration of objects for coarse eye movement guidance and later use co-occurring object information for finer guidance. In the absence of contextual cues, observers were suboptimally biased to report the target object as being absent. The presence of the co-occurring object was the only contextual cue that had a significant effect in reducing decision bias. The early influence of object-based cues on eye movements is corroborated by a clear demonstration of observers' ability to extract object cues up to 16° into the visual periphery. The joint contributions of the cues to decision search accuracy approximates that expected from the combination of statistically independent cues and optimal cue combination. Finally, the lack of utilization and contribution of the background-based contextual cue to search guidance cannot be explained by the availability of the contextual cue in the visual periphery; instead it is related to background cues providing the least inherent information about the precise location of the target in the scene.
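
    One standard way to spell out what "combination of statistically independent cues" predicts is in signal-detection terms, where the sensitivities of independent Gaussian cues add in quadrature. The sketch below uses made-up d' values purely for illustration; they are not the cue sensitivities measured in this study.

    ```python
    import numpy as np

    def combined_dprime(dprimes):
        """Optimal combination of statistically independent Gaussian cues:
        sensitivities add in quadrature, d'_comb = sqrt(sum of d_i'^2)."""
        d = np.asarray(dprimes, dtype=float)
        return float(np.sqrt(np.sum(d ** 2)))

    # Hypothetical sensitivities for a co-occurring object, the object configuration,
    # and the background category (illustrative numbers only).
    print(round(combined_dprime([1.2, 0.8, 0.3]), 2))   # -> 1.47
    ```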

  20. Does linear separability really matter? Complex visual search is explained by simple search

    PubMed Central

    Vighneshvel, T.; Arun, S. P.

    2013-01-01

    Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search. PMID:24029822

  1. Examining perceptual and conceptual set biases in multiple-target visual search.

    PubMed

    Biggs, Adam T; Adamo, Stephen H; Dowd, Emma Wu; Mitroff, Stephen R

    2015-04-01

    Visual search is a common practice conducted countless times every day, and one important aspect of visual search is that multiple targets can appear in a single search array. For example, an X-ray image of airport luggage could contain both a water bottle and a gun. Searchers are more likely to miss additional targets after locating a first target in multiple-target searches, which presents a potential problem: If airport security officers were to find a water bottle, would they then be more likely to miss a gun? One hypothetical cause of multiple-target search errors is that searchers become biased to detect additional targets that are similar to a found target, and therefore become less likely to find additional targets that are dissimilar to the first target. This particular hypothesis has received theoretical, but little empirical, support. In the present study, we tested the bounds of this idea by utilizing "big data" obtained from the mobile application Airport Scanner. Multiple-target search errors were substantially reduced when the two targets were identical, suggesting that the first-found target did indeed create biases during subsequent search. Further analyses delineated the nature of the biases, revealing both a perceptual set bias (i.e., a bias to find additional targets with features similar to those of the first-found target) and a conceptual set bias (i.e., a bias to find additional targets with a conceptual relationship to the first-found target). These biases are discussed in terms of the implications for visual-search theories and applications for professional visual searchers.

  2. Colour Polymorphism Protects Prey Individuals and Populations Against Predation.

    PubMed

    Karpestam, Einat; Merilaita, Sami; Forsman, Anders

    2016-02-23

    Colour pattern polymorphism in animals can influence and be influenced by interactions between predators and prey. However, few studies have examined whether polymorphism is adaptive, and there is no evidence that the co-occurrence of two or more natural prey colour variants can increase survival of populations. Here we show that visual predators that exploit polymorphic prey suffer from reduced performance, and further provide rare evidence in support of the hypothesis that prey colour polymorphism may afford protection against predators for both individuals and populations. This protective effect provides a probable explanation for the longstanding, evolutionary puzzle of the existence of colour polymorphisms. We also propose that this protective effect can provide an adaptive explanation for search image formation in predators rather than search image formation explaining polymorphism.

  3. Colour Polymorphism Protects Prey Individuals and Populations Against Predation

    PubMed Central

    Karpestam, Einat; Merilaita, Sami; Forsman, Anders

    2016-01-01

    Colour pattern polymorphism in animals can influence and be influenced by interactions between predators and prey. However, few studies have examined whether polymorphism is adaptive, and there is no evidence that the co-occurrence of two or more natural prey colour variants can increase survival of populations. Here we show that visual predators that exploit polymorphic prey suffer from reduced performance, and further provide rare evidence in support of the hypothesis that prey colour polymorphism may afford protection against predators for both individuals and populations. This protective effect provides a probable explanation for the longstanding, evolutionary puzzle of the existence of colour polymorphisms. We also propose that this protective effect can provide an adaptive explanation for search image formation in predators rather than search image formation explaining polymorphism. PMID:26902799

  4. The mechanisms underlying the ASD advantage in visual search

    PubMed Central

    Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S.; Blaser, Erik

    2013-01-01

    A number of studies have demonstrated that individuals with Autism Spectrum Disorders (ASD) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin & Frith, 2005; Simmons, et al., 2009). This “ASD advantage” was first identified in the domain of visual search by Plaisted and colleagues (Plaisted, O’Riordan, & Baron-Cohen, 1998). Here we survey the findings of visual search studies from the past 15 years that contrasted the performance of individuals with and without ASD. Although there are some minor caveats, the overall consensus is that - across development and a broad range of symptom severity - individuals with ASD reliably outperform controls on visual search. The etiology of the ASD advantage has not been formally specified, but has been commonly attributed to ‘enhanced perceptual discrimination’, a superior ability to visually discriminate between targets and distractors in such tasks (e.g. O’Riordan, 2000). As well, there is considerable evidence for impairments of the attentional network in ASD (for a review, see Keehn, Muller, & Townsend, 2013). We discuss some recent results from our laboratory that support an attentional, rather than perceptual explanation for the ASD advantage in visual search. We speculate that this new conceptualization may offer a better understanding of some of the behavioral symptoms associated with ASD, such as over-focusing and restricted interests. PMID:24091470

  5. Development of a computerized visual search test.

    PubMed

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-09-01

    Visual attention and visual search are the features of visual perception, essential for attending and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information including the format of the test will be described. The test was designed to provide an alternative to existing cancellation tests. Data from two pilot studies will be reported that examined some aspects of the test's validity. To date, our assessment of the test shows that it discriminates between healthy and head-injured persons. More research and development work is required to examine task performance changes in relation to task complexity. It is suggested that the conceptual design for the test is worthy of further investigation.

  6. Searching Electronic Health Records for Temporal Patterns in Patient Histories: A Case Study with Microsoft Amalga

    PubMed Central

    Plaisant, Catherine; Lam, Stanley; Shneiderman, Ben; Smith, Mark S.; Roseman, David; Marchand, Greg; Gillam, Michael; Feied, Craig; Handler, Jonathan; Rappaport, Hank

    2008-01-01

    As electronic health records (EHR) become more widespread, they enable clinicians and researchers to pose complex queries that can benefit immediate patient care and deepen understanding of medical treatment and outcomes. However, current query tools make complex temporal queries difficult to pose, and physicians have to rely on computer professionals to specify the queries for them. This paper describes our efforts to develop a novel query tool implemented in a large operational system at the Washington Hospital Center (Microsoft Amalga, formerly known as Azyxxi). We describe our design of the interface to specify temporal patterns and the visual presentation of results, and report on a pilot user study looking for adverse reactions following radiology studies using contrast. PMID:18999158

  7. Use of an augmented-vision device for visual search by patients with tunnel vision

    PubMed Central

    Luo, Gang; Peli, Eli

    2006-01-01

    Purpose To study the effect of an augmented-vision device that superimposes minified contour images over natural vision on visual search performance of patients with tunnel vision. Methods Twelve subjects with tunnel vision searched for targets presented outside their visual fields (VF) on a blank background under three cue conditions (with contour cues provided by the device, with auditory cues, and without cues). Three subjects (VF: 8° to 11° wide) carried out the search over a 90°×74° area, and nine subjects (VF: 7° to 16° wide) over a 66°×52° area. Eye and head movements were recorded for performance analyses that included directness of search path, search time, and gaze speed. Results Directness of the search path was greatly and significantly improved when the contour or auditory cues were provided in both the larger and smaller area search. When using the device, a significant reduction in search time (28%~74%) was demonstrated by all 3 subjects in the larger area search and by subjects with VF wider than 10° in the smaller area search (average 22%). Directness and the gaze speed accounted for 90% of the variability of search time. Conclusions While performance improvement with the device for the larger search area was obvious, whether it was helpful for the smaller search area depended on VF and gaze speed. As improvement in directness was demonstrated, increased gaze speed, which could result from further training and adaptation to the device, might enable patients with small VFs to benefit from the device for visual search tasks. PMID:16936136
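
    Search-path "directness" of the kind analysed here is commonly quantified as the ratio of the straight-line distance from the starting gaze position to the target over the length of the gaze path actually travelled (1 = perfectly direct). The sketch below computes such an index from a sequence of gaze samples; it is a generic formulation, not necessarily the exact metric used in this study.

    ```python
    import numpy as np

    def directness(gaze_xy, target_xy):
        """Straight-line distance (start -> target) divided by the travelled
        gaze-path length; values near 1 indicate a direct search path."""
        gaze = np.asarray(gaze_xy, dtype=float)
        path_length = np.sum(np.linalg.norm(np.diff(gaze, axis=0), axis=1))
        straight = np.linalg.norm(np.asarray(target_xy, dtype=float) - gaze[0])
        return straight / path_length if path_length > 0 else np.nan

    # Gaze samples in degrees (illustrative): a slightly meandering path to a target at (30, 10).
    samples = [(0, 0), (8, 3), (15, 2), (22, 7), (30, 10)]
    print(round(directness(samples, (30, 10)), 2))   # ~0.97
    ```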

  8. Are visual cue masking and removal techniques equivalent for studying perceptual skills in sport?

    PubMed

    Mecheri, Sami; Gillet, Eric; Thouvarecq, Regis; Leroy, David

    2011-01-01

    The spatial-occlusion paradigm makes use of two techniques (masking and removing visual cues) to provide information about the anticipatory cues used by viewers. The visual scene resulting from the removal technique appears to be incongruous, yet the assumption that the two techniques are equivalent has become widespread. The present study was designed to address this issue by combining eye-movement recording with the two types of occlusion (removal versus masking) in a tennis serve-return task. Response accuracy and decision onsets were analysed. The results indicated that subjects had longer reaction times under the removal condition, with an identical proportion of correct responses. Also, the removal technique caused the subjects to rely on atypical search patterns. Our findings suggest that, when the removal technique was used, viewers were unable to systematically rely on stored memories to help them accomplish the interception task. The persistent failure to question some of the assumptions about the removal technique in applied visual research is highlighted, and suggestions for continued use of the masking technique are advanced.

  9. Common neural substrates for visual working memory and attention.

    PubMed

    Mayer, Jutta S; Bittner, Robert A; Nikolić, Danko; Bledowski, Christoph; Goebel, Rainer; Linden, David E J

    2007-06-01

    Humans are severely limited in their ability to memorize visual information over short periods of time. Selective attention has been implicated as a limiting factor. Here we used functional magnetic resonance imaging to test the hypothesis that this limitation is due to common neural resources shared by visual working memory (WM) and selective attention. We combined visual search and delayed discrimination of complex objects and independently modulated the demands on selective attention and WM encoding. Participants were presented with a search array and performed easy or difficult visual search in order to encode one or three complex objects into visual WM. Overlapping activation for attention-demanding visual search and WM encoding was observed in distributed posterior and frontal regions. In the right prefrontal cortex and bilateral insula, blood-oxygen-level-dependent activation increased additively with WM load and attentional demand. Conversely, several visual, parietal and premotor areas showed overlapping activation for the two task components and were severely reduced in their WM load response under the condition with high attentional demand. Regions in the left prefrontal cortex were selectively responsive to WM load. Areas selectively responsive to high attentional demand were found within the right prefrontal and bilateral occipital cortex. These results indicate that encoding into visual WM and visual selective attention rely to a large degree on access to common neural resources. We propose that competition for resources shared by visual attention and WM encoding can limit processing capabilities in distributed posterior brain regions.

  10. Serial vs. parallel models of attention in visual search: accounting for benchmark RT-distributions.

    PubMed

    Moran, Rani; Zehetleitner, Michael; Liesefeld, Heinrich René; Müller, Hermann J; Usher, Marius

    2016-10-01

    Visual search is central to the investigation of selective visual attention. Classical theories propose that items are identified by serially deploying focal attention to their locations. While this accounts for set-size effects over a continuum of task difficulties, it has been suggested that parallel models can account for such effects equally well. We compared the serial Competitive Guided Search model with a parallel model in their ability to account for RT distributions and error rates from a large visual search data-set featuring three classical search tasks: 1) a spatial configuration search (2 vs. 5); 2) a feature-conjunction search; and 3) a unique feature search (Wolfe, Palmer & Horowitz Vision Research, 50(14), 1304-1311, 2010). In the parallel model, each item is represented by a diffusion to two boundaries (target-present/absent); the search corresponds to a parallel race between these diffusors. The parallel model was highly flexible in that it allowed both for a parametric range of capacity-limitation and for set-size adjustments of identification boundaries. Furthermore, a quit unit allowed for a continuum of search-quitting policies when the target is not found, with "single-item inspection" and exhaustive searches comprising its extremes. The serial model was found to be superior to the parallel model, even before penalizing the parallel model for its increased complexity. We discuss the implications of the results and the need for future studies to resolve the debate.
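
    To make the parallel architecture concrete, the toy simulation below races per-item diffusion processes, each drifting toward a "this is the target" or "this is a distractor" boundary; a trial ends when any item reaches the target boundary (respond present) or all items reach the distractor boundary (respond absent). The parameters are arbitrary, and the sketch omits the capacity limits, boundary adjustments, and quit unit of the models actually compared in the paper.

    ```python
    import numpy as np

    def parallel_race_trial(n_items, target_present, drift=0.02, noise=0.1,
                            bound=1.0, dt=1.0, max_steps=5000, rng=None):
        """One simulated trial: items accumulate evidence in parallel toward a
        +bound ('target') or -bound ('distractor') decision."""
        rng = rng or np.random.default_rng()
        drifts = np.full(n_items, -drift)       # distractors drift downward
        if target_present:
            drifts[0] = drift                   # the target drifts upward
        x = np.zeros(n_items)
        done = np.zeros(n_items, dtype=bool)
        for t in range(1, max_steps + 1):
            x[~done] += drifts[~done] * dt + rng.normal(0, noise, (~done).sum())
            if np.any(x >= bound):              # any item hits the target boundary
                return t, "present"
            done |= x <= -bound                 # items judged distractors drop out
            if done.all():                      # all items rejected
                return t, "absent"
        return max_steps, "timeout"

    rng = np.random.default_rng(1)
    rts = [parallel_race_trial(8, True, rng=rng)[0] for _ in range(200)]
    print(f"mean simulated target-present RT (steps): {np.mean(rts):.1f}")
    ```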

  11. The role of memory for visual search in scenes

    PubMed Central

    Võ, Melissa Le-Hoa; Wolfe, Jeremy M.

    2014-01-01

    Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. While a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes. PMID:25684693

  12. Beyond the search surface: visual search and attentional engagement.

    PubMed

    Duncan, J; Humphreys, G

    1992-05-01

    Treisman (1991) described a series of visual search studies testing feature integration theory against an alternative (Duncan & Humphreys, 1989) in which feature and conjunction search are basically similar. Here the latter account is noted to have 2 distinct levels: (a) a summary of search findings in terms of stimulus similarities, and (b) a theory of how visual attention is brought to bear on relevant objects. Working at the 1st level, Treisman found that even when similarities were calibrated and controlled, conjunction search was much harder than feature search. The theory, however, can only really be tested at the 2nd level, because the 1st is an approximation. An account of the findings is developed at the 2nd level, based on the 2 processes of input-template matching and spreading suppression. New data show that, when both of these factors are controlled, feature and conjunction search are equally difficult. Possibilities for unification of the alternative views are considered.

  13. Eye movements and the span of the effective stimulus in visual search.

    PubMed

    Bertera, J H; Rayner, K

    2000-04-01

    The span of the effective stimulus during visual search through an unstructured alphanumeric array was investigated by using eye-contingent-display changes while the subjects searched for a target letter. In one condition, a window exposing the search array moved in synchrony with the subjects' eye movements, and the size of the window was varied. Performance reached asymptotic levels when the window was 5 degrees. In another condition, a foveal mask moved in synchrony with each eye movement, and the size of the mask was varied. The foveal mask conditions were much more detrimental to search behavior than the window conditions, indicating the importance of foveal vision during search. The size of the array also influenced performance, but performance reached asymptote for all array sizes tested at the same window size, and the effect of the foveal mask was the same for all array sizes. The results indicate that both acuity and difficulty of the search task influenced the span of the effective stimulus during visual search.

  14. Context matters: the structure of task goals affects accuracy in multiple-target visual search.

    PubMed

    Clark, Kait; Cain, Matthew S; Adcock, R Alison; Mitroff, Stephen R

    2014-05-01

    Career visual searchers such as radiologists and airport security screeners strive to conduct accurate visual searches, but despite extensive training, errors still occur. A key difference between searches in radiology and airport security is the structure of the search task: Radiologists typically scan a certain number of medical images (fixed objective), and airport security screeners typically search X-rays for a specified time period (fixed duration). Might these structural differences affect accuracy? We compared performance on a search task administered either under constraints that approximated radiology or airport security. Some displays contained more than one target because the presence of multiple targets is an established source of errors for career searchers, and accuracy for additional targets tends to be especially sensitive to contextual conditions. Results indicate that participants searching within the fixed objective framework produced more multiple-target search errors; thus, adopting a fixed duration framework could improve accuracy for career searchers. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  15. The role of memory for visual search in scenes.

    PubMed

    Le-Hoa Võ, Melissa; Wolfe, Jeremy M

    2015-03-01

    Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes. © 2015 New York Academy of Sciences.

  16. Applying visual attention theory to transportation safety research and design: evaluation of alternative automobile rear lighting systems.

    PubMed

    McIntyre, Scott E; Gugerty, Leo

    2014-06-01

    This field experiment takes a novel approach in applying methodologies and theories of visual search to the subject of conspicuity in automobile rear lighting. Traditional rear lighting research has not used the visual search paradigm in experimental design. It is our claim that the visual search design uniquely uncovers visual attention processes operating when drivers search the visual field that current designs fail to capture. This experiment is a validation and extension of previous simulator research on this same topic and demonstrates that detection of red automobile brake lamps will be improved if tail lamps are another color (in this test, amber) rather than the currently mandated red. Results indicate that when drivers miss brake lamp onset in low ambient light, RT and error are reduced in detecting the presence and absence of red brake lamps with multiple lead vehicles when tail lamps are not red compared to current rear lighting which mandates red tail lamps. This performance improvement is attributed to efficient visual processing that automatically segregates tail (amber) and brake (red) lamp colors into distractors and targets respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. The problem of latent attentional capture: Easy visual search conceals capture by task-irrelevant abrupt onsets.

    PubMed

    Gaspelin, Nicholas; Ruthruff, Eric; Lien, Mei-Ching

    2016-08-01

    Researchers are sharply divided regarding whether irrelevant abrupt onsets capture spatial attention. Numerous studies report that they do and a roughly equal number report that they do not. This puzzle has inspired numerous attempts at reconciliation, none gaining general acceptance. The authors propose that abrupt onsets routinely capture attention, but the size of observed capture effects depends critically on how long attention dwells on distractor items which, in turn, depends critically on search difficulty. In a series of spatial cuing experiments, the authors show that irrelevant abrupt onsets produce robust capture effects when visual search is difficult, but not when search is easy. Critically, this effect occurs even when search difficulty varies randomly across trials, preventing any strategic adjustments of the attentional set that could modulate probability of capture by the onset cue. The authors argue that easy visual search provides an insensitive test for stimulus-driven capture by abrupt onsets: even though onsets truly capture attention, the effects of capture can be latent. This observation helps to explain previous failures to find capture by onsets, nearly all of which used an easy visual search. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. The Problem of Latent Attentional Capture: Easy Visual Search Conceals Capture by Task-Irrelevant Abrupt Onsets

    PubMed Central

    Gaspelin, Nicholas; Ruthruff, Eric; Lien, Mei-Ching

    2016-01-01

    Researchers are sharply divided regarding whether irrelevant abrupt onsets capture spatial attention. Numerous studies report that they do and a roughly equal number report that they do not. This puzzle has inspired numerous attempts at reconciliation, none gaining general acceptance. We propose that abrupt onsets routinely capture attention, but the size of observed capture effects depends critically on how long attention dwells on distractor items which, in turn, depends critically on search difficulty. In a series of spatial cuing experiments, we show that irrelevant abrupt onsets produce robust capture effects when visual search is difficult, but not when search is easy. Critically, this effect occurs even when search difficulty varies randomly across trials, preventing any strategic adjustments of the attentional set that could modulate probability of capture by the onset cue. We argue that easy visual search provides an insensitive test for stimulus-driven capture by abrupt onsets: even though onsets truly capture attention, the effects of capture can be latent. This observation helps to explain previous failures to find capture by onsets, nearly all of which employed an easy visual search. PMID:26854530

  19. Visual Search by Children with and without ADHD

    ERIC Educational Resources Information Center

    Mullane, Jennifer C.; Klein, Raymond M.

    2008-01-01

    Objective: To summarize the literature that has employed visual search tasks to assess automatic and effortful selective visual attention in children with and without ADHD. Method: Seven studies with a combined sample of 180 children with ADHD (M age = 10.9) and 193 normally developing children (M age = 10.8) are located. Results: Using a…

  20. Conjunctive Visual Search in Individuals with and without Mental Retardation

    ERIC Educational Resources Information Center

    Carlin, Michael; Chrysler, Christina; Sullivan, Kate

    2007-01-01

    A comprehensive understanding of the basic visual and cognitive abilities of individuals with mental retardation is critical for understanding the basis of mental retardation and for the design of remediation programs. We assessed visual search abilities in individuals with mild mental retardation and in MA- and CA-matched comparison groups. Our…

  1. Visual Search as a Tool for a Quick and Reliable Assessment of Cognitive Functions in Patients with Multiple Sclerosis

    PubMed Central

    Utz, Kathrin S.; Hankeln, Thomas M. A.; Jung, Lena; Lämmer, Alexandra; Waschbisch, Anne; Lee, De-Hyung; Linker, Ralf A.; Schenk, Thomas

    2013-01-01

    Background Despite the high frequency of cognitive impairment in multiple sclerosis, its assessment has not yet gained entrance into clinical routine, due to a lack of time-saving and suitable tests for patients with multiple sclerosis. Objective The aim of the study was to compare the paradigm of visual search with neuropsychological standard tests, in order to identify the test that discriminates best between patients with multiple sclerosis and healthy individuals with respect to cognitive functions, without being susceptible to practice effects. Methods Patients with relapsing-remitting multiple sclerosis (n = 38) and age- and gender-matched healthy individuals (n = 40) were tested with common neuropsychological tests and a computer-based visual search task, whereby a target stimulus has to be detected amongst distracting stimuli on a touch screen. Twenty-eight of the healthy individuals were re-tested in order to determine potential practice effects. Results Mean reaction time, reflecting visual attention, and movement time, indicating motor execution, in the visual search task discriminated best between healthy individuals and patients with multiple sclerosis, without practice effects. Conclusions Visual search is a promising instrument for the assessment of cognitive functions and potentially of cognitive changes in patients with multiple sclerosis, thanks to its good discriminatory power and insusceptibility to practice effects. PMID:24282604

  2. Contextual Cueing in Multiconjunction Visual Search Is Dependent on Color- and Configuration-Based Intertrial Contingencies

    ERIC Educational Resources Information Center

    Geyer, Thomas; Shi, Zhuanghua; Muller, Hermann J.

    2010-01-01

    Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…

  3. Changing Perspective: Zooming in and out during Visual Search

    ERIC Educational Resources Information Center

    Solman, Grayden J. F.; Cheyne, J. Allan; Smilek, Daniel

    2013-01-01

    Laboratory studies of visual search are generally conducted in contexts with a static observer vantage point, constrained by a fixation cross or a headrest. In contrast, in many naturalistic search settings, observers freely adjust their vantage point by physically moving through space. In two experiments, we evaluate behavior during free vantage…

  4. Why Is Visual Search Superior in Autism Spectrum Disorder?

    ERIC Educational Resources Information Center

    Joseph, Robert M.; Keehn, Brandon; Connolly, Christine; Wolfe, Jeremy M.; Horowitz, Todd S.

    2009-01-01

    This study investigated the possibility that enhanced memory for rejected distractor locations underlies the superior visual search skills exhibited by individuals with autism spectrum disorder (ASD). We compared the performance of 21 children with ASD and 21 age- and IQ-matched typically developing (TD) children in a standard static search task…

  5. Insights into the Control of Attentional Set in ADHD Using the Attentional Blink Paradigm

    ERIC Educational Resources Information Center

    Mason, Deanna J.; Humphreys, Glyn W.; Kent, Lindsey

    2005-01-01

    Background: Previous work on visual selective attention in Attention Deficit Hyperactivity Disorder (ADHD) has utilised spatial search paradigms. This study compared ADHD to control children on a temporal search task using Rapid Serial Visual Presentation (RSVP). In addition, the effects of irrelevant singleton distractors on search performance…

  6. Central and Peripheral Vision Loss Differentially Affects Contextual Cueing in Visual Search

    ERIC Educational Resources Information Center

    Geringswald, Franziska; Pollmann, Stefan

    2015-01-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental…

  7. Eye movements, visual search and scene memory, in an immersive virtual environment.

    PubMed

    Kit, Dmitry; Katz, Leor; Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary

    2014-01-01

    Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, however, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.

  8. Visual Pattern Analysis in Histopathology Images Using Bag of Features

    NASA Astrophysics Data System (ADS)

    Cruz-Roa, Angel; Caicedo, Juan C.; González, Fabio A.

    This paper presents a framework to analyse visual patterns in a collection of medical images in a two-stage procedure. First, a set of representative visual patterns from the image collection is obtained by constructing a visual-word dictionary under a bag-of-features approach. Second, an analysis of the relationships between visual patterns and semantic concepts in the image collection is performed. The most important visual patterns for each semantic concept are identified using correlation analysis. A matrix visualization of the structure and organization of the image collection is generated using a cluster analysis. The experimental evaluation was conducted on a histopathology image collection, and the results showed clear relationships between visual patterns and semantic concepts that, in addition, are easy to interpret and understand.
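
    The two-stage procedure described above can be sketched in a few lines: build a visual-word dictionary by clustering local patch descriptors, represent each image as a histogram of visual-word counts, and then correlate word frequencies with concept labels. The sketch below uses random arrays as stand-in descriptors; the feature extractor, dictionary size, and labels are placeholders rather than the authors' setup.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Stage 1: dictionary of visual words from pooled patch descriptors.
    # (Placeholder: 64-D random descriptors standing in for real image patches.)
    descriptors = rng.normal(size=(5000, 64))
    n_words = 50
    dictionary = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(descriptors)

    def bag_of_features(image_descriptors):
        """Normalised histogram of visual-word assignments for one image."""
        words = dictionary.predict(image_descriptors)
        return np.bincount(words, minlength=n_words) / len(words)

    # Stage 2: correlate each visual word's frequency with a binary semantic concept.
    images = [rng.normal(size=(200, 64)) for _ in range(40)]   # placeholder images
    histograms = np.array([bag_of_features(d) for d in images])
    concept_labels = rng.integers(0, 2, size=40)               # placeholder annotations

    correlations = np.array([np.corrcoef(histograms[:, w], concept_labels)[0, 1]
                             for w in range(n_words)])
    print("most concept-related visual words:", np.argsort(-np.abs(correlations))[:5])
    ```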

  9. Dual Target Search is Neither Purely Simultaneous nor Purely Successive.

    PubMed

    Cave, Kyle R; Menneer, Tamaryn; Nomani, Mohammad S; Stroud, Michael J; Donnelly, Nick

    2017-08-31

    Previous research shows that visual search for two different targets is less efficient than search for a single target. Stroud, Menneer, Cave and Donnelly (2012) concluded that two target colours are represented separately based on modeling the fixation patterns. Although those analyses provide evidence for two separate target representations, they do not show whether participants search simultaneously for both targets, or first search for one target and then the other. Some studies suggest that multiple target representations are simultaneously active, while others indicate that search can be voluntarily simultaneous, or switching, or a mixture of both. Stroud et al.'s participants were not explicitly instructed to use any particular strategy. These data were revisited to determine which strategy was employed. Each fixated item was categorised according to whether its colour was more similar to one target or the other. Once an item similar to one target is fixated, the next fixated item is more likely to be similar to that target than the other, showing that at a given moment during search, one target is generally favoured. However, the search for one target is not completed before search for the other begins. Instead, there are often short runs of one or two fixations to distractors similar to one target, with each run followed by a switch to the other target. Thus, the results suggest that one target is more highly weighted than the other at any given time, but not to the extent that search is purely successive.
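
    The reanalysis described here (categorising each fixated item by the target it most resembles and asking how often successive fixations stay with the same target) boils down to a simple run-length and switch-rate computation. The fixation sequence below is invented for illustration; it is not the Stroud et al. data.

    ```python
    from itertools import groupby

    # Each fixation labelled by the target colour its item most resembles (illustrative).
    fixations = ["A", "A", "B", "B", "B", "A", "B", "A", "A", "B", "B"]

    # Run lengths: consecutive fixations on items similar to the same target.
    runs = [len(list(group)) for _, group in groupby(fixations)]

    # Switch rate: proportion of transitions that move from one target to the other.
    transitions = list(zip(fixations, fixations[1:]))
    switch_rate = sum(a != b for a, b in transitions) / len(transitions)

    print("run lengths:", runs)               # [2, 3, 1, 1, 2, 2] for the sequence above
    print(f"switch rate: {switch_rate:.2f}")  # long runs / low switch rates suggest successive search
    ```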

  10. Effects of emotional and non-emotional cues on visual search in neglect patients: evidence for distinct sources of attentional guidance.

    PubMed

    Lucas, Nadia; Vuilleumier, Patrik

    2008-04-01

    In normal observers, visual search is facilitated for targets with salient attributes. We compared how two different types of cue (expression and colour) may influence search for face targets, in healthy subjects (n=27) and right brain-damaged patients with left spatial neglect (n=13). The target faces were defined by their identity (singleton among a crowd of neutral faces) but could either be neutral (like other faces), or have a different emotional expression (fearful or happy), or a different colour (red-tinted). Healthy subjects were the fastest for detecting the colour-cued targets, but also showed a significant facilitation for emotionally cued targets, relative to neutral faces differing from other distracter faces by identity only. Healthy subjects were also faster overall for target faces located on the left, as compared to the right side of the display. In contrast, neglect patients were slower to detect targets on the left (contralesional) relative to the right (ipsilesional) side. However, they showed the same pattern of cueing effects as healthy subjects on both sides of space; while their best performance was also found for faces cued by colour, they showed a significant advantage for faces cued by expression, relative to the neutral condition. These results indicate that despite impaired attention towards the left hemispace, neglect patients may still show an intact influence of both low-level colour cues and emotional expression cues on attention, suggesting that neural mechanisms responsible for these effects are partly separate from fronto-parietal brain systems controlling spatial attention during search.

  11. Visual working memory simultaneously guides facilitation and inhibition during visual search.

    PubMed

    Dube, Blaire; Basciano, April; Emrich, Stephen M; Al-Aidroos, Naseem

    2016-07-01

    During visual search, visual working memory (VWM) supports the guidance of attention in two ways: It stores the identity of the search target, facilitating the selection of matching stimuli in the search array, and it maintains a record of the distractors processed during search so that they can be inhibited. In two experiments, we investigated whether the full contents of VWM can be used to support both of these abilities simultaneously. In Experiment 1, participants completed a preview search task in which (a) a subset of search distractors appeared before the remainder of the search items, affording participants the opportunity to inhibit them, and (b) the search target varied from trial to trial, requiring the search target template to be maintained in VWM. We observed the established signature of VWM-based inhibition (reduced ability to ignore previewed distractors when the number of distractors exceeds VWM's capacity), suggesting that VWM can serve this role while also representing the target template. In Experiment 2, we replicated Experiment 1, but added to the search displays a singleton distractor that sometimes matched the color (a task-irrelevant feature) of the search target, to evaluate capture. We again observed the signature of VWM-based preview inhibition along with attentional capture by (and, thus, facilitation of) singletons matching the target template. These findings indicate that more than one VWM representation can bias attention at a time, and that these representations can separately affect selection through either facilitation or inhibition, placing constraints on existing models of the VWM-based guidance of attention.

  12. Acute exercise and aerobic fitness influence selective attention during visual search.

    PubMed

    Bullock, Tom; Giesbrecht, Barry

    2014-01-01

    Successful goal directed behavior relies on a human attention system that is flexible and able to adapt to different conditions of physiological stress. However, the effects of physical activity on multiple aspects of selective attention, and whether these effects are mediated by aerobic capacity, remain unclear. The aim of the present study was to investigate the effects of a prolonged bout of physical activity on visual search performance and perceptual distraction. Two groups of participants completed a hybrid visual search flanker/response competition task in an initial baseline session and then at 17-min intervals over a 2 h 16 min test period. Participants assigned to the exercise group engaged in steady-state aerobic exercise between completing blocks of the visual task, whereas participants assigned to the control group rested in between blocks. The key result was a correlation between individual differences in aerobic capacity and visual search performance, such that those individuals that were more fit performed the search task more quickly. Critically, this relationship only emerged in the exercise group after the physical activity had begun. The relationship was not present in either group at baseline and never emerged in the control group during the test period, suggesting that under these task demands, aerobic capacity may be an important determinant of visual search performance under physical stress. The results enhance current understanding about the relationship between exercise and cognition, and also inform current models of selective attention.

  13. Acute exercise and aerobic fitness influence selective attention during visual search

    PubMed Central

    Bullock, Tom; Giesbrecht, Barry

    2014-01-01

    Successful goal-directed behavior relies on a human attention system that is flexible and able to adapt to different conditions of physiological stress. However, the effects of physical activity on multiple aspects of selective attention, and whether these effects are mediated by aerobic capacity, remain unclear. The aim of the present study was to investigate the effects of a prolonged bout of physical activity on visual search performance and perceptual distraction. Two groups of participants completed a hybrid visual search flanker/response competition task in an initial baseline session and then at 17-min intervals over a 2 h 16 min test period. Participants assigned to the exercise group engaged in steady-state aerobic exercise between completing blocks of the visual task, whereas participants assigned to the control group rested in between blocks. The key result was a correlation between individual differences in aerobic capacity and visual search performance, such that individuals who were more fit performed the search task more quickly. Critically, this relationship only emerged in the exercise group after the physical activity had begun. The relationship was not present in either group at baseline and never emerged in the control group during the test period, suggesting that under these task demands, aerobic capacity may be an important determinant of visual search performance under physical stress. The results enhance current understanding about the relationship between exercise and cognition, and also inform current models of selective attention. PMID:25426094

  14. iPixel: a visual content-based and semantic search engine for retrieving digitized mammograms by using collective intelligence.

    PubMed

    Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A

    2012-09-01

    Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis.
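
    The abstract above describes region-level comparisons between mammograms (number of regions, minimum and maximum region size, and mean intensity per region). The short sketch below illustrates that style of comparison on arbitrary grayscale arrays; the binarization threshold, descriptor layout, and distance measure are illustrative assumptions and do not reproduce the published iPixel algorithm.

    ```python
    import numpy as np
    from scipy import ndimage

    def region_descriptor(image, threshold=0.5):
        """Summarize an image by its bright regions: count, min/max size, mean intensity.

        The threshold is an illustrative binarization level, not a value from the paper.
        """
        mask = image > threshold
        labels, n_regions = ndimage.label(mask)               # connected-component labelling
        if n_regions == 0:
            return np.zeros(4)
        idx = np.arange(1, n_regions + 1)
        sizes = np.asarray(ndimage.sum(mask, labels, idx))    # pixels per region
        means = np.asarray(ndimage.mean(image, labels, idx))  # mean intensity per region
        return np.array([n_regions, sizes.min(), sizes.max(), means.mean()])

    def region_distance(query, candidate):
        """Smaller value = more similar region structure (L2 on crudely scaled descriptors)."""
        q, c = region_descriptor(query), region_descriptor(candidate)
        scale = np.maximum(np.abs(q), 1e-9)                   # per-feature normalization
        return float(np.linalg.norm((q - c) / scale))

    # Usage: rank toy candidate images against a toy query image.
    rng = np.random.default_rng(0)
    query = rng.random((64, 64))
    candidates = [rng.random((64, 64)) for _ in range(5)]
    order = sorted(range(len(candidates)), key=lambda i: region_distance(query, candidates[i]))
    print("most similar candidate:", order[0])
    ```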

  15. The Art Gallery Test: A Preliminary Comparison between Traditional Neuropsychological and Ecological VR-Based Tests.

    PubMed

    Gamito, Pedro; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Rosa, Pedro; Sousa, Tatiana; Maia, Ines; Morais, Diogo; Lopes, Paulo; Brito, Rodrigo

    2017-01-01

    Ecological validity should be the cornerstone of any assessment of cognitive functioning. For this purpose, we have developed a preliminary study to test the Art Gallery Test (AGT) as an alternative to traditional neuropsychological testing. The AGT involves three visual search subtests displayed in a virtual reality (VR) art gallery, designed to assess visual attention within an ecologically valid setting. To evaluate the relation between the AGT and standard neuropsychological assessment scales, data were collected on a normative sample of healthy adults (n = 30). The measures consisted of concurrent paper-and-pencil neuropsychological measures [Montreal Cognitive Assessment (MoCA), Frontal Assessment Battery (FAB), and Color Trails Test (CTT)] along with the outcomes from the three subtests of the AGT. The results showed significant correlations between the AGT subtests, which involve different visual search strategies, and both global and specific cognitive measures. Comparative visual search was associated with attention and cognitive flexibility (CTT), whereas visual searches involving pictograms correlated with global cognitive function (MoCA).

  16. Is Posner's "beam" the same as Treisman's "glue"?: On the relation between visual orienting and feature integration theory.

    PubMed

    Briand, K A; Klein, R M

    1987-05-01

    In the present study we investigated whether the visually allocated "beam" studied by Posner and others is the same visual attentional resource that performs the role of feature integration in Treisman's model. Subjects were cued to attend to a certain spatial location by a visual cue, and performance at expected and unexpected stimulus locations was compared. Subjects searched for a target letter (R) with distractor letters that either could give rise to illusory conjunctions (PQ) or could not (PB). Results from three separate experiments showed that orienting attention in response to central cues (endogenous orienting) produced similar effects for both conjunction and feature search. However, when attention was oriented with peripheral visual cues (exogenous orienting), conjunction search showed larger effects of attention than did feature search. It is suggested that the attentional systems that are oriented in response to central and peripheral cues may not be the same and that only the latter performs a role in feature integration. Possibilities for future research are discussed.

  17. Incidental learning speeds visual search by lowering response thresholds, not by improving efficiency: Evidence from eye movements

    PubMed Central

    Hout, Michael C.; Goldinger, Stephen D.

    2011-01-01

    When observers search for a target object, they incidentally learn the identities and locations of “background” objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays (Hout & Goldinger, 2010). Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory; Wolfe, 2007). In the current study, we asked two questions: 1) under what conditions does such incidental learning occur? And 2) what does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated non-target objects. Across conditions, the consistency of search sets and spatial layouts were manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent. PMID:21574743

  18. Genome contact map explorer: a platform for the comparison, interactive visualization and analysis of genome contact maps

    PubMed Central

    Kumar, Rajendra; Sobhy, Haitham

    2017-01-01

    Hi-C experiments generate data in the form of large genome contact maps (Hi-C maps). These show that chromosomes are arranged in a hierarchy of three-dimensional compartments. But to understand how these compartments form and how much they affect genetic processes such as gene regulation, biologists and bioinformaticians need efficient tools to visualize and analyze Hi-C data. This is technically challenging because the maps are very large. In this paper, we address this problem, partly by implementing an efficient file format, and present the genome contact map explorer platform. Apart from tools to process Hi-C data, such as normalization methods and a programmable interface, we provide a graphical interface that lets users browse, scroll and zoom Hi-C maps to visually search for patterns in the Hi-C data. The software also makes it possible to browse several maps simultaneously and to plot related genomic data. The software is openly accessible to the scientific community. PMID:28973466
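
    Because this entry turns on two technical points, compact storage of large contact matrices and their visualization, a minimal sketch follows. It uses a generic SciPy sparse matrix and a log-scaled Matplotlib heatmap on random toy data; it is not the file format or viewer implemented in the genome contact map explorer.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import sparse

    # Toy contact map: Hi-C matrices are mostly zeros, so a sparse representation
    # keeps memory small (the platform's own on-disk format is different).
    n_bins = 500
    rng = np.random.default_rng(1)
    rows = rng.integers(0, n_bins, 5000)
    cols = rng.integers(0, n_bins, 5000)
    counts = rng.poisson(3, 5000) + 1
    contacts = sparse.coo_matrix((counts, (rows, cols)), shape=(n_bins, n_bins))
    contacts = contacts + contacts.T                      # contact maps are symmetric

    # Log scaling tames the large dynamic range typical of contact counts.
    plt.imshow(np.log1p(contacts.toarray()), cmap="hot", origin="lower")
    plt.xlabel("genomic bin")
    plt.ylabel("genomic bin")
    plt.colorbar(label="log(1 + contact count)")
    plt.savefig("contact_map.png", dpi=150)
    ```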

  19. Failures of Perception in the Low-Prevalence Effect: Evidence From Active and Passive Visual Search

    PubMed Central

    Hout, Michael C.; Walenchok, Stephen C.; Goldinger, Stephen D.; Wolfe, Jeremy M.

    2017-01-01

    In visual search, rare targets are missed disproportionately often. This low-prevalence effect (LPE) is a robust problem with demonstrable societal consequences. What is the source of the LPE? Is it a perceptual bias against rare targets or a later process, such as premature search termination or motor response errors? In 4 experiments, we examined the LPE using standard visual search (with eye tracking) and 2 variants of rapid serial visual presentation (RSVP) in which observers made present/absent decisions after sequences ended. In all experiments, observers looked for 2 target categories (teddy bear and butterfly) simultaneously. To minimize simple motor errors, caused by repetitive absent responses, we held overall target prevalence at 50%, with 1 low-prevalence and 1 high-prevalence target type. Across conditions, observers either searched for targets among other real-world objects or searched for specific bears or butterflies among within-category distractors. We report 4 main results: (a) In standard search, high-prevalence targets were found more quickly and accurately than low-prevalence targets. (b) The LPE persisted in RSVP search, even though observers never terminated search on their own. (c) Eye-tracking analyses showed that high-prevalence targets elicited better attentional guidance and faster perceptual decisions. And (d) even when observers looked directly at low-prevalence targets, they often (12%–34% of trials) failed to detect them. These results strongly argue that low-prevalence misses represent failures of perception when early search termination or motor errors are controlled. PMID:25915073

  20. In search of the emotional face: anger versus happiness superiority in visual search.

    PubMed

    Savage, Ruth A; Lipp, Ottmar V; Craig, Belinda M; Becker, Stefanie I; Horstmann, Gernot

    2013-08-01

    Previous research has provided inconsistent results regarding visual search for emotional faces, yielding evidence for either anger superiority (i.e., more efficient search for angry faces) or happiness superiority effects (i.e., more efficient search for happy faces), suggesting that these results do not reflect on emotional expression, but on emotion (un-)related low-level perceptual features. The present study investigated possible factors mediating anger/happiness superiority effects; specifically search strategy (fixed vs. variable target search; Experiment 1), stimulus choice (Nimstim database vs. Ekman & Friesen database; Experiments 1 and 2), and emotional intensity (Experiment 3 and 3a). Angry faces were found faster than happy faces regardless of search strategy using faces from the Nimstim database (Experiment 1). By contrast, a happiness superiority effect was evident in Experiment 2 when using faces from the Ekman and Friesen database. Experiment 3 employed angry, happy, and exuberant expressions (Nimstim database) and yielded anger and happiness superiority effects, respectively, highlighting the importance of the choice of stimulus materials. Ratings of the stimulus materials collected in Experiment 3a indicate that differences in perceived emotional intensity, pleasantness, or arousal do not account for differences in search efficiency. Across three studies, the current investigation indicates that prior reports of anger or happiness superiority effects in visual search are likely to reflect on low-level visual features associated with the stimulus materials used, rather than on emotion. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  1. Exploring What’s Missing: What Do Target Absent Trials Reveal About Autism Search Superiority?

    PubMed Central

    Keehn, Brandon; Joseph, Robert M.

    2016-01-01

    We used eye-tracking to investigate the roles of enhanced discrimination and peripheral selection in superior visual search in autism spectrum disorder (ASD). Children with ASD were faster at visual search than their typically developing peers. However, group differences in performance and eye-movements did not vary with the level of difficulty of discrimination or selection. Rather, consistent with prior ASD research, group differences were mainly the effect of faster performance on target-absent trials. Eye-tracking revealed a lack of left-visual-field search asymmetry in ASD, which may confer an additional advantage when the target is absent. Lastly, ASD symptomatology was positively associated with search superiority, the mechanisms of which may shed light on the atypical brain organization that underlies social-communicative impairment in ASD. PMID:26762114

  2. Visual search and autism symptoms: What young children search for and co-occurring ADHD matter.

    PubMed

    Doherty, Brianna R; Charman, Tony; Johnson, Mark H; Scerif, Gaia; Gliga, Teodora

    2018-05-03

    Superior visual search is one of the most common findings in the autism spectrum disorder (ASD) literature. Here, we ascertain how generalizable these findings are across task and participant characteristics, in light of recent replication failures. We tested 106 3-year-old children at familial risk for ASD, a sample that presents high ASD and ADHD symptoms, and 25 control participants, in three multi-target search conditions: easy exemplar search (look for cats amongst artefacts), difficult exemplar search (look for dogs amongst chairs/tables perceptually similar to dogs), and categorical search (look for animals amongst artefacts). Performance was related to dimensional measures of ASD and ADHD, in agreement with current research domain criteria (RDoC). We found that ASD symptom severity did not associate with enhanced performance in search, but did associate with poorer categorical search in particular, consistent with literature describing impairments in categorical knowledge in ASD. Furthermore, ASD and ADHD symptoms were both associated with more disorganized search paths across all conditions. Thus, ASD traits do not always convey an advantage in visual search; on the contrary, ASD traits may be associated with difficulties in search depending upon the nature of the stimuli (e.g., exemplar vs. categorical search) and the presence of co-occurring symptoms. © 2018 John Wiley & Sons Ltd.

  3. Selective maintenance in visual working memory does not require sustained visual attention.

    PubMed

    Hollingworth, Andrew; Maxcey-Richard, Ashleigh M

    2013-08-01

    In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change-detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention.

  4. Predicting Airport Screening Officers' Visual Search Competency With a Rapid Assessment.

    PubMed

    Mitroff, Stephen R; Ericson, Justin M; Sharpe, Benjamin

    2018-03-01

    Objective: The study's objective was to assess a new personnel selection and assessment tool for aviation security screeners. A mobile app was modified to create a tool, and the question was whether it could predict professional screeners' on-job performance. Background: A variety of professions (airport security, radiology, the military, etc.) rely on visual search performance, that is, being able to detect targets. Given the importance of such professions, it is necessary to maximize performance, and one means to do so is to select individuals who excel at visual search. A critical question is whether it is possible to predict search competency within a professional search environment. Method: Professional searchers from the USA Transportation Security Administration (TSA) completed a rapid assessment on a tablet-based X-ray simulator (XRAY Screener, derived from the mobile technology app Airport Scanner; Kedlin Company). The assessment contained 72 trials that were simulated X-ray images of bags. Participants searched for prohibited items and tapped on them with their finger. Results: Performance on the assessment significantly related to on-job performance measures for the TSA officers such that those who were better XRAY Screener performers were both more accurate and faster at the actual airport checkpoint. Conclusion: XRAY Screener successfully predicted on-job performance for professional aviation security officers. While questions remain about the underlying cognitive mechanisms, this quick assessment was found to significantly predict on-job success for a task that relies on visual search performance. Application: It may be possible to quickly assess an individual's visual search competency, which could help organizations select new hires and assess their current workforce.

  5. The effect of spectral filters on visual search in stroke patients.

    PubMed

    Beasley, Ian G; Davies, Leon N

    2013-01-01

    Visual search impairment can occur following stroke. The utility of optimal spectral filters for visual search in stroke patients has not been considered to date. The present study measured the effect of optimal spectral filters on visual search response time and accuracy, using a task requiring serial processing. A stroke cohort and a control cohort undertook the task three times: (i) with an optimally selected spectral filter; (ii) after two weeks during which group 1 (randomly assigned) used an optimal filter and group 2 used a grey filter; and (iii) after a further two weeks during which the groups were crossed over, with group 1 using a grey filter and group 2 an optimal filter. Initial use of an optimal spectral filter improved visual search response time but not error scores in the stroke cohort. Prolonged use of neither an optimal nor a grey filter improved response time or reduced error scores. In fact, response times increased with the filter, regardless of its type, for stroke and control subjects; this outcome may be due to contrast reduction or a reflection of task design, given that significant practice effects were noted.

  6. Implicit short- and long-term memory direct our gaze in visual search.

    PubMed

    Kruijne, Wouter; Meeter, Martijn

    2016-04-01

    Visual attention is strongly affected by the past: both by recent experience and by long-term regularities in the environment that are encoded in and retrieved from memory. In visual search, intertrial repetition of targets causes speeded response times (short-term priming). Similarly, targets that are presented more often than others may facilitate search, even long after the frequency bias is no longer present (long-term priming). In this study, we investigated whether such short-term priming and long-term priming depend on dissociable mechanisms. By recording eye movements while participants searched for one of two conjunction targets, we explored at what stages of visual search different forms of priming manifest. We found both long- and short-term priming effects. Long-term priming persisted long after the bias was no longer present, and was again found even in participants who were unaware of a color bias. Short- and long-term priming affected the same stage of the task; both biased eye movements towards targets with the primed color, starting with the first eye movement. Neither form of priming affected the response phase of a trial, but response repetition did. The results strongly suggest that both long- and short-term memory can implicitly modulate feedforward visual processing.

  7. Anatomical constraints on attention: Hemifield independence is a signature of multifocal spatial selection

    PubMed Central

    Alvarez, George A; Gill, Jonathan; Cavanagh, Patrick

    2012-01-01

    Previous studies have shown independent attentional selection of targets in the left and right visual hemifields during attentional tracking (Alvarez & Cavanagh, 2005) but not during a visual search (Luck, Hillyard, Mangun, & Gazzaniga, 1989). Here we tested whether multifocal spatial attention is the critical process that operates independently in the two hemifields. It is explicitly required in tracking (attend to a subset of object locations, suppress the others) but not in the standard visual search task (where all items are potential targets). We used a modified visual search task in which observers searched for a target within a subset of display items, where the subset was selected based on location (Experiments 1 and 3A) or based on a salient feature difference (Experiments 2 and 3B). The results show hemifield independence in this subset visual search task with location-based selection but not with feature-based selection; this effect cannot be explained by general difficulty (Experiment 4). Combined, these findings suggest that hemifield independence is a signature of multifocal spatial attention and highlight the need for cognitive and neural theories of attention to account for anatomical constraints on selection mechanisms. PMID:22637710

  8. Effects of light touch on postural sway and visual search accuracy: A test of functional integration and resource competition hypotheses.

    PubMed

    Chen, Fu-Chen; Chen, Hsin-Lin; Tu, Jui-Hung; Tsai, Chia-Liang

    2015-09-01

    People often multi-task in their daily lives. However, the mechanisms underlying the interaction between simultaneous postural and non-postural tasks have been controversial over the years. The present study investigated the effects of light digital touch on both postural sway and visual search accuracy in order to assess two hypotheses (functional integration and resource competition) that may explain the interaction between postural sway and the performance of a non-postural task. Participants (n=42, 20 male and 22 female) were asked to inspect a blank sheet of paper or visually search for target letters in a text block while a fingertip was in light contact with a stable surface (light touch, LT), or with both arms hanging at the sides of the body (no touch, NT). The results showed significant main effects of LT, reducing the magnitude of postural sway as well as enhancing visual search accuracy compared with the NT condition. The findings support the functional integration hypothesis, demonstrating that postural sway can be modulated to improve the performance of a visual search task. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  10. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    ERIC Educational Resources Information Center

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  11. Brief Report: Eye Movements during Visual Search Tasks Indicate Enhanced Stimulus Discriminability in Subjects with PDD

    ERIC Educational Resources Information Center

    Kemner, Chantal; van Ewijk, Lizet; van Engeland, Herman; Hooge, Ignace

    2008-01-01

    Subjects with PDD excel on certain visuo-spatial tasks, amongst which visual search tasks, and this has been attributed to enhanced perceptual discrimination. However, an alternative explanation is that subjects with PDD show a different, more effective search strategy. The present study aimed to test both hypotheses, by measuring eye movements…

  12. Exploring What's Missing: What Do Target Absent Trials Reveal about Autism Search Superiority?

    ERIC Educational Resources Information Center

    Keehn, Brandon; Joseph, Robert M.

    2016-01-01

    We used eye-tracking to investigate the roles of enhanced discrimination and peripheral selection in superior visual search in autism spectrum disorder (ASD). Children with ASD were faster at visual search than their typically developing peers. However, group differences in performance and eye-movements did not vary with the level of difficulty of…

  13. Long-Term Priming of Visual Search Prevails against the Passage of Time and Counteracting Instructions

    ERIC Educational Resources Information Center

    Kruijne, Wouter; Meeter, Martijn

    2016-01-01

    Studies on "intertrial priming" have shown that in visual search experiments, the preceding trial automatically affects search performance: facilitating it when the target features repeat and giving rise to switch costs when they change--so-called (short-term) intertrial priming. These effects also occur at longer time scales: When 1 of…

  14. Serial and Parallel Attentive Visual Searches: Evidence from Cumulative Distribution Functions of Response Times

    ERIC Educational Resources Information Center

    Sung, Kyongje

    2008-01-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the…

  15. Bottom-Up Guidance in Visual Search for Conjunctions

    ERIC Educational Resources Information Center

    Proulx, Michael J.

    2007-01-01

    Understanding the relative role of top-down and bottom-up guidance is crucial for models of visual search. Previous studies have addressed the role of top-down and bottom-up processes in search for a conjunction of features but with inconsistent results. Here, the author used an attentional capture method to address the role of top-down and…

  16. The Mechanisms Underlying the ASD Advantage in Visual Search.

    PubMed

    Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S; Blaser, Erik

    2016-05-01

    A number of studies have demonstrated that individuals with autism spectrum disorders (ASDs) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin and Frith in Neuron 48:497-507, 2005; Simmons et al. in Vis Res 49:2705-2739, 2009). This "ASD advantage" was first identified in the domain of visual search by Plaisted et al. (J Child Psychol Psychiatry 39:777-783, 1998). Here we survey the findings of visual search studies from the past 15 years that contrasted the performance of individuals with and without ASD. Although there are some minor caveats, the overall consensus is that-across development and a broad range of symptom severity-individuals with ASD reliably outperform controls on visual search. The etiology of the ASD advantage has not been formally specified, but has been commonly attributed to 'enhanced perceptual discrimination', a superior ability to visually discriminate between targets and distractors in such tasks (e.g. O'Riordan in Cognition 77:81-96, 2000). As well, there is considerable evidence for impairments of the attentional network in ASD (for a review, see Keehn et al. in J Child Psychol Psychiatry 37:164-183, 2013). We discuss some recent results from our laboratory that support an attentional, rather than perceptual explanation for the ASD advantage in visual search. We speculate that this new conceptualization may offer a better understanding of some of the behavioral symptoms associated with ASD, such as over-focusing and restricted interests.

  17. Casual Video Games as Training Tools for Attentional Processes in Everyday Life.

    PubMed

    Stroud, Michael J; Whitbourne, Susan Krauss

    2015-11-01

    Three experiments examined the attentional components of the popular match-3 casual video game, Bejeweled Blitz (BJB). Attentionally demanding, BJB is highly popular among adults, particularly those in middle and later adulthood. In experiment 1, 54 older adults (Mage = 70.57) and 33 younger adults (Mage = 19.82) played 20 rounds of BJB, and completed online tasks measuring reaction time, simple visual search, and conjunction visual search. Prior experience significantly predicted BJB scores for younger adults, but for older adults, both prior experience and simple visual search task scores predicted BJB performance. Experiment 2 tested whether BJB practice alone would result in a carryover benefit to a visual search task in a sample of 58 young adults (Mage = 19.57) who completed 0, 10, or 30 rounds of BJB followed by a BJB-like visual search task with targets present or absent. Reaction times were significantly faster for participants who completed 30 but not 10 rounds of BJB compared with the search task only. This benefit was evident when targets were both present and absent, suggesting that playing BJB improves not only target detection, but also the ability to quit search effectively. Experiment 3 tested whether the attentional benefit in experiment 2 would apply to non-BJB stimuli. The results revealed a similar numerical but not significant trend. Taken together, the findings suggest there are benefits of casual video game playing to attention and relevant everyday skills, and that these games may have potential value as training tools.

  18. Eye movements during information processing tasks: individual differences and cultural effects.

    PubMed

    Rayner, Keith; Li, Xingshan; Williams, Carrick C; Cave, Kyle R; Well, Arnold D

    2007-09-01

    The eye movements of native English speakers, native Chinese speakers, and bilingual Chinese/English speakers who were either born in China (and moved to the US at an early age) or in the US were recorded during six tasks: (1) reading, (2) face processing, (3) scene perception, (4) visual search, (5) counting Chinese characters in a passage of text, and (6) visual search for Chinese characters. Across the different groups, there was a strong tendency for consistency in eye movement behavior; if fixation durations of a given viewer were long on one task, they tended to be long on other tasks (and the same tended to be true for saccade size). Some tasks, notably reading, did not conform to this pattern. Furthermore, experience with a given writing system had a large impact on fixation durations and saccade lengths. With respect to cultural differences, there was little evidence that Chinese participants spent more time looking at the background information (and, conversely less time looking at the foreground information) than the American participants. Also, Chinese participants' fixations were more numerous and of shorter duration than those of their American counterparts while viewing faces and scenes, and counting Chinese characters in text.

  19. Visual Attention to Pictorial Food Stimuli in Individuals With Night Eating Syndrome: An Eye-Tracking Study.

    PubMed

    Baldofski, Sabrina; Lüthold, Patrick; Sperling, Ingmar; Hilbert, Anja

    2018-03-01

    Night eating syndrome (NES) is characterized by excessive evening and/or nocturnal eating episodes. Studies indicate an attentional bias towards food in other eating disorders. For NES, however, evidence of attentional food processing is lacking. Attention towards food and non-food stimuli was compared using eye-tracking in 19 participants with NES and 19 matched controls without eating disorders during a free exploration paradigm and a visual search task. In the free exploration paradigm, groups did not differ in initial fixation position or gaze duration. However, a significant orienting bias to food compared to non-food was found within the NES group, but not in controls. A significant attentional maintenance bias to non-food compared to food was found in both groups. Detection times did not differ between groups in the search task. Only in NES, attention to and faster detection of non-food stimuli were related to higher BMI and more evening eating episodes. The results might indicate an attentional approach-avoidance pattern towards food in NES. However, further studies should clarify the implications of attentional mechanisms for the etiology and maintenance of NES. Copyright © 2017. Published by Elsevier Ltd.

  20. The Reactome pathway Knowledgebase

    PubMed Central

    Fabregat, Antonio; Sidiropoulos, Konstantinos; Garapati, Phani; Gillespie, Marc; Hausmann, Kerstin; Haw, Robin; Jassal, Bijay; Jupe, Steven; Korninger, Florian; McKay, Sheldon; Matthews, Lisa; May, Bruce; Milacic, Marija; Rothfels, Karen; Shamovsky, Veronica; Webber, Marissa; Weiser, Joel; Williams, Mark; Wu, Guanming; Stein, Lincoln; Hermjakob, Henning; D'Eustachio, Peter

    2016-01-01

    The Reactome Knowledgebase (www.reactome.org) provides molecular details of signal transduction, transport, DNA replication, metabolism and other cellular processes as an ordered network of molecular transformations—an extended version of a classic metabolic map, in a single consistent data model. Reactome functions both as an archive of biological processes and as a tool for discovering unexpected functional relationships in data such as gene expression pattern surveys or somatic mutation catalogues from tumour cells. Over the last two years we redeveloped major components of the Reactome web interface to improve usability, responsiveness and data visualization. A new pathway diagram viewer provides a faster, clearer interface and smooth zooming from the entire reaction network to the details of individual reactions. Tool performance for analysis of user datasets has been substantially improved, now generating detailed results for genome-wide expression datasets within seconds. The analysis module can now be accessed through a RESTful interface, facilitating its inclusion in third party applications. A new overview module allows the visualization of analysis results on a genome-wide Reactome pathway hierarchy using a single screen page. The search interface now provides auto-completion as well as a faceted search to narrow result lists efficiently. PMID:26656494

  1. Enhancing long-term memory with stimulation tunes visual attention in one trial.

    PubMed

    Reinhart, Robert M G; Woodman, Geoffrey F

    2015-01-13

    Scientists have long proposed that memory representations control the mechanisms of attention that focus processing on the task-relevant objects in our visual field. Modern theories specifically propose that we rely on working memory to store the object representations that provide top-down control over attentional selection. Here, we show that the tuning of perceptual attention can be sharply accelerated after 20 min of noninvasive brain stimulation over medial-frontal cortex. Contrary to prevailing theories of attention, these improvements did not appear to be caused by changes in the nature of the working memory representations of the search targets. Instead, improvements in attentional tuning were accompanied by changes in an electrophysiological signal hypothesized to index long-term memory. We found that this pattern of effects was reliably observed when we stimulated medial-frontal cortex, but when we stimulated posterior parietal cortex, we found that stimulation directly affected the perceptual processing of the search array elements, not the memory representations providing top-down control. Our findings appear to challenge dominant theories of attention by demonstrating that changes in the storage of target representations in long-term memory may underlie rapid changes in the efficiency with which humans can find targets in arrays of objects.

  2. Image-based query-by-example for big databases of galaxy images

    NASA Astrophysics Data System (ADS)

    Shamir, Lior; Kuminski, Evan

    2017-01-01

    Very large astronomical databases containing millions or even billions of galaxy images have become increasingly important tools in astronomy research. However, in many cases their very large size makes it difficult to analyze these data manually, reinforcing the need for computer algorithms that can automate the data analysis process. An example of such a task is the identification of galaxies of a certain morphology of interest. For instance, if a rare galaxy is identified, it is reasonable to expect that more galaxies of similar morphology exist in the database, but it is virtually impossible to search these databases manually to identify such galaxies. Here we describe computer vision and pattern recognition methodology that receives a galaxy image as an input and automatically searches a large dataset of galaxies to return a list of galaxies that are visually similar to the query galaxy. The returned list is not necessarily complete or clean, but it provides a substantial reduction of the original database into a smaller dataset in which the frequency of objects visually similar to the query galaxy is much higher. Experimental results show that the algorithm can identify rare galaxies such as ring galaxies among datasets of 10,000 astronomical objects.
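
    As a rough illustration of the query-by-example idea described above, the sketch below ranks database images by distance between simple feature vectors. The histogram-plus-moments features and Euclidean ranking are stand-in assumptions; the published method uses a much richer descriptor set.

    ```python
    import numpy as np

    def image_features(img):
        """Tiny stand-in feature vector: intensity histogram plus a few simple moments."""
        hist, _ = np.histogram(img, bins=16, range=(0.0, 1.0), density=True)
        moments = [img.mean(), img.std(), np.abs(np.gradient(img)[0]).mean()]
        return np.concatenate([hist, moments])

    def query_by_example(query_img, database_imgs, k=10):
        """Return indices of the k database images closest to the query in feature space."""
        q = image_features(query_img)
        feats = np.stack([image_features(img) for img in database_imgs])
        dists = np.linalg.norm(feats - q, axis=1)
        return np.argsort(dists)[:k]

    # Usage with random stand-in "galaxy" images; the slightly perturbed copy of
    # image 42 is expected to appear near the top of the ranking.
    rng = np.random.default_rng(2)
    db = [rng.random((32, 32)) for _ in range(1000)]
    query = db[42] + 0.01 * rng.random((32, 32))
    print(query_by_example(query, db, k=5))
    ```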

  3. Mobile Visual Search Based on Histogram Matching and Zone Weight Learning

    NASA Astrophysics Data System (ADS)

    Zhu, Chuang; Tao, Li; Yang, Fan; Lu, Tao; Jia, Huizhu; Xie, Xiaodong

    2018-01-01

    In this paper, we propose a novel image retrieval algorithm for mobile visual search. First, a short visual codebook is generated from the descriptor database to represent the statistical information of the dataset. Then, an accurate local descriptor similarity score is computed by merging tf-idf weighted histogram matching with the weighting strategy in compact descriptors for visual search (CDVS). Finally, the global descriptor matching score and the local descriptor similarity score are summed to rerank the retrieval results according to the learned zone weights. The results show that the proposed approach outperforms the state-of-the-art image retrieval method in CDVS.
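
    A minimal sketch of the local-descriptor stage described above: hard assignment of descriptors to a visual codebook, a tf-idf weighted histogram similarity, and a weighted sum with a precomputed global-descriptor score. The fixed zone_weights pair and the cosine similarity stand in for the learned zone weights and the CDVS weighting strategy, which are not reproduced here.

    ```python
    import numpy as np

    def assign_to_codebook(descriptors, codebook):
        """Hard-assign each local descriptor to its nearest visual word."""
        d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
        return d.argmin(axis=1)

    def tfidf_histogram(word_ids, idf, n_words):
        """Term-frequency histogram over visual words, reweighted by idf and L2-normalized."""
        tf = np.bincount(word_ids, minlength=n_words).astype(float)
        h = tf * idf
        norm = np.linalg.norm(h)
        return h / norm if norm > 0 else h

    def retrieval_score(query_desc, db_desc, codebook, idf,
                        global_score, zone_weights=(0.5, 0.5)):
        """Fuse a precomputed global-descriptor score with a local tf-idf histogram score.

        `zone_weights` is a fixed illustrative pair standing in for learned weights, and
        `global_score` is assumed to be computed elsewhere.
        """
        n_words = codebook.shape[0]
        hq = tfidf_histogram(assign_to_codebook(query_desc, codebook), idf, n_words)
        hd = tfidf_histogram(assign_to_codebook(db_desc, codebook), idf, n_words)
        local_score = float(hq @ hd)                   # cosine similarity of weighted histograms
        w_global, w_local = zone_weights
        return w_global * global_score + w_local * local_score

    # Usage with random descriptors and a random 64-word codebook.
    rng = np.random.default_rng(3)
    codebook = rng.random((64, 32))
    idf = rng.random(64) + 0.5
    q, d = rng.random((200, 32)), rng.random((180, 32))
    print(retrieval_score(q, d, codebook, idf, global_score=0.7))
    ```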

  4. Computational Methods for Tracking, Quantitative Assessment, and Visualization of C. elegans Locomotory Behavior

    PubMed Central

    Moy, Kyle; Li, Weiyu; Tran, Huu Phuoc; Simonis, Valerie; Story, Evan; Brandon, Christopher; Furst, Jacob; Raicu, Daniela; Kim, Hongkyun

    2015-01-01

    The nematode Caenorhabditis elegans provides a unique opportunity to interrogate the neural basis of behavior at single neuron resolution. In C. elegans, neural circuits that control behaviors can be formulated based on its complete neural connection map, and easily assessed by applying advanced genetic tools that allow for modulation in the activity of specific neurons. Importantly, C. elegans exhibits several elaborate behaviors that can be empirically quantified and analyzed, thus providing a means to assess the contribution of specific neural circuits to behavioral output. Particularly, locomotory behavior can be recorded and analyzed with computational and mathematical tools. Here, we describe a robust single worm-tracking system, which is based on the open-source Python programming language, and an analysis system, which implements path-related algorithms. Our tracking system was designed to accommodate worms that explore a large area with frequent turns and reversals at high speeds. As a proof of principle, we used our tracker to record the movements of wild-type animals that were freshly removed from abundant bacterial food, and determined how wild-type animals change locomotory behavior over a long period of time. Consistent with previous findings, we observed that wild-type animals show a transition from area-restricted local search to global search over time. Intriguingly, we found that wild-type animals initially exhibit short, random movements interrupted by infrequent long trajectories. This movement pattern often coincides with local/global search behavior, and visually resembles Lévy flight search, a search behavior conserved across species. Our mathematical analysis showed that while most of the animals exhibited Brownian walks, approximately 20% of the animals exhibited Lévy flights, indicating that C. elegans can use Lévy flights for efficient food search. In summary, our tracker and analysis software will help analyze the neural basis of the alteration and transition of C. elegans locomotory behavior in a food-deprived condition. PMID:26713869
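
    To make the Lévy-versus-Brownian distinction mentioned above concrete, here is a bare-bones classifier that compares maximum-likelihood power-law and exponential fits to the step-length distribution of a track. It is a simplification (no bootstrap or model-selection safeguards) and is not the analysis pipeline released with the paper; the cutoff choice and decision rule are illustrative assumptions.

    ```python
    import numpy as np

    def step_lengths(xy):
        """Euclidean displacement between successive tracked positions (N x 2 array)."""
        return np.linalg.norm(np.diff(xy, axis=0), axis=1)

    def classify_walk(steps, x_min=None):
        """Crude Lévy-vs-Brownian check via likelihoods of power-law and exponential fits."""
        steps = steps[steps > 0]
        if x_min is None:
            x_min = np.percentile(steps, 25)           # illustrative cutoff choice
        s = steps[steps >= x_min]
        n = len(s)
        # Power law p(x) ~ x^-mu for x >= x_min (Lévy-like if 1 < mu <= 3).
        mu = 1.0 + n / np.sum(np.log(s / x_min))
        ll_power = n * np.log((mu - 1.0) / x_min) - mu * np.sum(np.log(s / x_min))
        # Shifted exponential p(x) = lam * exp(-lam * (x - x_min)) (Brownian-like steps).
        lam = 1.0 / np.mean(s - x_min)
        ll_exp = n * np.log(lam) - lam * np.sum(s - x_min)
        label = "levy-like" if (ll_power > ll_exp and 1.0 < mu <= 3.0) else "brownian-like"
        return label, mu

    # Usage with a synthetic heavy-tailed track.
    rng = np.random.default_rng(4)
    angles = rng.uniform(0, 2 * np.pi, 2000)
    lengths = rng.pareto(1.5, 2000) + 1.0              # heavy-tailed step lengths
    track = np.cumsum(np.stack([lengths * np.cos(angles),
                                lengths * np.sin(angles)], axis=1), axis=0)
    print(classify_walk(step_lengths(track)))
    ```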

  5. Effect of display size on visual attention.

    PubMed

    Chen, I-Ping; Liao, Chia-Ning; Yeh, Shih-Hao

    2011-06-01

    Attention plays an important role in the design of human-machine interfaces. However, current knowledge about attention is largely based on data obtained when using devices of moderate display size. With advancement in display technology comes the need for understanding attention behavior over a wider range of viewing sizes. The effect of display size on test participants' visual search performance was studied. The participants (N = 12) performed two types of visual search tasks, that is, parallel and serial search, under three display-size conditions (16 degrees, 32 degrees, and 60 degrees). Serial, but not parallel, search was affected by display size. In the serial task, mean reaction time for detecting a target increased with the display size.

  6. Visualization of Pulsar Search Data

    NASA Astrophysics Data System (ADS)

    Foster, R. S.; Wolszczan, A.

    1993-05-01

    The search for periodic signals from rotating neutron stars, or pulsars, has been a computationally taxing problem for astronomers for more than twenty-five years. Over this time interval, increases in computational capability have allowed ever more sensitive searches covering a larger parameter space. The volume of input data and the general presence of radio frequency interference typically produce numerous spurious signals. Visualization of the search output and enhanced real-time processing of significant candidate events allow the pulsar searcher to optimally process and search for new radio pulsars. The pulsar search algorithm and visualization system presented in this paper currently runs on serial RISC-based workstations, a traditional vector-based supercomputer, and a massively parallel computer. The serial software algorithm and its modifications for massively parallel computing are described. Four successive searches for millisecond-period radio pulsars using the Arecibo telescope at 430 MHz have resulted in the successful detection of new long-period and millisecond-period radio pulsars.
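
    The core periodicity-detection step of such a search can be illustrated with a short FFT-based sketch. Real pulsar pipelines add dedispersion, harmonic summing, acceleration searches and interference rejection, none of which are shown here; the sampling rate, band limits and toy signal are illustrative assumptions.

    ```python
    import numpy as np

    def find_period(time_series, sample_rate, min_freq=0.5, max_freq=500.0):
        """Return the strongest candidate period (s) and a crude spectral S/N estimate."""
        spectrum = np.abs(np.fft.rfft(time_series - time_series.mean())) ** 2
        freqs = np.fft.rfftfreq(len(time_series), d=1.0 / sample_rate)
        band = (freqs >= min_freq) & (freqs <= max_freq)
        peak_freq = freqs[band][np.argmax(spectrum[band])]
        snr = spectrum[band].max() / np.median(spectrum[band])
        return 1.0 / peak_freq, snr

    # Usage: a 6.25 ms pulse train buried in noise, sampled at 10 kHz for 5 seconds.
    rng = np.random.default_rng(5)
    fs, true_period = 10_000.0, 0.00625
    t = np.arange(0, 5.0, 1.0 / fs)
    pulses = (np.sin(2 * np.pi * t / true_period) > 0.99).astype(float)
    signal = pulses + rng.normal(0.0, 1.0, t.size)
    period, snr = find_period(signal, fs)
    print(f"candidate period: {period * 1e3:.3f} ms, spectral S/N ~ {snr:.1f}")
    ```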

  7. Running the figure to the ground: figure-ground segmentation during visual search.

    PubMed

    Ralph, Brandon C W; Seli, Paul; Cheng, Vivian O Y; Solman, Grayden J F; Smilek, Daniel

    2014-04-01

    We examined how figure-ground segmentation occurs across multiple regions of a visual array during a visual search task. Stimuli consisted of arrays of black-and-white figure-ground images in which roughly half of each image depicted a meaningful object, whereas the other half constituted a less meaningful shape. The colours of the meaningful regions of the targets and distractors were either the same (congruent) or different (incongruent). We found that incongruent targets took longer to locate than congruent targets (Experiments 1, 2, and 3) and that this segmentation-congruency effect decreased when the number of search items was reduced (Experiment 2). Furthermore, an analysis of eye movements revealed that participants spent more time scrutinising the target before confirming its identity on incongruent trials than on congruent trials (Experiment 3). These findings suggest that the distractor context influences target segmentation and detection during visual search. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Location cue validity affects inhibition of return of visual processing.

    PubMed

    Wright, R D; Richard, C M

    2000-01-01

    Inhibition-of-return is the process by which visual search for an object positioned among others is biased toward novel rather than previously inspected items. It is thought to occur automatically and to increase search efficiency. We examined this phenomenon by studying the facilitative and inhibitory effects of location cueing on target-detection response times in a search task. The results indicated that facilitation was a reflexive consequence of cueing whereas inhibition appeared to depend on cue informativeness. More specifically, the inhibition-of-return effect occurred only when the cue provided no information about the impending target's location. We suggest that the results are consistent with the notion of two levels of visual processing. The first involves rapid and reflexive operations that underlie the facilitative effects of location cueing on target detection. The second involves a rapid but goal-driven inhibition procedure that the perceiver can invoke if doing so will enhance visual search performance.

  9. FISH Oracle 2: a web server for integrative visualization of genomic data in cancer research

    PubMed Central

    2014-01-01

    Background: A comprehensive view of all relevant genomic data is instrumental for understanding the complex patterns of molecular alterations typically found in cancer cells. One of the most effective ways to rapidly obtain an overview of genomic alterations in large amounts of genomic data is the integrative visualization of genomic events. Results: We developed FISH Oracle 2, a web server for the interactive visualization of different kinds of downstream-processed genomics data typically available in cancer research. A powerful search interface and a fast visualization engine provide a highly interactive visualization for such data. High-quality image export enables life scientists to easily communicate their results. Comprehensive data administration allows users to keep track of the available data sets. We applied FISH Oracle 2 to published data and found evidence that, in colorectal cancer cells, the gene TTC28 may be inactivated in two different ways, a fact that had not been published before. Conclusions: The interactive nature of FISH Oracle 2 and the possibility to store, select and visualize large amounts of downstream-processed data support life scientists in generating hypotheses. The export of high-quality images supports explanatory data visualization, simplifying the communication of new biological findings. A FISH Oracle 2 demo server and the software are available at http://www.zbh.uni-hamburg.de/fishoracle. PMID:24684958

  10. Geovisualization to support the exploration of large health and demographic survey data

    PubMed Central

    Koua, Etien L; Kraak, Menno-Jan

    2004-01-01

    Background: Survey data are increasingly abundant from many international projects and national statistics. They are generally comprehensive and cover censuses at local, regional and national levels in many domains, including health, demography, human development, and the economy. These surveys result in several hundred indicators. Geographical analysis of such a large amount of data is often a difficult task, and searching for patterns is a particular challenge. Geovisualization research is increasingly dealing with the exploration of patterns and relationships in such large datasets in order to understand underlying geographical processes. One approach has been to use Artificial Neural Networks, a technology especially useful in situations where the numbers are vast and the relationships are often unclear or even hidden. Results: We investigate ways to integrate computational analysis based on a Self-Organizing Map neural network with visual representations of derived structures and patterns in a framework for exploratory visualization to support visual data mining and knowledge discovery. The framework suggests ways to explore the general structure of the dataset in its multidimensional space in order to provide clues for further exploration of correlations and relationships. Conclusion: In this paper, the proposed framework is used to explore demographic and health survey data. Several graphical representations (information spaces) are used to depict the general structure and clustering of the data and to gain insight into the relationships among the different variables. Detailed exploration of correlations and relationships among the attributes is provided. Results of the analysis are also presented in maps and other graphics. PMID:15180898
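
    To illustrate the Self-Organizing Map component described above, a small from-scratch SOM sketch follows. The grid size, decay schedules and toy data are arbitrary assumptions, and this is not the software used in the study; it only shows how survey records could be mapped onto a 2-D grid for visual exploration.

    ```python
    import numpy as np

    def train_som(data, grid_w=8, grid_h=8, n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
        """Train a tiny SOM (rows = survey areas, cols = indicators), one record per step."""
        rng = np.random.default_rng(seed)
        n, d = data.shape
        weights = rng.random((grid_h, grid_w, d))
        gy, gx = np.mgrid[0:grid_h, 0:grid_w]
        for t in range(n_iter):
            frac = t / n_iter
            lr = lr0 * (1.0 - frac)                    # linearly decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 0.5        # shrinking neighbourhood radius
            x = data[rng.integers(n)]
            dists = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(dists.argmin(), dists.shape)   # best-matching unit
            grid_d2 = (gy - by) ** 2 + (gx - bx) ** 2
            h = np.exp(-grid_d2 / (2.0 * sigma ** 2))[..., None]     # Gaussian neighbourhood
            weights += lr * h * (x - weights)
        return weights

    def map_records(data, weights):
        """Assign each record to its best-matching map unit (for cluster-style visualization)."""
        flat = weights.reshape(-1, weights.shape[-1])
        return np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=2).argmin(axis=1)

    # Usage: 300 fake survey records with 10 standardized indicators.
    rng = np.random.default_rng(6)
    survey = rng.normal(size=(300, 10))
    som = train_som(survey)
    print(np.bincount(map_records(survey, som), minlength=64))       # records per map unit
    ```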

  11. Toward the influence of temporal attention on the selection of targets in a visual search task: An ERP study.

    PubMed

    Rolke, Bettina; Festl, Freya; Seibold, Verena C

    2016-11-01

    We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogeneous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.

  12. Attentional asymmetry between visual hemifields is related to habitual direction of reading and its implications for debate on cause and effects of dyslexia.

    PubMed

    Kermani, Mojtaba; Verghese, Ashika; Vidyasagar, Trichur R

    2018-02-01

    A major controversy regarding dyslexia is whether any of the many visual and phonological deficits found to be correlated with reading difficulty cause the impairment or result from the reduced amount of reading done by dyslexics. We studied this question by comparing a visual capacity in the left and right visual hemifields in people habitually reading scripts written right-to-left or left-to-right. Selective visual attention is necessary for efficient visual search and also for the sequential recognition of letters in words. Because such attentional allocation during reading depends on the direction in which one is reading, asymmetries in search efficiency may reflect biases arising from the habitual direction of reading. We studied this by examining search performance in three cohorts: (a) left-to-right readers who read English fluently; (b) right-to-left readers fluent in reading Farsi but not any left-to-right script; and (c) bilingual readers fluent in English and in Farsi, Arabic, or Hebrew. Left-to-right readers showed better search performance in the right hemifield and right-to-left readers in the left hemifield, but bilingual readers showed no such asymmetries. Thus, reading experience biases search performance in the direction of reading, which has implications for the cause and effect relationships between reading and cognitive functions. Copyright © 2017 John Wiley & Sons, Ltd.

  13. A computational model of visual marking using an inter-connected network of spiking neurons: the spiking search over time & space model (sSoTS).

    PubMed

    Mavritsaki, Eirini; Heinke, Dietmar; Humphreys, Glyn W; Deco, Gustavo

    2006-01-01

    In the real world, visual information is selected over time as well as space, as we prioritise new stimuli for attention. Watson and Humphreys [Watson, D., Humphreys, G.W., 1997. Visual marking: prioritizing selection for new objects by top-down attentional inhibition of old objects. Psychological Review 104, 90-122] presented evidence that new information in search tasks is prioritised by (amongst other processes) active ignoring of old items, a process they termed visual marking. In this paper we present, for the first time, an explicit computational model of visual marking using biologically plausible activation functions. The "spiking search over time and space" model (sSoTS) incorporates different synaptic components (NMDA, AMPA, GABA) and a frequency adaptation mechanism based on a [Ca2+]-sensitive K+ current. This frequency adaptation current can act as a mechanism that suppresses previously attended items. We show that, when coupled with a process of active inhibition applied to old items, frequency adaptation leads to old items being de-prioritised (and new items prioritised) across time in search. Furthermore, the time course of these processes mimics the time course of the preview effect in human search. The results indicate that the sSoTS model can provide a biologically plausible account of human search over time as well as space.
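
    The frequency-adaptation mechanism described above can be sketched with a leaky integrate-and-fire neuron whose spikes increment a calcium variable that gates a K+ current. All constants below are illustrative assumptions; this is not the sSoTS model, which uses conductance-based NMDA/AMPA/GABA synapses within an interconnected network.

    ```python
    import numpy as np

    def lif_with_adaptation(i_input, dt=0.1, t_total=500.0,
                            tau_m=20.0, v_rest=-70.0, v_thresh=-50.0, v_reset=-65.0,
                            e_k=-80.0, tau_ca=80.0, d_ca=0.2, g_ahp=0.5):
        """Leaky integrate-and-fire neuron with a [Ca2+]-gated K+ after-hyperpolarization current.

        Each spike increments the calcium variable; the resulting K+ current slows
        subsequent firing, producing spike-frequency adaptation.
        """
        n_steps = int(t_total / dt)
        v, ca = v_rest, 0.0
        spikes, v_trace = [], np.empty(n_steps)
        for step in range(n_steps):
            i_ahp = g_ahp * ca * (e_k - v)             # adaptation current grows with [Ca2+]
            v += dt * (-(v - v_rest) + i_input + i_ahp) / tau_m
            ca += dt * (-ca / tau_ca)                  # calcium decays between spikes
            if v >= v_thresh:
                spikes.append(step * dt)
                v = v_reset
                ca += d_ca                             # spike-triggered calcium influx
            v_trace[step] = v
        return np.array(spikes), v_trace

    # Usage: constant drive; inter-spike intervals lengthen as the adaptation current builds.
    spike_times, _ = lif_with_adaptation(i_input=30.0)
    print(np.round(np.diff(spike_times)[:8], 1))
    ```

    With constant input, the printed inter-spike intervals grow over the first several spikes, which is the kind of firing-rate suppression of previously attended (old) items that the abstract attributes to frequency adaptation.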

  14. Eye Movements, Visual Search and Scene Memory, in an Immersive Virtual Environment

    PubMed Central

    Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary

    2014-01-01

    Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, however, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency. PMID:24759905

  15. Memory for found targets interferes with subsequent performance in multiple-target visual search.

    PubMed

    Cain, Matthew S; Mitroff, Stephen R

    2013-10-01

    Multiple-target visual searches--when more than 1 target can appear in a given search display--are commonplace in radiology, airport security screening, and the military. Whereas 1 target is often found accurately, additional targets are more likely to be missed in multiple-target searches. To better understand this decrement in 2nd-target detection, here we examined 2 potential forms of interference that can arise from finding a 1st target: interference from the perceptual salience of the 1st target (a now highly relevant distractor in a known location) and interference from a newly created memory representation for the 1st target. Here, we found that removing found targets from the display or making them salient and easily segregated color singletons improved subsequent search accuracy. However, replacing found targets with random distractor items did not improve subsequent search accuracy. Removing and highlighting found targets likely reduced both a target's visual salience and its memory load, whereas replacing a target removed its visual salience but not its representation in memory. Collectively, the current experiments suggest that the working memory load of a found target has a larger effect on subsequent search accuracy than does its perceptual salience. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  16. Working memory load predicts visual search efficiency: Evidence from a novel pupillary response paradigm.

    PubMed

    Attar, Nada; Schneps, Matthew H; Pomplun, Marc

    2016-10-01

    An observer's pupil dilates and constricts in response to variables such as ambient and focal luminance, cognitive effort, the emotional stimulus content, and working memory load. The pupil's memory load response is of particular interest, as it might be used for estimating observers' memory load while they are performing a complex task, without adding an interruptive and confounding memory test to the protocol. One important task in which working memory's involvement is still being debated is visual search, and indeed a previous experiment by Porter, Troscianko, and Gilchrist (Quarterly Journal of Experimental Psychology, 60, 211-229, 2007) analyzed observers' pupil sizes during search to study this issue. These authors found that pupil size increased over the course of the search, and they attributed this finding to accumulating working memory load. However, since the pupil response is slow and does not depend on memory load alone, this conclusion is rather speculative. In the present study, we estimated working memory load in visual search during the presentation of intermittent fixation screens, thought to induce a low, stable level of arousal and cognitive effort. Using standard visual search and control tasks, we showed that this paradigm reduces the influence of non-memory-related factors on pupil size. Furthermore, we found an early increase in working memory load to be associated with more efficient search, indicating a significant role of working memory in the search process.

  17. How visual working memory contents influence priming of visual attention.

    PubMed

    Carlisle, Nancy B; Kristjánsson, Árni

    2017-04-12

    Recent evidence shows that when the contents of visual working memory overlap with targets and distractors in a pop-out search task, intertrial priming is inhibited (Kristjánsson, Sævarsson & Driver, Psychon Bull Rev 20(3):514-521, 2013, Experiment 2, Psychonomic Bulletin and Review). This may reflect an interesting interaction between implicit short-term memory-thought to underlie intertrial priming-and explicit visual working memory. Evidence from a non-pop-out search task suggests that it may specifically be holding distractors in visual working memory that disrupts intertrial priming (Cunningham & Egeth, Psychol Sci 27(4):476-485, 2016, Experiment 2, Psychological Science). We examined whether the inhibition of priming depends on whether feature values in visual working memory overlap with targets or distractors in the pop-out search, and we found that the inhibition of priming resulted from holding distractors in visual working memory. These results are consistent with separate mechanisms of target and distractor effects in intertrial priming, and support the notion that the impact of implicit short-term memory and explicit visual working memory can interact when each provides conflicting attentional signals.

  18. Visual search by chimpanzees (Pan): assessment of controlling relations.

    PubMed Central

    Tomonaga, M

    1995-01-01

    Three experimentally sophisticated chimpanzees (Pan), Akira, Chloe, and Ai, were trained on visual search performance using a modified multiple-alternative matching-to-sample task in which a sample stimulus was followed by the search display containing one target identical to the sample and several uniform distractors (i.e., negative comparison stimuli were identical to each other). After they acquired this task, they were tested for transfer of visual search performance to trials in which the sample was not followed by the uniform search display (odd-item search). Akira showed positive transfer of visual search performance to odd-item search even when the display size (the number of stimulus items in the search display) was small, whereas Chloe and Ai showed a transfer only when the display size was large. Chloe and Ai used some nonrelational cues such as perceptual isolation of the target among uniform distractors (so-called pop-out). In addition to the odd-item search test, various types of probe trials were presented to clarify the controlling relations in multiple-alternative matching to sample. Akira showed a decrement of accuracy as a function of the display size when the search display was nonuniform (i.e., each "distractor" stimulus was not the same), whereas Chloe and Ai showed perfect performance. Furthermore, when the sample was identical to the uniform distractors in the search display, Chloe and Ai never selected an odd-item target, but Akira selected it when the display size was large. These results indicated that Akira's behavior was controlled mainly by relational cues of target-distractor oddity, whereas an identity relation between the sample and the target strongly controlled the performance of Chloe and Ai. PMID:7714449

  19. The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search

    ERIC Educational Resources Information Center

    Becker, Stefanie I.

    2010-01-01

    Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…

  20. The Visual Hemifield Asymmetry in the Spatial Blink during Singleton Search and Feature Search

    ERIC Educational Resources Information Center

    Burnham, Bryan R.; Rozell, Cassandra A.; Kasper, Alex; Bianco, Nicole E.; Delliturri, Antony

    2011-01-01

    The present study examined a visual field asymmetry in the contingent capture of attention that was previously observed by Du and Abrams (2010). In our first experiment, color singleton distractors that matched the color of a to-be-detected target produced a stronger capture of attention when they appeared in the left visual hemifield than in the…

  1. Hand Movement Deviations in a Visual Search Task with Cross Modal Cuing

    ERIC Educational Resources Information Center

    Aslan, Asli; Aslan, Hurol

    2007-01-01

    The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants'…

  2. Cortical Dynamics of Contextually Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    ERIC Educational Resources Information Center

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-01-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient…

  3. Flexible Feature-Based Inhibition in Visual Search Mediates Magnified Impairments of Selection: Evidence from Carry-Over Effects under Dynamic Preview-Search Conditions

    ERIC Educational Resources Information Center

    Andrews, Lucy S.; Watson, Derrick G.; Humphreys, Glyn W.; Braithwaite, Jason J.

    2011-01-01

    Evidence for inhibitory processes in visual search comes from studies using preview conditions, where responses to new targets are delayed if they carry a featural attribute belonging to the old distractor items that are currently being ignored--the negative carry-over effect (Braithwaite, Humphreys, & Hodsoll, 2003). We examined whether…

  4. Visual Search Performance in the Autism Spectrum II: The Radial Frequency Search Task with Additional Segmentation Cues

    ERIC Educational Resources Information Center

    Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.

    2010-01-01

    The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial…

  5. Electrophysiological evidence for parallel and serial processing during visual search.

    PubMed

    Luck, S J; Hillyard, S A

    1990-12-01

    Event-related potentials were recorded from young adults during a visual search task in order to evaluate parallel and serial models of visual processing in the context of Treisman's feature integration theory. Parallel and serial search strategies were produced by the use of feature-present and feature-absent targets, respectively. In the feature-absent condition, the slopes of the functions relating reaction time and latency of the P3 component to set size were essentially identical, indicating that the longer reaction times observed for larger set sizes can be accounted for solely by changes in stimulus identification and classification time, rather than changes in post-perceptual processing stages. In addition, the amplitude of the P3 wave on target-present trials in this condition increased with set size and was greater when the preceding trial contained a target, whereas P3 activity was minimal on target-absent trials. These effects are consistent with the serial self-terminating search model and appear to contradict parallel processing accounts of attention-demanding visual search performance, at least for a subset of search paradigms. Differences in ERP scalp distributions further suggested that different physiological processes are utilized for the detection of feature presence and absence.

  6. Visual attention in a complex search task differs between honeybees and bumblebees.

    PubMed

    Morawetz, Linde; Spaethe, Johannes

    2012-07-15

    Mechanisms of spatial attention are used when the amount of gathered information exceeds processing capacity. Such mechanisms have been proposed in bees, but have not yet been experimentally demonstrated. We provide evidence that selective attention influences the foraging performance of two social bee species, the honeybee Apis mellifera and the bumblebee Bombus terrestris. Visual search tasks, originally developed for application in human psychology, were adapted for behavioural experiments on bees. We examined the impact of distracting visual information on search performance, which we measured as error rate and decision time. We found that bumblebees were significantly less affected by distracting objects than honeybees. Based on the results, we conclude that the search mechanism in honeybees is serial like, whereas in bumblebees it shows the characteristics of a restricted parallel-like search. Furthermore, the bees differed in their strategy to solve the speed-accuracy trade-off. Whereas bumblebees displayed slow but correct decision-making, honeybees exhibited fast and inaccurate decision-making. We propose two neuronal mechanisms of visual information processing that account for the different responses between honeybees and bumblebees, and we correlate species-specific features of the search behaviour to differences in habitat and life history.

  7. Enhancing visual search abilities of people with intellectual disabilities.

    PubMed

    Li-Tsang, Cecilia W P; Wong, Jackson K K

    2009-01-01

    This study aimed to evaluate the effects of cueing in a visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using a convenience sampling method. A series of experiments was conducted to compare guided cue strategies using either motion contrast or an additional cue added to a basic search task. Repeated-measures ANOVA and post hoc multiple comparison tests were used to compare each cue strategy. Results showed that the use of guided strategies was able to capture focal attention in an automatic manner in the ID group (Pillai's Trace=5.99, p<0.0001). Both guided cue and guided motion search tasks demonstrated functionally similar effects, confirming the non-specific character of salience. These findings suggested that the visual search efficiency of people with ID was greatly improved if the target was made salient using a cueing effect as the complexity of the display increased (i.e. set size increased). This study could have important implications for the design of the visual search format of computerized programs developed to help people with ID learn new tasks.

  8. Not just a light fingertip touch: A facilitation of functional integration between body sway and visual search in older adults.

    PubMed

    Chen, Fu-Chen; Chu, Chia-Hua; Pan, Chien-Yu; Tsai, Chia-Liang

    2018-05-01

    Prior studies demonstrated that, compared to no fingertip touch (NT), a reduction in body sway resulting from the effects of light fingertip touch (LT) facilitates the performance of visual search, buttressing the concept of functional integration. However, previous findings may be confounded by the different arm postures required in the NT and LT conditions. Furthermore, in older adults, how LT influences the interactions between body sway and visual search has not been established. (1) Are LT effects valid after excluding the influences of different upper limb configurations? (2) Is functional integration feasible for older adults? Twenty-two young adults (age = 21.3 ± 2.0 years) and 22 older adults (age = 71.8 ± 4.1 years) were recruited. Participants performed visual inspection and visual searches under NT and LT conditions. The older group showed significantly reduced anterior-posterior (AP) sway (p < 0.05) in the LT compared with the NT condition, and these LT effects on postural adaptation were more pronounced in older than in young adults (p < 0.05). In addition, the older group showed significantly improved search accuracy (p < 0.05) in the LT compared with the NT condition, and these effects were equivalent between groups. After controlling for postural configurations, the results demonstrate that light fingertip touch reduces body sway and concurrently enhances visual search performance in older adults. These findings confirmed the effects of LT on postural adaptation as well as supported functional integration in older adults. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Overt attention in contextual cuing of visual search is driven by the attentional set, but not by the predictiveness of distractors.

    PubMed

    Beesley, Tom; Hanafi, Gunadi; Vadillo, Miguel A; Shanks, David R; Livesey, Evan J

    2018-05-01

    Two experiments examined biases in selective attention during contextual cuing of visual search. When participants were instructed to search for a target of a particular color, overt attention (as measured by the location of fixations) was biased strongly toward distractors presented in that same color. However, when participants searched for targets that could be presented in 1 of 2 possible colors, overt attention was not biased between the different distractors, regardless of whether these distractors predicted the location of the target (repeating) or did not (randomly arranged). These data suggest that selective attention in visual search is guided only by the demands of the target detection task (the attentional set) and not by the predictive validity of the distractor elements. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. Graphical Representations of Electronic Search Patterns.

    ERIC Educational Resources Information Center

    Lin, Xia; And Others

    1991-01-01

    Discussion of search behavior in electronic environments focuses on the development of GRIP (Graphic Representor of Interaction Patterns), a graphing tool based on HyperCard that produces graphic representations of search patterns. Search state spaces are explained, and forms of data available from electronic searches are described. (34…

  11. DMT-TAFM: a data mining tool for technical analysis of futures market

    NASA Astrophysics Data System (ADS)

    Stepanov, Vladimir; Sathaye, Archana

    2002-03-01

    Technical analysis of financial markets describes many patterns of market behavior. For practical use, all these descriptions need to be adjusted for each particular trading session. In this paper, we develop a data mining tool for technical analysis of the futures markets (DMT-TAFM), which dynamically generates rules based on the notion of price pattern similarity. The tool consists of three main components. The first component provides visualization of data series on a chart with different ranges, scales, and chart sizes and types. The second component constructs pattern descriptions using sets of polynomials. The third component specifies the training set for mining, defines the similarity notion, and searches for a set of similar patterns. DMT-TAFM is useful for preparing the data and then revealing and systematizing statistical information about similar patterns found in any type of historical price series. We performed experiments with our tool on three decades of trading data for a hundred types of futures. Our results for this data set show that we can confirm or refute many well-known patterns based on real data, reveal new ones, and use the set of relatively consistent patterns found during data mining to develop better futures trading strategies.
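
    A rough sketch (not the authors' implementation) of the second and third components described above: each price window is summarized by the coefficients of a low-order polynomial fitted to normalized prices, and similarity between patterns is taken as the distance between coefficient vectors. The window length, polynomial degree, and function names are assumptions made for illustration.

        import numpy as np

        def pattern_features(prices, degree=3):
            """Describe a price window by polynomial coefficients fitted to
            z-scored prices over normalized time."""
            t = np.linspace(0.0, 1.0, len(prices))
            z = (prices - prices.mean()) / (prices.std() + 1e-9)
            return np.polyfit(t, z, degree)

        def find_similar(history, query, window=30, degree=3, top_k=5):
            """Slide a window over the history and rank windows by the distance
            between their polynomial descriptions and the query's."""
            q = pattern_features(np.asarray(query, float), degree)
            scores = []
            for start in range(len(history) - window + 1):
                w = np.asarray(history[start:start + window], float)
                scores.append((np.linalg.norm(pattern_features(w, degree) - q), start))
            return sorted(scores)[:top_k]

        rng = np.random.default_rng(0)
        series = np.cumsum(rng.normal(size=1000))        # synthetic price series
        print(find_similar(series, series[500:530]))     # best match starts at index 500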

  12. Influence of visual clutter on the effect of navigated safety inspection: a case study on elevator installation.

    PubMed

    Liao, Pin-Chao; Sun, Xinlu; Liu, Mei; Shih, Yu-Nien

    2018-01-11

    Navigated safety inspection based on task-specific checklists can increase the hazard detection rate, theoretically with interference from scene complexity. Visual clutter, a proxy for scene complexity, can theoretically impair visual search performance, but its impact on safety inspection performance remains to be explored for the optimization of navigated inspection. This research aims to explore whether the relationship between working memory and hazard detection rate is moderated by visual clutter. Based on a perceptual model of hazard detection, we: (a) developed a mathematical influence model for construction hazard detection; (b) designed an experiment to observe hazard detection rates with adjusted working memory under different levels of visual clutter, while using an eye-tracking device to record participants' visual search processes; and (c) utilized logistic regression to analyze the developed model under various levels of visual clutter. The effect of a strengthened working memory on the detection rate, via increased search efficiency, is more apparent under high visual clutter. This study confirms the role of visual clutter in construction-navigated inspections, thus serving as a foundation for the optimization of inspection planning.
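
    The moderation analysis described above can be sketched as a logistic regression of hazard detection (hit vs. miss) on working memory, visual clutter, and their interaction. The variable names, the simulated data, and the use of statsmodels are illustrative assumptions, not the authors' actual model or data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 400
        df = pd.DataFrame({
            "working_memory": rng.normal(size=n),         # e.g., checklist-strengthened memory score
            "visual_clutter": rng.choice([0.0, 1.0], n),  # low vs. high clutter scene
        })
        # Simulated outcome: memory helps detection more when clutter is high (interaction).
        logit = -0.2 + 0.5 * df.working_memory + 0.8 * df.working_memory * df.visual_clutter
        df["detected"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        model = smf.logit("detected ~ working_memory * visual_clutter", data=df).fit(disp=0)
        print(model.summary().tables[1])  # the interaction term tests the moderation claim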

  13. Cognitive search model and a new query paradigm

    NASA Astrophysics Data System (ADS)

    Xu, Zhonghui

    2001-06-01

    This paper proposes a cognitive model in which people begin to search for pictures by using semantic content and find the right picture by judging whether its visual content properly visualizes the desired semantics. Essentially, human search is not just a matching computation over visual features but rather a visualization of known semantic content. For people to search electronic images the way they do manually, as in the model, we suggest that querying be a semantics-driven process akin to design. A query-by-design paradigm is proposed, in the sense that what you design is what you find. Unlike query-by-example, query-by-design allows users to specify the semantic content through an iterative and incremental interaction process, so that retrieval can start with association and identification of the given semantic content and be refined as further visual cues become available. An experimental image retrieval system, Kuafu, is under development using the query-by-design paradigm and an iconic language.

  14. A comparison of visual search strategies of elite and non-elite tennis players through cluster analysis.

    PubMed

    Murray, Nicholas P; Hunfalvay, Melissa

    2017-02-01

    Considerable research has documented that successful performance in interceptive tasks (such as the return of serve in tennis) is based on the performer's capability to capture appropriate anticipatory information prior to the flight path of the approaching object. Athletes of higher skill tend to fixate on different locations in the playing environment prior to initiation of a skill than their lesser skilled counterparts. The purpose of this study was to examine the visual search strategies of elite (world-ranked) tennis players and non-ranked competitive tennis players (n = 43) utilising cluster analysis. The results of hierarchical (Ward's method) and nonhierarchical (k-means) cluster analyses revealed three different clusters. The clustering method distinguished the visual behaviour of high-, middle- and low-ranked players. Specifically, high-ranked players demonstrated longer mean fixation duration and lower variation of visual search than middle- and low-ranked players. In conclusion, the results demonstrated that cluster analysis is a useful tool for detecting and analysing areas of interest in experimental analyses of expertise and for distinguishing visual search variables among participants.
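
    The two-step clustering described above can be sketched as follows, using hypothetical per-player gaze summaries (the feature names and values are assumptions): Ward's hierarchical clustering suggests a three-group solution, and nonhierarchical k-means then assigns players to those groups.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        # Hypothetical per-player features: mean fixation duration (ms),
        # fixation-location entropy, number of areas of interest visited.
        rng = np.random.default_rng(2)
        features = np.vstack([
            rng.normal([420, 1.2, 3.0], [30, 0.2, 0.5], size=(15, 3)),
            rng.normal([300, 2.0, 5.0], [30, 0.2, 0.5], size=(28, 3)),
        ])
        X = StandardScaler().fit_transform(features)

        # Step 1: Ward's method; cut the dendrogram into three clusters.
        ward_labels = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")

        # Step 2: nonhierarchical k-means with the same number of clusters.
        kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

        print(np.bincount(ward_labels)[1:], np.bincount(kmeans_labels))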

  15. Eye movements during visual search in patients with glaucoma

    PubMed Central

    2012-01-01

    Background Glaucoma has been shown to lead to disability in many daily tasks including visual search. This study aims to determine whether the saccadic eye movements of people with glaucoma differ from those of people with normal vision, and to investigate the association between eye movements and impaired visual search. Methods Forty patients (mean age: 67 [SD: 9] years) with a range of glaucomatous visual field (VF) defects in both eyes (mean best eye mean deviation [MD]: –5.9 (SD: 5.4) dB) and 40 age-similar people with normal vision (mean age: 66 [SD: 10] years) were timed as they searched for a series of target objects in computer-displayed photographs of real-world scenes. Eye movements were simultaneously recorded using an eye tracker. Average number of saccades per second, average saccade amplitude and average search duration across trials were recorded. These response variables were compared with measurements of VF and contrast sensitivity. Results The average rate of saccades made by the patient group was significantly lower than that of controls during the visual search task (P = 0.02; mean reduction of 5.6%; 95% CI: 0.1 to 10.4%). There was no difference in average saccade amplitude between the patients and the controls (P = 0.09). Average number of saccades was weakly correlated with aspects of visual function, with patients with worse contrast sensitivity (PR logCS; Spearman’s rho: 0.42; P = 0.006) and more severe VF defects (best eye MD; Spearman’s rho: 0.34; P = 0.037) tending to make fewer eye movements during the task. Average detection time in the search task was associated with the average rate of saccades in the patient group (Spearman’s rho = −0.65; P < 0.001) but this was not apparent in the controls. Conclusions The average rate of saccades made during visual search by this group of patients was lower than that of people with normal vision of a similar average age. There was wide variability in saccade rate in the patients but there was an association between an increase in this measure and better performance in the search task. Assessment of eye movements in individuals with glaucoma might provide insight into the functional deficits of the disease. PMID:22937814

  16. Deep first formal concept search.

    PubMed

    Zhang, Tao; Li, Hui; Hong, Wenxue; Yuan, Xiamei; Wei, Xinyu

    2014-01-01

    The calculation of formal concepts is a central part of formal concept analysis (FCA); however, computing all formal concepts is the main challenge within the FCA framework because of its exponential complexity and the difficulty of visualizing the calculation process. Building on the basic idea of depth-first search, this paper presents a visualization algorithm based on the attribute topology of a formal context. Constrained by the calculation rules, all concepts are obtained by a visual global search over the topology, degenerated with fixed start and end points, without repetition or omission. This method makes the calculation of formal concepts precise and easy to carry out while preserving the completeness of the algorithm, which makes it suitable for visualization analysis.
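
    The following is not the paper's attribute-topology algorithm, but a brute-force sketch of what "computing all formal concepts" means for a small, made-up context: every pair (extent, intent) that is closed under the two derivation operators.

        from itertools import combinations

        # Toy formal context: object -> attributes it has (made up for illustration).
        context = {"g1": {"a", "b"}, "g2": {"a", "c"}, "g3": {"a", "b", "c"}}
        objects = sorted(context)
        attributes = set().union(*context.values())

        def intent(objs):
            """Attributes shared by every object in objs (all attributes if objs is empty)."""
            sets = [context[g] for g in objs]
            return set(attributes) if not sets else set.intersection(*sets)

        def extent(attrs):
            """Objects possessing every attribute in attrs."""
            return {g for g in objects if attrs <= context[g]}

        concepts = set()
        for r in range(len(objects) + 1):
            for objs in combinations(objects, r):
                b = intent(objs)          # derive the intent of the object set
                a = extent(b)             # close it back to an extent
                concepts.add((frozenset(a), frozenset(b)))

        for a, b in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
            print(sorted(a), sorted(b))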

  17. Attention mechanisms in visual search -- an fMRI study.

    PubMed

    Leonards, U; Sunaert, S; Van Hecke, P; Orban, G A

    2000-01-01

    The human visual system is usually confronted with many different objects at a time, with only some of them reaching consciousness. Reaction-time studies have revealed two different strategies by which objects are selected for further processing: an automatic, efficient search process, and a conscious, so-called inefficient search [Treisman, A. (1991). Search, similarity, and integration of features between and within dimensions. Journal of Experimental Psychology: Human Perception and Performance, 17, 652--676; Treisman, A., & Gelade, G. (1980). A feature integration theory of attention. Cognitive Psychology, 12, 97--136; Wolfe, J. M. (1996). Visual search. In H. Pashler (Ed.), Attention. London: University College London Press]. Two different theories have been proposed to account for these search processes. Parallel theories presume that both types of search are treated by a single mechanism that is modulated by attentional and computational demands. Serial theories, in contrast, propose that parallel processing may underlie efficient search, but inefficient searching requires an additional serial mechanism, an attentional "spotlight" (Treisman, A., 1991) that successively shifts attention to different locations in the visual field. Using functional magnetic resonance imaging (fMRI), we show that the cerebral networks involved in efficient and inefficient search overlap almost completely. Only the superior frontal region, known to be involved in working memory [Courtney, S. M., Petit, L., Maisog, J. M., Ungerleider, L. G., & Haxby, J. V. (1998). An area specialized for spatial working memory in human frontal cortex. Science, 279, 1347--1351], and distinct from the frontal eye fields, that control spatial shifts of attention, was specifically involved in inefficient search. Activity modulations correlated with subjects' behavior best in the extrastriate cortical areas, where the amount of activity depended on the number of distracting elements in the display. Such a correlation was not observed in the parietal and frontal regions, usually assumed as being involved in spatial attention processing. These results can be interpreted in two ways: the most likely is that visual search does not require serial processing, otherwise we must assume the existence of a serial searchlight that operates in the extrastriate cortex but differs from the visuospatial shifts of attention involving the parietal and frontal regions.

  18. The Efficiency of a Visual Skills Training Program on Visual Search Performance

    PubMed Central

    Krzepota, Justyna; Zwierko, Teresa; Puchalska-Niedbał, Lidia; Markiewicz, Mikołaj; Florkiewicz, Beata; Lubiński, Wojciech

    2015-01-01

    In this study, we conducted an experiment analyzing the possibility of developing visual skills through specifically targeted training of visual search. The aim of our study was to investigate whether, for how long and to what extent a training program for visual functions could improve visual search. The study involved 24 healthy students from Szczecin University who were divided into two groups: experimental (12) and control (12). In addition to the regular sports and recreational activities of the curriculum, the subjects of the experimental group also participated in an 8-week visual function training program, 3 times a week for 45 min. The Signal Test of the Vienna Test System was performed four times: before entering the study, after the first 4 weeks of the experiment, immediately after its completion, and 4 weeks after the study terminated. The results of this experiment showed that the 8-week perceptual training program significantly affected the time course of visual detection time. For changes in visual detection time, the first factor, Group, was significant as a main effect (F(1,22)=6.49, p<0.05), as was the second factor, Training (F(3,66)=5.06, p<0.01). The Group × Training interaction was F(3,66)=6.82 (p<0.001). Similarly, for the number of correct reactions, there was a main effect of Group (F(1,22)=23.40, p<0.001), a main effect of Training (F(3,66)=11.60, p<0.001) and a significant Group × Training interaction (F(3,66)=10.33, p<0.001). Our study suggests that 8 weeks of visual function training can improve visual search performance. PMID:26240666

  19. The relation between visualization size, grouping, and user performance.

    PubMed

    Gramazio, Connor C; Schloss, Karen B; Laidlaw, David H

    2014-12-01

    In this paper we make the following contributions: (1) we describe how the grouping, quantity, and size of visual marks affect search time, based on the results from two experiments; (2) we report how search performance relates to self-reported difficulty in finding the target for different display types; and (3) we present design guidelines based on our findings to facilitate the design of effective visualizations. Both Experiments 1 and 2 asked participants to search for a unique target in colored visualizations to test how the grouping, quantity, and size of marks affect user performance. In Experiment 1, the target square was embedded in a grid of squares, and in Experiment 2 the target was a point in a scatterplot. Search performance was faster when colors were spatially grouped than when they were randomly arranged. The quantity of marks had little effect on search time for grouped displays ("pop-out"), but increasing the quantity of marks slowed reaction time for random displays. Regardless of color layout (grouped vs. random), response times were slowest for the smallest mark size and decreased as mark size increased to a point, after which response times plateaued. In addition to these two experiments, we also describe potential application areas, as well as results from a small case study where we report preliminary findings that size may affect how users infer how visualizations should be used. We conclude with a list of design guidelines that focus on how best to create visualizations based on the grouping, quantity, and size of visual marks.

  20. Object based implicit contextual learning: a study of eye movements.

    PubMed

    van Asselen, Marieke; Sampaio, Joana; Pina, Ana; Castelo-Branco, Miguel

    2011-02-01

    Implicit contextual cueing refers to a top-down mechanism in which visual search is facilitated by learned contextual features. In the current study we aimed to investigate the mechanism underlying implicit contextual learning using object information as a contextual cue. Therefore, we measured eye movements during an object-based contextual cueing task. We demonstrated that visual search is facilitated by repeated object information and that this reduction in response times is associated with shorter fixation durations. This indicates that by memorizing associations between objects in our environment we can recognize objects faster, thereby facilitating visual search.

  1. Evaluation of seven hypotheses for metamemory performance in rhesus monkeys

    PubMed Central

    Basile, Benjamin M.; Schroeder, Gabriel R.; Brown, Emily Kathryn; Templer, Victoria L.; Hampton, Robert R.

    2014-01-01

    Knowing the extent to which nonhumans and humans share mechanisms for metacognition will advance our understanding of cognitive evolution and will improve selection of model systems for biomedical research. Some nonhuman species avoid difficult cognitive tests, seek information when ignorant, or otherwise behave in ways consistent with metacognition. There is agreement that some nonhuman animals “succeed” in these metacognitive tasks, but little consensus about the cognitive mechanisms underlying performance. In one paradigm, rhesus monkeys visually searched for hidden food when ignorant of the location of the food, but acted immediately when knowledgeable. This result has been interpreted as evidence that monkeys introspectively monitored their memory to adaptively control information seeking. However, convincing alternative hypotheses have been advanced that might also account for the adaptive pattern of visual searching. We evaluated seven hypotheses using a computerized task in which monkeys chose either to take memory tests immediately or to see the answer again before proceeding to the test. We found no evidence to support the hypotheses of behavioral cue association, rote response learning, expectancy violation, response competition, generalized search strategy, or postural mediation. In contrast, we repeatedly found evidence to support the memory monitoring hypothesis. Monkeys chose to see the answer when memory was poor, either from natural variation or experimental manipulation. We found limited evidence that monkeys also monitored the fluency of memory access. Overall, the evidence indicates that rhesus monkeys can use memory strength as a discriminative cue for information seeking, consistent with introspective monitoring of explicit memory. PMID:25365530

  2. Serial, Covert, Shifts of Attention during Visual Search are Reflected by the Frontal Eye Fields and Correlated with Population Oscillations

    PubMed Central

    Buschman, Timothy J.; Miller, Earl K.

    2009-01-01

    Attention regulates the flood of sensory information into a manageable stream, and so understanding how attention is controlled is central to understanding cognition. Competing theories suggest visual search involves serial and/or parallel allocation of attention, but there is little direct, neural, evidence for either mechanism. Two monkeys were trained to covertly search an array for a target stimulus under visual search (endogenous) and pop-out (exogenous) conditions. Here we present neural evidence in the frontal eye fields (FEF) for serial, covert shifts of attention during search but not pop-out. Furthermore, attention shifts reflected in FEF spiking activity were correlated with 18–34 Hz oscillations in the local field potential, suggesting a ‘clocking’ signal. This provides direct neural evidence that primates can spontaneously adopt a serial search strategy and that these serial covert shifts of attention are directed by the FEF. It also suggests that neuron population oscillations may regulate the timing of cognitive processing. PMID:19679077

  3. Content-based Music Search and Recommendation System

    NASA Astrophysics Data System (ADS)

    Takegawa, Kazuki; Hijikata, Yoshinori; Nishida, Shogo

    Recently, the volume of music data on the Internet has increased rapidly. This has increased the user's cost of finding music that suits their preferences within such a large data set. We propose a content-based music search and recommendation system. This system has an interface for searching and finding music data and an interface for editing a user profile, which is necessary for music recommendation. By exploiting visualizations of the music feature space and of the user profile, the user can search for music and edit the profile. Furthermore, by exploiting the information that can be acquired from each visualized object in a mutually complementary manner, we make it easier for the user to search for music and edit the user profile. Concretely, the system presents information obtained from the user profile when the user is searching for music, and information obtained from the music feature space when the user is editing the profile.

  4. Widespread correlation patterns of fMRI signal across visual cortex reflect eccentricity organization.

    PubMed

    Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan E B; Kastner, Sabine; Hasson, Uri

    2015-02-19

    The human visual system can be divided into over two-dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas.
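
    A small illustrative sketch (not the study's analysis pipeline) of the basic idea: correlate average BOLD time series between cortical sites and check whether sites sharing an eccentricity band correlate more strongly than sites that merely belong to the same visual area. The signals and labels below are simulated assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        n_timepoints = 300
        # Simulated site time series sharing signal by eccentricity band,
        # regardless of which visual area the site belongs to.
        foveal = rng.normal(size=n_timepoints)
        peripheral = rng.normal(size=n_timepoints)
        sites = {
            "V1_foveal":     foveal + 0.5 * rng.normal(size=n_timepoints),
            "V2_foveal":     foveal + 0.5 * rng.normal(size=n_timepoints),
            "V1_peripheral": peripheral + 0.5 * rng.normal(size=n_timepoints),
            "V3_peripheral": peripheral + 0.5 * rng.normal(size=n_timepoints),
        }
        names = list(sites)
        corr = np.corrcoef(np.vstack([sites[name] for name in names]))
        for name, row in zip(names, np.round(corr, 2)):
            print(name, row)   # iso-eccentric pairs show the highest correlations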

  5. Evaluation of a visual layering methodology for colour coding control room displays.

    PubMed

    Van Laar, Darren; Deshe, Ofer

    2002-07-01

    Eighteen people participated in an experiment in which they were asked to search for targets on control-room-like displays that had been produced using three different coding methods. The monochrome coding method displayed the information in black and white only; the maximally discriminable method contained colours chosen for their high perceptual discriminability; and the visual layers method contained colours, developed from psychological and cartographic principles, that grouped information into a perceptual hierarchy. The visual layers method produced significantly faster search times than the other two coding methods, which did not differ significantly from each other. Search time also differed significantly with presentation order and for the method × order interaction. There was no significant difference between the methods in the number of errors made. Participants clearly preferred the visual layers coding method. Proposals are made for the design of experiments to further test and develop the visual layers colour coding methodology.

  6. Selective Maintenance in Visual Working Memory Does Not Require Sustained Visual Attention

    PubMed Central

    Hollingworth, Andrew; Maxcey-Richard, Ashleigh M.

    2012-01-01

    In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in VWM. Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention. PMID:23067118

  7. Urinary oxytocin positively correlates with performance in facial visual search in unmarried males, without specific reaction to infant face.

    PubMed

    Saito, Atsuko; Hamada, Hiroki; Kikusui, Takefumi; Mogi, Kazutaka; Nagasawa, Miho; Mitsui, Shohei; Higuchi, Takashi; Hasegawa, Toshikazu; Hiraki, Kazuo

    2014-01-01

    The neuropeptide oxytocin plays a central role in prosocial and parental behavior in non-human mammals as well as humans. It has been suggested that oxytocin may affect visual processing of infant faces and emotional reaction to infants. Healthy male volunteers (N = 13) were tested for their ability to detect infant or adult faces among adult or infant faces (facial visual search task). Urine samples were collected from all participants before the study to measure the concentration of oxytocin. Urinary oxytocin positively correlated with performance in the facial visual search task. However, task performance and its correlation with oxytocin concentration did not differ between infant faces and adult faces. Our data suggests that endogenous oxytocin is related to facial visual cognition, but does not promote infant-specific responses in unmarried men who are not fathers.

  8. Target-present guessing as a function of target prevalence and accumulated information in visual search.

    PubMed

    Peltier, Chad; Becker, Mark W

    2017-05-01

    Target prevalence influences visual search behavior. At low target prevalence, miss rates are high and false alarms are low, while the opposite is true at high prevalence. Several models of search aim to describe search behavior, one of which has been specifically intended to model search at varying prevalence levels. The multiple decision model (Wolfe & Van Wert, Current Biology, 20(2), 121--124, 2010) posits that all searches that end before the observer detects a target result in a target-absent response. However, researchers have found very high false alarms in high-prevalence searches, suggesting that prevalence rates may be used as a source of information to make "educated guesses" after search termination. Here, we further examine the ability for prevalence level and knowledge gained during visual search to influence guessing rates. We manipulate target prevalence and the amount of information that an observer accumulates about a search display prior to making a response to test if these sources of evidence are used to inform target present guess rates. We find that observers use both information about target prevalence rates and information about the proportion of the array inspected prior to making a response allowing them to make an informed and statistically driven guess about the target's presence.
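
    One hedged way to formalize the "educated guess" idea above (not the authors' model): after inspecting a fraction of the display without finding a target, an ideal guesser combines the prevalence prior with how much of the array remains unsearched. The formula assumes at most one target, uniformly located, with perfect detection of inspected items.

        def p_present_given_not_found(prevalence, fraction_inspected):
            """Posterior probability that a target is present after inspecting a
            fraction of the display without finding one (illustrative assumption:
            at most one target, uniformly located, perfectly detected if inspected)."""
            missed = prevalence * (1.0 - fraction_inspected)  # present but not yet inspected
            absent = 1.0 - prevalence                         # truly absent
            return missed / (missed + absent)

        # High prevalence sustains target-present guesses even late in search;
        # low prevalence pushes early quitting toward target-absent responses.
        for prevalence in (0.1, 0.5, 0.9):
            print(prevalence, [round(p_present_given_not_found(prevalence, f), 2)
                               for f in (0.0, 0.5, 0.9)])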

  9. Perceptual load corresponds with factors known to influence visual search

    PubMed Central

    Roper, Zachary J. J.; Cosman, Joshua D.; Vecera, Shaun P.

    2014-01-01

    One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a non-circular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spill-over to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. These results suggest that perceptual load might be defined in part by well-characterized, continuous factors that influence visual search. PMID:23398258

  10. Components of visual search in childhood-onset schizophrenia and attention-deficit/hyperactivity disorder.

    PubMed

    Karatekin, C; Asarnow, R F

    1998-10-01

    This study tested the hypotheses that visual search impairments in schizophrenia are due to a delay in the initiation of search or a slow rate of serial search. We determined the specificity of these impairments by comparing children with schizophrenia to children with attention-deficit hyperactivity disorder (ADHD) and age-matched normal children. The hypotheses were tested within the framework of feature integration theory by administering to the children tasks tapping parallel and serial search. Search rate was estimated from the slope of the search functions, and the duration of the initial stages of search from the time to make the first saccade on each trial. As expected, manual response times were elevated in both clinical groups. Contrary to expectation, ADHD, but not schizophrenic, children were delayed in the initiation of serial search. Finally, both groups showed a clear dissociation between intact parallel search rates and slowed serial search rates.

  11. On the Local Convergence of Pattern Search

    NASA Technical Reports Server (NTRS)

    Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.
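
    A minimal coordinate (compass) pattern-search sketch illustrating the role of the step-length control parameter discussed above: the step length shrinks only after a full pattern of trial points fails to improve the objective, so its size can serve as a practical stationarity measure and stopping criterion. This is a generic textbook variant written for illustration, not the paper's specific algorithm or analysis.

        import numpy as np

        def compass_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=10000):
            """Pattern search over the coordinate directions +/- e_i. The step-length
            control parameter `step` is reduced only when no trial point improves f;
            the search stops once `step` falls below `tol`."""
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            for _ in range(max_iter):
                if step < tol:                    # small step length ~ near-stationarity
                    break
                improved = False
                for i in range(x.size):
                    for sign in (1.0, -1.0):
                        trial = x.copy()
                        trial[i] += sign * step
                        ft = f(trial)
                        if ft < fx:
                            x, fx, improved = trial, ft, True
                if not improved:
                    step *= shrink                # contract the pattern
            return x, fx, step

        quadratic = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2
        print(compass_search(quadratic, [0.0, 0.0]))   # converges near (3, -1)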

  12. Visual search and urban city driving under the influence of marijuana and alcohol

    DOT National Transportation Integrated Search

    2000-03-01

    The purpose of this study was to empirically determine the separate and combined effects of delta-9-tetrahydrocannabinol (THC) and alcohol on visual search and actual city driving performance. On separate evenings, 16 subjects were given weight-calib...

  13. Top-down dimensional weight set determines the capture of visual attention: evidence from the PCN component.

    PubMed

    Töllner, Thomas; Müller, Hermann J; Zehetleitner, Michael

    2012-07-01

    Visual search for feature singletons is slowed when a task-irrelevant, but more salient distracter singleton is concurrently presented. While there is a consensus that this distracter interference effect can be influenced by internal system settings, it remains controversial at what stage of processing this influence starts to affect visual coding. Advocates of the "stimulus-driven" view maintain that the initial sweep of visual processing is entirely driven by physical stimulus attributes and that top-down settings can bias visual processing only after selection of the most salient item. By contrast, opponents argue that top-down expectancies can alter the initial selection priority, so that focal attention is "not automatically" shifted to the location exhibiting the highest feature contrast. To precisely trace the allocation of focal attention, we analyzed the Posterior-Contralateral-Negativity (PCN) in a task in which the likelihood (expectancy) with which a distracter occurred was systematically varied. Our results show that both high (vs. low) distracter expectancy and experiencing a distracter on the previous trial speed up the timing of the target-elicited PCN. Importantly, there was no distracter-elicited PCN, indicating that participants did not shift attention to the distracter before selecting the target. This pattern unambiguously demonstrates that preattentive vision is top-down modifiable.

  14. Forever young: Visual representations of gender and age in online dating sites for older adults.

    PubMed

    Gewirtz-Meydan, Ateret; Ayalon, Liat

    2017-06-13

    Online dating has become increasingly popular among older adults following broader social media adoption patterns. The current study examined the visual representations of people on 39 dating sites intended for the older population, with a particular focus on the visualization of the intersection between age and gender. All 39 dating sites for older adults were located through the Google search engine. Visual thematic analysis was performed with reference to general, non-age-related signs (e.g., facial expression, skin color), signs of aging (e.g., perceived age, wrinkles), relational features (e.g., proximity between individuals), and additional features such as number of people presented. The visual analysis in the present study revealed a clear intersection between ageism and sexism in the presentation of older adults. The majority of men and women were smiling and had a fair complexion, with light eye color and perceived age of younger than 60. Older women were presented as younger and wore more cosmetics as compared with older men. The present study stresses the social regulation of sexuality, as only heterosexual couples were presented. The narrow representation of older adults and the anti-aging messages portrayed in the pictures convey that love, intimacy, and sexual activity are for older adults who are "forever young."

  15. [Internet search for counseling offers for older adults suffering from visual impairment].

    PubMed

    Himmelsbach, I; Lipinski, J; Putzke, M

    2016-11-01

    Visual impairment is a relevant problem of aging. In many cases promising therapeutic options exist, but patients are often left with visual deficits, which require a high degree of individualized counseling. This article analyzed which counseling services patients and relatives can find using simple, routine internet searches. Analyses were performed using colloquial search terms in the Google search engine to find counseling options for elderly people with visual impairments available via the internet. With this strategy 189 counseling services were found, with very heterogeneous regional distribution. The counseling services found on the internet commonly address topics such as therapeutic interventions or visual aids, corresponding to the rehabilitation professions most visible online, such as ophthalmologists and opticians. Regarding content on psychosocial issues and help with daily tasks, self-help and support groups offer the most differentiated and broadest spectrum. Support services for daily living tasks and psychosocial counseling from social providers were more difficult to find with these search terms despite a strong presence on the internet. There are a large number of providers of counseling and consulting for older persons with visual impairment. In order to be found more easily by patients and to be recommended more often by ophthalmologists and general practitioners, the internet presence of providers must be improved, especially for providers of daily living and psychosocial support services.

  16. The influence of action video game playing on eye movement behaviour during visual search in abstract, in-game and natural scenes.

    PubMed

    Azizi, Elham; Abel, Larry A; Stainer, Matthew J

    2017-02-01

    Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. In other words, game training only skilled participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes.

  17. Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE.

    PubMed

    Demelo, Jonathan; Parsons, Paul; Sedig, Kamran

    2017-02-02

    Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface by (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multi-stage triaging support to help mitigate the information overload problem. We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts. ©Jonathan Demelo, Paul Parsons, Kamran Sedig. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 02.02.2017.
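
    A toy sketch of the first design strategy described above, i.e. using ontology terms to help articulate a query. The ontology fragment, documents, and function names below are invented for illustration and are unrelated to the actual OVERT-MED implementation or the real HPO.

        # Hypothetical HPO-like fragment: term -> narrower/related terms.
        ontology = {
            "abnormal gait": ["ataxic gait", "shuffling gait"],
            "seizure": ["febrile seizure", "absence seizure"],
        }
        documents = {
            1: "case report of ataxic gait and nystagmus",
            2: "absence seizure in pediatric patients",
            3: "visual search performance in radiology residents",
        }

        def expand(term):
            """Expand a user term with ontology-derived vocabulary."""
            return [term] + ontology.get(term, [])

        def search(term):
            """Return ids of documents matching the term or any expansion of it."""
            terms = expand(term)
            return sorted(doc_id for doc_id, text in documents.items()
                          if any(t in text for t in terms))

        print(search("abnormal gait"))   # -> [1], found via the expanded term "ataxic gait"
        print(search("seizure"))         # -> [2]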

  18. Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE

    PubMed Central

    2017-01-01

    Background: Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Objective: Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface by (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multi-stage triaging support to help mitigate the information overload problem. Methods: We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Results: Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Conclusions: Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts. PMID:28153818

  19. Set size manipulations reveal the boundary conditions of perceptual ensemble learning.

    PubMed

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2017-11-01

    Recent evidence suggests that observers can grasp patterns of feature variations in the environment with surprising efficiency. During visual search tasks where all distractors are randomly drawn from a certain distribution rather than all being homogeneous, observers are capable of learning highly complex statistical properties of distractor sets. After only a few trials (learning phase), the statistical properties of distributions - mean, variance and crucially, shape - can be learned, and these representations affect search during a subsequent test phase (Chetverikov, Campana, & Kristjánsson, 2016). To assess the limits of such distribution learning, we varied the information available to observers about the underlying distractor distributions by manipulating set size during the learning phase in two experiments. We found that robust distribution learning only occurred for large set sizes. We also used set size to assess whether the learning of distribution properties makes search more efficient. The results reveal how a certain minimum of information is required for learning to occur, thereby delineating the boundary conditions of learning of statistical variation in the environment. However, the benefits of distribution learning for search efficiency remain unclear. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Patterned light flash evoked short latency activity in the visual system of visually normal and in amblyopic subjects.

    PubMed

    Sjöström, A; Abrahamsson, M

    1994-04-01

    In a previous experimental study on the anaesthetized cat it was shown that a short-latency (35-40 ms) cortical potential changed polarity due to the presence or absence of a pattern in the flash stimulus. The results suggested one pathway of neuronal activation in the cortex to a pattern that was within the level of resolution and another to patterns that were not. It was implied that a similar difference in impulse transmission to pattern and non-pattern stimuli may be recorded in humans. The present paper describes recordings of the short-latency visual evoked response to varying light flash checkerboard pattern stimuli of high intensity in visually normal and amblyopic children and adults. When stimulating the normal eye, a visual evoked response potential with a peak latency between 35 and 40 ms showed a polarity change to patterned compared to non-patterned stimulation. The visual evoked response resolution limit could be correlated to a visual acuity of 0.5 and below. In amblyopic eyes the shift in polarity was recorded at the acuity limit level. The latency of the pattern-dependent potential was increased in patients with amblyopia compared to normal subjects, but was not directly related to the degree of amblyopia. It is concluded that the short-latency visual evoked response, which mainly represents retino-geniculo-cortical activation, may be used to estimate visual resolution below an acuity level of 0.5. (ABSTRACT TRUNCATED AT 250 WORDS)

  1. A deep (learning) dive into visual search behaviour of breast radiologists

    NASA Astrophysics Data System (ADS)

    Mall, Suneeta; Brennan, Patrick C.; Mello-Thoms, Claudia

    2018-03-01

    Visual search, the process of detecting and identifying objects using eye movements (saccades) and foveal vision, has been studied to identify root causes of errors in the interpretation of mammography. The aim of this study is to model visual search behaviour of radiologists and their interpretation of mammograms using deep machine learning approaches. Our model is based on a deep convolutional neural network, a biologically-inspired multilayer perceptron that simulates the visual cortex, and is reinforced with transfer learning techniques. Eye tracking data obtained from 8 radiologists (of varying experience levels in reading mammograms) reviewing 120 two-view digital mammography cases (59 cancers) have been used to train the model, which was pre-trained with the ImageNet dataset for transfer learning. Areas of the mammogram that received direct (foveally fixated), indirect (peripherally fixated) or no (never fixated) visual attention were extracted from radiologists' visual search maps (obtained by a head-mounted eye tracking device). These areas, along with the radiologists' assessment (including confidence of the assessment) of suspected malignancy were used to model: 1) Radiologists' decision; 2) Radiologists' confidence in that decision; and 3) The attentional level (i.e. foveal, peripheral or none) received by an area of the mammogram. Our results indicate high accuracy and low misclassification in modelling such behaviours.
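
    The abstract above describes transfer learning from ImageNet to a convolutional model of radiologists' behaviour. The sketch below shows one generic way such a setup could look, assuming PyTorch/torchvision; the ResNet-18 backbone, patch size, and three-way attention-level head are illustrative assumptions rather than the authors' architecture.

        # Minimal sketch, assuming PyTorch/torchvision; not the authors' model.
        import torch
        import torch.nn as nn
        from torchvision import models

        NUM_CLASSES = 3  # foveal, peripheral, never fixated (illustrative head)

        # Start from an ImageNet-pretrained backbone (transfer learning) ...
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        # ... freeze the convolutional features and replace the classifier head.
        for p in backbone.parameters():
            p.requires_grad = False
        backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

        optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        def train_step(patches: torch.Tensor, labels: torch.Tensor) -> float:
            """One gradient step on a batch of image patches (N, 3, 224, 224)
            labelled with the attentional level they received."""
            backbone.train()
            optimizer.zero_grad()
            loss = criterion(backbone(patches), labels)
            loss.backward()
            optimizer.step()
            return loss.item()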

  2. Object-based Encoding in Visual Working Memory: Evidence from Memory-driven Attentional Capture.

    PubMed

    Gao, Zaifeng; Yu, Shixian; Zhu, Chengfeng; Shui, Rende; Weng, Xuchu; Li, Peng; Shen, Mowei

    2016-03-09

    Visual working memory (VWM) adopts a specific manner of object-based encoding (OBE) to extract perceptual information: Whenever one feature-dimension is selected for entry into VWM, the others are also extracted. Currently most studies revealing OBE probed an 'irrelevant-change distracting effect', where changes of irrelevant-features dramatically affected the performance of the target feature. However, the existence of irrelevant-feature change may affect participants' processing manner, leading to a false-positive result. The current study conducted a strict examination of OBE in VWM, by probing whether irrelevant-features guided the deployment of attention in visual search. The participants memorized an object's colour yet ignored shape and concurrently performed a visual-search task. They searched for a target line among distractor lines, each embedded within a different object. One object in the search display could match the shape, colour, or both dimensions of the memory item, but this object never contained the target line. Relative to a neutral baseline, where there was no match between the memory and search displays, search time was significantly prolonged in all match conditions, regardless of whether the memory item was displayed for 100 or 1000 ms. These results suggest that task-irrelevant shape was extracted into VWM, supporting OBE in VWM.

  3. Electrophysiological evidence that top-down knowledge controls working memory processing for subsequent visual search.

    PubMed

    Kawashima, Tomoya; Matsumoto, Eriko

    2016-03-23

    Items in working memory guide visual attention toward a memory-matching object. Recent studies have shown that when searching for an object this attentional guidance can be modulated by knowing the probability that the target will match an item in working memory. Here, we recorded the P3 and contralateral delay activity to investigate how top-down knowledge controls the processing of working memory items. Participants performed a memory task (recognition only) and a memory-or-search task (recognition or visual search) in which they were asked to maintain two colored oriented bars in working memory. For visual search, we manipulated the probability that the target had the same color as the memorized items (0, 50, or 100%). Participants knew the probabilities before the task. Target detection in the 100% match condition was faster than in the 50% match condition, indicating that participants used their knowledge of the probabilities. We found that the P3 amplitude in the 100% condition was larger than in the other conditions and that the contralateral delay activity amplitude did not vary across conditions. These results suggest that more attention was allocated to the memory items when observers knew in advance that their color would likely match a target. This led to better search performance despite using qualitatively equal working memory representations.

  4. BATSE Gamma-Ray Burst Line Search. IV. Line Candidates from the Visual Search

    NASA Astrophysics Data System (ADS)

    Band, D. L.; Ryder, S.; Ford, L. A.; Matteson, J. L.; Palmer, D. M.; Teegarden, B. J.; Briggs, M. S.; Paciesas, W. S.; Pendleton, G. N.; Preece, R. D.

    1996-02-01

    We evaluate the significance of the line candidates identified by a visual search of burst spectra from BATSE's Spectroscopy Detectors. None of the candidates satisfy our detection criteria: an F-test probability less than 10^-4 for a feature in one detector and consistency among the detectors that viewed the burst. Most of the candidates are not very significant and are likely to be fluctuations. Because of the expectation of finding absorption lines, the search was biased toward absorption features. We do not have a quantitative measure of the completeness of the search, which would enable a comparison with previous missions. Therefore, a more objective computerized search has begun.
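
    For readers unfamiliar with the detection criterion quoted above, the sketch below computes a nested-model F-test probability for adding a line component to a continuum fit and checks it against the 10^-4 threshold. The chi-square values and degrees of freedom are invented for illustration; this is not the BATSE analysis code.

        # Minimal sketch of an F-test on the chi-square improvement from adding a line
        # component to a continuum fit; fit statistics below are placeholders.
        from scipy.stats import f

        def line_f_test_probability(chi2_continuum: float, dof_continuum: int,
                                    chi2_with_line: float, dof_with_line: int) -> float:
            """Probability that the chi-square improvement from adding the line
            component arises by chance (nested-model F-test)."""
            extra_params = dof_continuum - dof_with_line
            f_stat = ((chi2_continuum - chi2_with_line) / extra_params) / \
                     (chi2_with_line / dof_with_line)
            return f.sf(f_stat, extra_params, dof_with_line)

        # Example with made-up numbers: a strong candidate in one detector.
        p = line_f_test_probability(chi2_continuum=240.0, dof_continuum=120,
                                    chi2_with_line=190.0, dof_with_line=117)
        print(p < 1e-4)  # the single-detector criterion is met only if this is True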

  5. SATORI: a system for ontology-guided visual exploration of biomedical data repositories.

    PubMed

    Lekschas, Fritz; Gehlenborg, Nils

    2018-04-01

    The ever-increasing number of biomedical datasets provides tremendous opportunities for re-use but current data repositories provide limited means of exploration apart from text-based search. Ontological metadata annotations provide context by semantically relating datasets. Visualizing this rich network of relationships can improve the explorability of large data repositories and help researchers find datasets of interest. We developed SATORI, an integrative search and visual exploration interface for the exploration of biomedical data repositories. The design is informed by a requirements analysis through a series of semi-structured interviews. We evaluated the implementation of SATORI in a field study on a real-world data collection. SATORI enables researchers to seamlessly search, browse and semantically query data repositories via two visualizations that are highly interconnected with a powerful search interface. SATORI is an open-source web application, which is freely available at http://satori.refinery-platform.org and integrated into the Refinery Platform. nils@hms.harvard.edu. Supplementary data are available at Bioinformatics online.

  6. Superior Visual Search and Crowding Abilities Are Not Characteristic of All Individuals on the Autism Spectrum.

    PubMed

    Lindor, Ebony; Rinehart, Nicole; Fielding, Joanne

    2018-05-22

    Individuals with Autism Spectrum Disorder (ASD) often excel on visual search and crowding tasks; however, inconsistent findings suggest that this 'islet of ability' may not be characteristic of the entire spectrum. We examined whether performance on these tasks changed as a function of motor proficiency in children with varying levels of ASD symptomology. Children with high ASD symptomology outperformed all others on complex visual search tasks, but only if their motor skills were rated at, or above, age expectations. For the visual crowding task, children with high ASD symptomology and superior motor skills exhibited enhanced target discrimination, whereas those with high ASD symptomology but poor motor skills experienced deficits. These findings may resolve some of the discrepancies in the literature.

  7. The role of pattern recognition in creative problem solving: a case study in search of new mathematics for biology.

    PubMed

    Hong, Felix T

    2013-09-01

    Rosen classified sciences into two categories: formalizable and unformalizable. Whereas formalizable sciences expressed in terms of mathematical theories were highly valued by Rutherford, Hutchins pointed out that unformalizable parts of soft sciences are of genuine interest and importance. Attempts to build mathematical theories for biology in the past century were met with modest and sporadic successes, and only in simple systems. In this article, a qualitative model of humans' high creativity is presented as a starting point to consider whether the gap between soft and hard sciences is bridgeable. Simonton's chance-configuration theory, which mimics the process of evolution, was modified and improved. By treating problem solving as a process of pattern recognition, the known dichotomy of visual thinking vs. verbal thinking can be recast in terms of analog pattern recognition (non-algorithmic process) and digital pattern recognition (algorithmic process), respectively. Additional concepts commonly encountered in computer science, operations research and artificial intelligence were also invoked: heuristic searching, parallel and sequential processing. The refurbished chance-configuration model is now capable of explaining several long-standing puzzles in human cognition: a) why novel discoveries often came without prior warning, b) why some creators had no idea about the source of inspiration even after the fact, c) why some creators were consistently luckier than others, and, last but not least, d) why it was so difficult to explain what intuition, inspiration, insight, hunch, serendipity, etc. are all about. The predictive power of the present model was tested by means of resolving Zeno's paradox of Achilles and the Tortoise after one deliberately invoked visual thinking. Additional evidence of its predictive power must await future large-scale field studies. The analysis was further generalized to constructions of scientific theories in general. This approach is in line with Campbell's evolutionary epistemology. Instead of treating science as immutable Natural Laws, which already existed and which were just waiting to be discovered, scientific theories are regarded as humans' mental constructs, which must be invented to reconcile with observed natural phenomena. In this way, the pursuit of science is shifted from diligent and systematic (or random) searching for existing Natural Laws to firing up humans' imagination to comprehend Nature's behavioral pattern. The insights gained in understanding human creativity indicated that new mathematics that is capable of handling effectively parallel processing and human subjectivity is sorely needed. The past classification of formalizability vs. non-formalizability was made in reference to contemporary mathematics. Rosen's conclusion did not preclude future inventions of new biology-friendly mathematics. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Evolution and Optimality of Similar Neural Mechanisms for Perception and Action during Search

    PubMed Central

    Zhang, Sheng; Eckstein, Miguel P.

    2010-01-01

    A prevailing theory proposes that the brain's two visual pathways, the ventral and dorsal, lead to differing visual processing and world representations for conscious perception than those for action. Others have claimed that perception and action share much of their visual processing. But which of these two neural architectures is favored by evolution? Successful visual search is life-critical and here we investigate the evolution and optimality of neural mechanisms mediating perception and eye movement actions for visual search in natural images. We implement an approximation to the ideal Bayesian searcher with two separate processing streams, one controlling the eye movements and the other stream determining the perceptual search decisions. We virtually evolved the neural mechanisms of the searchers' two separate pathways built from linear combinations of primary visual cortex receptive fields (V1) by making the simulated individuals' probability of survival depend on the perceptual accuracy finding targets in cluttered backgrounds. We find that for a variety of targets, backgrounds, and dependence of target detectability on retinal eccentricity, the mechanisms of the searchers' two processing streams converge to similar representations showing that mismatches in the mechanisms for perception and eye movements lead to suboptimal search. Three exceptions which resulted in partial or no convergence were a case of an organism for which the targets are equally detectable across the retina, an organism with sufficient time to foveate all possible target locations, and a strict two-pathway model with no interconnections and differential pre-filtering based on parvocellular and magnocellular lateral geniculate cell properties. Thus, similar neural mechanisms for perception and eye movement actions during search are optimal and should be expected from the effects of natural selection on an organism with limited time to search for food that is not equi-detectable across its retina and interconnected perception and action neural pathways. PMID:20838589
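
    The toy simulation below sketches the evolutionary logic described above under drastic simplifying assumptions: each simulated individual carries one linear mechanism that selects where to "fixate" among noisy patches and another that makes the perceptual decision at the fixated patch, and only final detection accuracy affects survival. All parameters are illustrative; the study's ideal-searcher and V1-receptive-field machinery is not reproduced here.

        # Toy sketch (not the authors' simulation) of selection acting on paired
        # action and perception mechanisms.
        import numpy as np

        rng = np.random.default_rng(0)
        TARGET = np.array([1.0, 0.8, 0.6, 0.4])   # stand-in for a target template
        N_PATCH, NOISE, POP, GEN = 4, 1.5, 40, 40

        def accuracy(w_action, w_percept, trials=100):
            correct = 0
            for _ in range(trials):
                patches = rng.normal(0.0, NOISE, size=(N_PATCH, TARGET.size))
                loc = rng.integers(N_PATCH)
                patches[loc] += TARGET                        # one patch holds the target
                fixated = int(np.argmax(patches @ w_action))  # action stream picks a patch
                present = patches[fixated] @ w_percept > 0.5  # perception stream decides
                correct += (present and fixated == loc)
            return correct / trials

        pop = rng.normal(size=(POP, 2, TARGET.size))          # (action, perception) pairs
        for _ in range(GEN):
            fit = np.array([accuracy(ind[0], ind[1]) for ind in pop])
            parents = pop[np.argsort(fit)[-POP // 2:]]        # keep the fitter half
            children = parents + rng.normal(0.0, 0.1, size=parents.shape)
            pop = np.concatenate([parents, children])

        best = pop[np.argmax([accuracy(ind[0], ind[1]) for ind in pop])]
        cos = best[0] @ best[1] / (np.linalg.norm(best[0]) * np.linalg.norm(best[1]))
        print(f"similarity of evolved action vs. perception mechanisms: {cos:.2f}")

    With these toy settings both evolved mechanisms typically end up aligned with the target template, echoing in caricature the convergence reported in the study.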

  9. Playing shooter and driving videogames improves top-down guidance in visual search.

    PubMed

    Wu, Sijing; Spence, Ian

    2013-05-01

    Playing action videogames is known to improve visual spatial attention and related skills. Here, we showed that playing action videogames also improves classic visual search, as well as the ability to locate targets in a dual search that mimics certain aspects of an action videogame. In Experiment 1A, first-person shooter (FPS) videogame players were faster than nonplayers in both feature search and conjunction search, and in Experiment 1B, they were faster and more accurate in a peripheral search and identification task while simultaneously performing a central search. In Experiment 2, we showed that 10 h of play could improve the performance of nonplayers on each of these tasks. Three different genres of videogames were used for training: two action games and a 3-D puzzle game. Participants who played an action game (either an FPS or a driving game) achieved greater gains on all search tasks than did those who trained using the puzzle game. Feature searches were faster after playing an action videogame, suggesting that players developed a better target template to guide search in a top-down manner. The results of the dual search suggest that, in addition to enhancing the ability to divide attention, playing an action game improves the top-down guidance of attention to possible target locations. The results have practical implications for the development of training tools to improve perceptual and cognitive skills.

  10. WOVOdat - An online, growing library of worldwide volcanic unrest

    NASA Astrophysics Data System (ADS)

    Newhall, C. G.; Costa, F.; Ratdomopurbo, A.; Venezky, D. Y.; Widiwijayanti, C.; Win, Nang Thin Zar; Tan, K.; Fajiculay, E.

    2017-10-01

    The World Organization of Volcano Observatories (WOVO), with major support from the Earth Observatory of Singapore, is developing a web-accessible database of seismic, geodetic, gas, hydrologic, and other unrest from volcanoes around the world. This database, WOVOdat, is intended for reference during volcanic crises, comparative studies, basic research on pre-eruption processes, teaching, and outreach. Data are already processed to have physical meaning, e.g. earthquake hypocenters rather than voltages or arrival times, and are historical rather than real-time, ranging in age from a few days to several decades. Data from > 900 episodes of unrest covering > 75 volcanoes are already accessible. Users can visualize and compare changes from one episode of unrest or from one volcano to the next. As the database grows more complete, users will be able to analyze patterns of unrest in the same way that epidemiologists study the spatial and temporal patterns and associations among diseases. WOVOdat was opened for station and data visualization in August 2013, and now includes utilities for data downloads and Boolean searches. Many more data sets are being added, as well as utilities interfacing to new applications, e.g., the construction of event trees. For more details, please see www.wovodat.org.

  11. Resource-sharing between internal maintenance and external selection modulates attentional capture by working memory content.

    PubMed

    Kiyonaga, Anastasia; Egner, Tobias

    2014-01-01

    It is unclear why and under what circumstances working memory (WM) and attention interact. Here, we apply the logic of the time-based resource-sharing (TBRS) model of WM (e.g., Barrouillet et al., 2004) to explore the mixed findings of a separate, but related, literature that studies the guidance of visual attention by WM contents. Specifically, we hypothesize that the linkage between WM representations and visual attention is governed by a time-shared cognitive resource that alternately refreshes internal (WM) and selects external (visual attention) information. If this were the case, WM content should guide visual attention (involuntarily), but only when there is time for it to be refreshed in an internal focus of attention. To provide an initial test for this hypothesis, we examined whether the amount of unoccupied time during a WM delay could impact the magnitude of attentional capture by WM contents. Participants were presented with a series of visual search trials while they maintained a WM cue for a delayed-recognition test. WM cues could coincide with the search target, a distracter, or neither. We varied both the number of searches to be performed, and the amount of available time to perform them. Slowing of visual search by a WM matching distracter, and facilitation by a matching target, were curtailed when the delay was filled with fast-paced (refreshing-preventing) search trials, as was subsequent memory probe accuracy. WM content may, therefore, only capture visual attention when it can be refreshed, suggesting that internal (WM) and external attention demands reciprocally impact one another because they share a limited resource. The TBRS rationale can thus be applied in a novel context to explain why WM contents capture attention, and under what conditions that effect should be observed.

  12. Resource-sharing between internal maintenance and external selection modulates attentional capture by working memory content

    PubMed Central

    Kiyonaga, Anastasia; Egner, Tobias

    2014-01-01

    It is unclear why and under what circumstances working memory (WM) and attention interact. Here, we apply the logic of the time-based resource-sharing (TBRS) model of WM (e.g., Barrouillet et al., 2004) to explore the mixed findings of a separate, but related, literature that studies the guidance of visual attention by WM contents. Specifically, we hypothesize that the linkage between WM representations and visual attention is governed by a time-shared cognitive resource that alternately refreshes internal (WM) and selects external (visual attention) information. If this were the case, WM content should guide visual attention (involuntarily), but only when there is time for it to be refreshed in an internal focus of attention. To provide an initial test for this hypothesis, we examined whether the amount of unoccupied time during a WM delay could impact the magnitude of attentional capture by WM contents. Participants were presented with a series of visual search trials while they maintained a WM cue for a delayed-recognition test. WM cues could coincide with the search target, a distracter, or neither. We varied both the number of searches to be performed, and the amount of available time to perform them. Slowing of visual search by a WM matching distracter—and facilitation by a matching target—were curtailed when the delay was filled with fast-paced (refreshing-preventing) search trials, as was subsequent memory probe accuracy. WM content may, therefore, only capture visual attention when it can be refreshed, suggesting that internal (WM) and external attention demands reciprocally impact one another because they share a limited resource. The TBRS rationale can thus be applied in a novel context to explain why WM contents capture attention, and under what conditions that effect should be observed. PMID:25221499

  13. Chemical and visual communication during mate searching in rock shrimp.

    PubMed

    Díaz, Eliecer R; Thiel, Martin

    2004-06-01

    Mate searching in crustaceans depends on different communicational cues, of which chemical and visual cues are most important. Herein we examined the role of chemical and visual communication during mate searching and assessment in the rock shrimp Rhynchocinetes typus. Adult male rock shrimp experience major ontogenetic changes. The terminal molt stages (named "robustus") are dominant and capable of monopolizing females during the mating process. Previous studies had shown that most females preferably mate with robustus males, but how these dominant males and receptive females find each other is uncertain, and is the question we examined herein. In a Y-maze designed to test for the importance of waterborne chemical cues, we observed that females approached the robustus male significantly more often than the typus male. Robustus males, however, were unable to locate receptive females via chemical signals. Using an experimental set-up that allowed testing for the importance of visual cues, we demonstrated that receptive females do not use visual cues to select robustus males, but robustus males use visual cues to find receptive females. Visual cues used by the robustus males were the tumults created by agitated aggregations of subordinate typus males around the receptive females. These results indicate a strong link between sexual communication and the mating system of rock shrimp in which dominant males monopolize receptive females. We found that females and males use different (sex-specific) communicational cues during mate searching and assessment, and that the sexual communication of rock shrimp is similar to that of the American lobster, where females are first attracted to the dominant males by chemical cues emitted by these males. A brief comparison between these two species shows that female behaviors during sexual communication contribute strongly to the outcome of mate searching and assessment.

  14. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    PubMed

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.

  15. Visual Exploratory Search of Relationship Graphs on Smartphones

    PubMed Central

    Ouyang, Jianquan; Zheng, Hao; Kong, Fanbin; Liu, Tianming

    2013-01-01

    This paper presents a novel framework for Visual Exploratory Search of Relationship Graphs on Smartphones (VESRGS) that is composed of three major components: inference and representation of semantic relationship graphs on the Web via meta-search, visual exploratory search of relationship graphs through both querying and browsing strategies, and human-computer interactions via the multi-touch interface and mobile Internet on smartphones. In comparison with traditional lookup search methodologies, the proposed VESRGS system is characterized with the following perceived advantages. 1) It infers rich semantic relationships between the querying keywords and other related concepts from large-scale meta-search results from Google, Yahoo! and Bing search engines, and represents semantic relationships via graphs; 2) the exploratory search approach empowers users to naturally and effectively explore, adventure and discover knowledge in a rich information world of interlinked relationship graphs in a personalized fashion; 3) it effectively takes the advantages of smartphones’ user-friendly interfaces and ubiquitous Internet connection and portability. Our extensive experimental results have demonstrated that the VESRGS framework can significantly improve the users’ capability of seeking the most relevant relationship information to their own specific needs. We envision that the VESRGS framework can be a starting point for future exploration of novel, effective search strategies in the mobile Internet era. PMID:24223936
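
    As a loose illustration of the first component above (inferring a relationship graph from meta-search results), the sketch below builds a weighted co-occurrence graph with networkx from a few placeholder result snippets. The snippets, concept list, and weighting scheme are hypothetical; the actual VESRGS inference over Google, Yahoo! and Bing results is more elaborate.

        # Minimal sketch of co-occurrence-based relationship-graph construction;
        # the snippets and concepts are placeholders, not meta-search output.
        from collections import Counter
        from itertools import combinations
        import networkx as nx

        QUERY = "visual search"
        SNIPPETS = [
            "visual search and eye tracking in radiology expertise",
            "attention and working memory guide visual search",
            "eye tracking reveals attention during radiology reading",
        ]
        CONCEPTS = {"eye tracking", "attention", "working memory", "radiology", "expertise"}

        graph = nx.Graph()
        graph.add_node(QUERY)
        pair_counts = Counter()
        for snippet in SNIPPETS:
            found = {c for c in CONCEPTS if c in snippet}
            if QUERY in snippet:
                found.add(QUERY)
            pair_counts.update(combinations(sorted(found), 2))

        for (a, b), w in pair_counts.items():
            graph.add_edge(a, b, weight=w)

        # Neighbours of the query, ranked by edge weight, would seed the exploratory view.
        ranked = sorted(graph[QUERY].items(), key=lambda kv: -kv[1]["weight"])
        print([(n, d["weight"]) for n, d in ranked])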

  16. Distractor ratio and grouping processes in visual conjunction search.

    PubMed

    Poisson, M E; Wilkinson, F

    1992-01-01

    According to feature integration theory, conjunction search is conducted via a serial self-terminating search. However, effects attributed to search processes operating on the entire display may actually reflect search restricted to elements defined by a single feature. In experiment 1 this question is addressed in a reaction-time (RT) paradigm by varying distractor ratios within an array of fixed size. For trials in which the target was present in the array, RT functions were roughly symmetric, the shortest RTs being for extreme distractor ratios, and the longest RTs being for arrays in which there were an equal number of each distractor type. This result is superficially consistent with Zohary and Hochstein's interpretation that subjects search for only one distractor type and are able to switch search strategy from trial to trial. However, negative-trial data from experiment 1 cast doubt on this interpretation. In experiment 2 the possible role of 'pop out' and of distractor grouping in visual conjunction search is investigated. Results of experiment 2 suggest that grouping may play a more important role than does distractor ratio, and point to the importance of the spatial layout of the target and of the distractor elements in visual conjunction search. Results of experiment 2 also provide clear evidence that groups of spatially adjacent homogeneous elements may be processed as a unit.

  17. A Parallel Genetic Algorithm to Discover Patterns in Genetic Markers that Indicate Predisposition to Multifactorial Disease

    PubMed Central

    Rausch, Tobias; Thomas, Alun; Camp, Nicola J.; Cannon-Albright, Lisa A.; Facelli, Julio C.

    2008-01-01

    This paper describes a novel algorithm to analyze genetic linkage data using pattern recognition techniques and genetic algorithms (GA). The method allows a search for regions of the chromosome that may contain genetic variations that jointly predispose individuals to a particular disease. The method uses correlation analysis, filtering theory and genetic algorithms (GA) to achieve this goal. Because current genome scans use from hundreds to hundreds of thousands of markers, two versions of the method have been implemented. The first is an exhaustive analysis version that can be used to visualize, explore, and analyze small genetic data sets for two-marker correlations; the second is a GA version, which uses a parallel implementation allowing searches of higher-order correlations in large data sets. Results on simulated data sets indicate that the method can be informative in the identification of major disease loci and gene-gene interactions in genome-wide linkage data and that further exploration of these techniques is justified. The results presented for both variants of the method show that it can help genetic epidemiologists to identify promising combinations of genetic factors that might predispose to complex disorders. In particular, the correlation analysis of IBD expression patterns might hint at possible gene-gene interactions and the filtering might be a fruitful approach to distinguish true correlation signals from noise. PMID:18547558
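
    The sketch below illustrates the general search idea, assuming simulated genotype data with one planted two-marker interaction: a small genetic algorithm evolves candidate marker sets scored by their correlation with affection status. It is a serial toy, not the paper's parallel, IBD-based implementation, and the data and parameters are invented.

        # Toy GA sketch on simulated genotypes; not the paper's parallel algorithm.
        import numpy as np

        rng = np.random.default_rng(1)
        N_IND, N_MARKERS, SET_SIZE, POP, GEN = 300, 100, 2, 40, 60

        genotypes = rng.integers(0, 3, size=(N_IND, N_MARKERS))   # 0/1/2 allele counts
        # Planted interaction: markers 10 and 55 jointly raise disease risk.
        risk = (genotypes[:, 10] > 0) & (genotypes[:, 55] > 0)
        status = (rng.random(N_IND) < np.where(risk, 0.7, 0.2)).astype(float)

        def fitness(marker_set):
            joint = genotypes[:, marker_set].sum(axis=1)           # crude joint score
            return abs(np.corrcoef(joint, status)[0, 1])

        population = [rng.choice(N_MARKERS, SET_SIZE, replace=False) for _ in range(POP)]
        for _ in range(GEN):
            scored = sorted(population, key=fitness, reverse=True)
            parents = scored[: POP // 2]
            children = []
            for p in parents:
                child = p.copy()
                child[rng.integers(SET_SIZE)] = rng.integers(N_MARKERS)  # point mutation
                children.append(child)
            population = parents + children

        # With these settings the GA usually, but not always, recovers markers 10 and 55.
        best = max(population, key=fitness)
        print("best marker set:", sorted(best.tolist()))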

  18. Optimal random Lévy-loop searching: New insights into the searching behaviours of central-place foragers

    NASA Astrophysics Data System (ADS)

    Reynolds, A. M.

    2008-04-01

    A random Lévy-looping model of searching is devised and optimal random Lévy-looping searching strategies are identified for the location of a single target whose position is uncertain. An inverse-square power law distribution of loop lengths is shown to be optimal when the distance between the centre of the search and the target is much shorter than the size of the longest possible loop in the searching pattern. Optimal random Lévy-looping searching patterns have recently been observed in the flight patterns of honeybees (Apis mellifera) when attempting to locate their hive and when searching after a known food source becomes depleted. It is suggested that the searching patterns of desert ants (Cataglyphis) are consistent with the adoption of an optimal Lévy-looping searching strategy.
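
    A minimal sketch of the strategy described above: loop lengths are drawn from an inverse-square power law truncated between a shortest and a longest possible loop (by inverse-transform sampling), and a crude simulation accumulates path length until a loop reaches the target. The detection rule and all parameter values are illustrative assumptions, not the paper's model.

        # Minimal sketch of truncated inverse-square loop-length sampling and a
        # crude central-place loop search; parameters are illustrative only.
        import numpy as np

        rng = np.random.default_rng(2)
        L_MIN, L_MAX = 1.0, 100.0   # shortest and longest possible loops
        TARGET_DIST = 5.0           # nest-to-target distance (much shorter than L_MAX)
        P_HIT = 0.15                # crude chance that a long-enough loop sweeps the target

        def loop_length():
            """Inverse-transform sample from p(l) ~ l**-2 on [L_MIN, L_MAX]."""
            u = rng.random()
            return 1.0 / (1.0 / L_MIN - u * (1.0 / L_MIN - 1.0 / L_MAX))

        def path_length_until_target_found():
            travelled = 0.0
            while True:
                length = loop_length()
                travelled += length
                reaches_target = length / 2.0 >= TARGET_DIST   # loop extends far enough
                if reaches_target and rng.random() < P_HIT:
                    return travelled

        costs = [path_length_until_target_found() for _ in range(2000)]
        print(f"median path length until the target is found: {np.median(costs):.1f}")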

  19. Eye tracking to evaluate evidence recognition in crime scene investigations.

    PubMed

    Watalingam, Renuka Devi; Richetelli, Nicole; Pelz, Jeff B; Speir, Jacqueline A

    2017-11-01

    Crime scene analysts are the core of criminal investigations; decisions made at the scene greatly affect the speed of analysis and the quality of conclusions, thereby directly impacting the successful resolution of a case. If an examiner fails to recognize the pertinence of an item on scene, the analyst's theory regarding the crime will be limited. Conversely, unselective evidence collection will most likely include irrelevant material, thus increasing a forensic laboratory's backlog and potentially sending the investigation into an unproductive and costly direction. Therefore, it is critical that analysts recognize and properly evaluate forensic evidence that can assess the relative support of differing hypotheses related to event reconstruction. With this in mind, the aim of this study was to determine if quantitative eye tracking data and qualitative reconstruction accuracy could be used to distinguish investigator expertise. In order to assess this, 32 participants were successfully recruited and categorized as experts or trained novices based on their practical experiences and educational backgrounds. Each volunteer then processed a mock crime scene while wearing a mobile eye tracker, wherein visual fixations, durations, search patterns, and reconstruction accuracy were evaluated. The eye tracking data (dwell time and task percentage on areas of interest or AOIs) were compared using Earth Mover's Distance (EMD) and the Needleman-Wunsch (N-W) algorithm, revealing significant group differences for both search duration (EMD), as well as search sequence (N-W). More specifically, experts exhibited greater dissimilarity in search duration, but greater similarity in search sequences than their novice counterparts. In addition to the quantitative visual assessment of examiner variability, each participant's reconstruction skill was assessed using a 22-point binary scoring system, in which significant group differences were detected as a function of total reconstruction accuracy. This result, coupled with the fact that the study failed to detect a significant difference between the groups when evaluating the total time needed to complete the investigation, indicates that experts are more efficient and effective. Finally, the results presented here provide a basis for continued research in the use of eye trackers to assess expertise in complex and distributed environments, including suggestions for future work, and cautions regarding the degree to which visual attention can infer cognitive understanding. Copyright © 2017 Elsevier B.V. All rights reserved.
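
    The two comparisons named above are standard algorithms; the sketch below shows a generic Needleman-Wunsch alignment of AOI visit sequences and an Earth Mover's Distance (via scipy's wasserstein_distance) between dwell-time profiles. The scoring scheme and example scanpaths are invented, not the study's data or parameterization.

        # Minimal sketch of N-W sequence alignment and EMD on dwell-time profiles.
        import numpy as np
        from scipy.stats import wasserstein_distance

        def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
            """Global alignment score between two AOI visit sequences."""
            n, m = len(a), len(b)
            score = np.zeros((n + 1, m + 1))
            score[:, 0] = gap * np.arange(n + 1)
            score[0, :] = gap * np.arange(m + 1)
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    diag = score[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    score[i, j] = max(diag, score[i - 1, j] + gap, score[i, j - 1] + gap)
            return score[n, m]

        # AOI visit sequences for two hypothetical analysts (letters name areas of interest).
        expert = "ABBCDDEA"
        novice = "ACCABDEE"
        print("sequence similarity:", needleman_wunsch(expert, novice))

        # Fraction of dwell time spent on each of five AOIs (made-up profiles).
        expert_dwell = [0.40, 0.25, 0.15, 0.15, 0.05]
        novice_dwell = [0.10, 0.30, 0.30, 0.20, 0.10]
        print("EMD between dwell-time profiles:",
              wasserstein_distance(range(5), range(5), expert_dwell, novice_dwell))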

  20. Visual Analytics for Heterogeneous Geoscience Data

    NASA Astrophysics Data System (ADS)

    Pan, Y.; Yu, L.; Zhu, F.; Rilee, M. L.; Kuo, K. S.; Jiang, H.; Yu, H.

    2017-12-01

    Geoscience data obtained from diverse sources have been routinely leveraged by scientists to study various phenomena. The principal data sources include observations and model simulation outputs. These data are characterized by spatiotemporal heterogeneity originating from different instrument design specifications and/or computational model requirements used in data generation processes. Such inherent heterogeneity poses several challenges in exploring and analyzing geoscience data. First, scientists often wish to identify features or patterns co-located among multiple data sources to derive and validate certain hypotheses. Heterogeneous data make it a tedious task to search such features in dissimilar datasets. Second, features of geoscience data are typically multivariate. It is challenging to tackle the high dimensionality of geoscience data and explore the relations among multiple variables in a scalable fashion. Third, there is a lack of transparency in traditional automated approaches, such as feature detection or clustering, in that scientists cannot intuitively interact with their analysis processes and interpret results. To address these issues, we present a new scalable approach that can assist scientists in analyzing voluminous and diverse geoscience data. We expose a high-level query interface that allows users to easily express their customized queries to search features of interest across multiple heterogeneous datasets. For identified features, we develop a visualization interface that enables interactive exploration and analytics in a linked-view manner. Specific visualization techniques, ranging from scatter plots to parallel coordinates, are employed in each view to allow users to explore various aspects of features. Different views are linked and refreshed according to user interactions in any individual view. In such a manner, a user can interactively and iteratively gain insight into the data through a variety of visual analytics operations. We demonstrate with use cases how scientists can combine the query and visualization interfaces to enable a customized workflow facilitating studies using heterogeneous geoscience datasets.
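
    As a rough sketch of the kind of co-location query such an interface might issue, the code below puts an observational field and a model field onto a common grid with nearest-neighbour replication and returns the cells where both user-specified conditions hold. The fields, thresholds, and regridding choice are hypothetical; the system's actual query engine and linked views are not shown.

        # Minimal sketch of a co-location query over two heterogeneous gridded fields;
        # all data and thresholds are invented stand-ins.
        import numpy as np

        rng = np.random.default_rng(3)
        obs_temp = rng.uniform(190, 260, size=(180, 360))     # 1-degree observation grid
        model_precip = rng.gamma(2.0, 2.0, size=(90, 180))    # 2-degree model grid

        # Nearest-neighbour "regridding": repeat each 2-degree model cell onto 1 degree.
        model_on_obs_grid = np.kron(model_precip, np.ones((2, 2)))

        # Query: cold cloud tops co-located with heavy model rain.
        mask = (obs_temp < 210) & (model_on_obs_grid > 8.0)
        lat_idx, lon_idx = np.nonzero(mask)
        print(f"{mask.sum()} co-located feature cells; first few:",
              list(zip(lat_idx[:3].tolist(), lon_idx[:3].tolist())))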
