Science.gov

Sample records for visual search patterns

  1. Emotional Devaluation of Distracting Patterns and Faces: A Consequence of Attentional Inhibition during Visual Search?

    ERIC Educational Resources Information Center

    Raymond, Jane E.; Fenske, Mark J.; Westoby, Nikki

    2005-01-01

    Visual search has been studied extensively, yet little is known about how its constituent processes affect subsequent emotional evaluation of searched-for and searched-through items. In 3 experiments, the authors asked observers to locate a colored pattern or tinted face in an array of other patterns or faces. Shortly thereafter, either the target…

  2. Priming cases disturb visual search patterns in screening mammography

    NASA Astrophysics Data System (ADS)

    Lewis, Sarah J.; Reed, Warren M.; Tan, Alvin N. K.; Brennan, Patrick C.; Lee, Warwick; Mello-Thoms, Claudia

    2015-03-01

    Rationale and Objectives: To investigate the effect of inserting obvious cancers into a screening set of mammograms on the visual search of radiologists. Previous research presents conflicting evidence as to the impact of priming in scenarios where prevalence is naturally low, such as in screening mammography. Materials and Methods: An observer performance and eye position analysis study was performed. Four expert breast radiologists were asked to interpret two sets of 40 screening mammograms. The Control Set contained 36 normal and 4 malignant cases (cases #9, 14, 25, and 37). The Primed Set contained 34 of the same normal cases and the same 4 malignant cases (in the same locations) plus 2 "primer" malignant cases replacing 2 normal cases (at positions #20 and 34). Primer cases were defined as lower-difficulty cases containing salient malignant features, inserted before cases of greater difficulty. Results: A Wilcoxon Signed Rank Test indicated no significant differences in sensitivity or specificity between the two sets (P > 0.05). The fixation count in the malignant cases (#25, 37) in the Primed Set after viewing the primer cases (#20, 34) decreased significantly (Z = -2.330, P = 0.020). False-negative errors were mostly due to sampling in the Primed Set (75%), in contrast to the Control Set (25%). Conclusion: The overall performance of radiologists is not affected by the inclusion of obvious cancer cases. However, changes in visual search behavior, as measured by eye-position recording, suggest visual disturbance caused by the inclusion of priming cases in screening mammography.

  3. Visual search patterns in semantic dementia show paradoxical facilitation of binding processes

    PubMed Central

    Viskontas, Indre V.; Boxer, Adam L.; Fesenko, John; Matlin, Alisa; Heuer, Hilary W.; Mirsky, Jacob; Miller, Bruce L.

    2011-01-01

    While patients with Alzheimer’s disease (AD) show deficits in attention, manifested by inefficient performance on visual search, new visual talents can emerge in patients with frontotemporal lobar degeneration (FTLD), suggesting that, at least in some of the patients, visual attention is spared, if not enhanced. To investigate the underlying mechanisms for visual talent in FTLD (behavioral variant FTD [bvFTD] and semantic dementia [SD]) patients, we measured performance on a visual search paradigm that includes both feature and conjunction search, while simultaneously monitoring saccadic eye movements. AD patients were impaired relative to healthy controls (NC) and FTLD patients on both feature and conjunction search. BvFTD patients showed less accurate performance only on the conjunction search task, but slower response times than NC on all three tasks. In contrast, SD patients were as accurate as controls and had faster response times when faced with the largest number of distracters in the conjunction search task. Measurement of saccades during visual search showed that AD patients explored more of the image, whereas SD patients explored less of the image before making a decision as to whether the target was present. Performance on the conjunction search task positively correlated with gray matter volume in the superior parietal lobe, precuneus, middle frontal gyrus and superior temporal gyrus. These data suggest that despite the presence of extensive temporal lobe degeneration, visual talent in SD may be facilitated by more efficient visual search under distracting conditions due to enhanced function in the dorsal frontoparietal attention network. PMID:21215762

  4. The visual search patterns and hazard responses of experienced and inexperienced motorcycle riders.

    PubMed

    Hosking, Simon G; Liu, Charles C; Bayly, Megan

    2010-01-01

    Hazard perception is a critical skill for road users. In this study, an open-loop motorcycle simulator was used to examine the effects of motorcycle riding and car driving experience on hazard perception and visual scanning patterns. Three groups of participants were tested: experienced motorcycle riders who were experienced drivers (EM-ED), inexperienced riders/experienced drivers (IM-ED), and inexperienced riders/inexperienced drivers (IM-ID). Participants were asked to search for hazards in simulated scenarios, and click a response button when a hazard was identified. The results revealed a significant monotonic decrease in hazard response times as experience increased from IM-ID to IM-ED to EM-ED. Compared to the IM-ID group, both the EM-ED and IM-ED groups exhibited more flexible visual scanning patterns that were sensitive to the presence of hazards. These results point to the potential benefit of training hazard perception and visual scanning in motorcycle riders, as has been successfully demonstrated in previous studies with car drivers. PMID:19887160

  5. Collaboration during visual search.

    PubMed

    Malcolmson, Kelly A; Reynolds, Michael G; Smilek, Daniel

    2007-08-01

    Two experiments examine how collaboration influences visual search performance. Working with a partner or on their own, participants reported whether a target was present or absent in briefly presented search displays. We compared the search performance of individuals working together (collaborative pairs) with the pooled responses of the individuals working alone (nominal pairs). Collaborative pairs were less likely than nominal pairs to correctly detect a target and they were less likely to make false alarms. Signal detection analyses revealed that collaborative pairs were more sensitive to the presence of the target and had a more conservative response bias than the nominal pairs. This pattern was observed even when the presence of another individual was matched across pairs. The results are discussed in the context of task-sharing, social loafing and current theories of visual search. PMID:17972737
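
The signal-detection quantities this abstract reports, sensitivity (d′) and response bias (c), follow directly from hit and false-alarm counts. A minimal sketch with invented trial counts (not the study's data) showing how a pair can make fewer detections yet be more sensitive, with a more conservative bias:

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and response bias (c) from raw trial
    counts, using a log-linear correction so that rates of exactly
    0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # positive = conservative
    return d_prime, criterion

# Hypothetical counts: the "collaborative" pair detects fewer targets but
# also makes far fewer false alarms than the "nominal" pair.
collab = dprime_criterion(hits=70, misses=30, false_alarms=5, correct_rejections=95)
nominal = dprime_criterion(hits=80, misses=20, false_alarms=25, correct_rejections=75)
```

With these numbers the collaborative pair comes out both more sensitive (larger d′) and more conservative (larger c), mirroring the pattern the abstract describes.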

  6. Reconsidering Visual Search

    PubMed Central

    2015-01-01

    The visual search paradigm has had an enormous impact in many fields. A theme running through this literature has been the distinction between preattentive and attentive processing, which I refer to as the two-stage assumption. Under this assumption, the slope of response time as a function of set size is used to determine whether attention is needed for a given task or not. Even though many findings question this two-stage assumption, it still has enormous influence, determining decisions on whether papers are published or research funded. The results described here show that the two-stage assumption leads to very different conclusions about the operation of attention for identical search tasks, based only on changes in response mode (presence/absence versus Go/No-go responses). Slopes are therefore an ambiguous measure of attentional involvement. Overall, the results suggest that the two-stage model cannot explain all findings on visual search, and they highlight that response time × set-size slopes should be used only with caution. PMID:27551357
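
The slope the two-stage assumption turns on is just an ordinary least-squares regression of mean RT on display set size. A minimal sketch with made-up data illustrating the conventional "efficient" versus "inefficient" labels (the cutoffs and values are illustrative, not from this paper):

```python
def search_slope(set_sizes, mean_rts):
    """Ordinary least-squares slope of mean response time (ms) against
    set size -- the measure used to label a search 'parallel'/efficient
    (near-flat slope) or 'serial'/inefficient (steep slope)."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    var = sum((x - mx) ** 2 for x in set_sizes)
    return cov / var

# Hypothetical data: ~3 ms/item would usually be called "efficient",
# ~35 ms/item "inefficient".
flat = search_slope([4, 8, 16, 32], [500, 512, 536, 584])
steep = search_slope([4, 8, 16, 32], [520, 660, 940, 1500])
```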

  7. Parallel Processing in Visual Search Asymmetry

    ERIC Educational Resources Information Center

    Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

    2004-01-01

    The difficulty of visual search may depend on assignment of the same visual elements as targets and distractors-search asymmetry. Easy C-in-O searches and difficult O-in-C searches are often associated with parallel and serial search, respectively. Here, the time course of visual search was measured for both tasks with speed-accuracy methods. The…

  8. Searching for inefficiency in visual search.

    PubMed

    Christie, Gregory J; Livingstone, Ashley C; McDonald, John J

    2015-01-01

    The time required to find an object of interest in the visual field often increases as a function of the number of items present. This increase or inefficiency was originally interpreted as evidence for the serial allocation of attention to potential target items, but controversy has ensued for decades. We investigated this issue by recording ERPs from humans searching for a target in displays containing several differently colored items. Search inefficiency was ascribed not to serial search but to the time required to selectively process the target once found. Additionally, less time was required for the target to "pop out" from the rest of the display when the color of the target repeated across trials. These findings indicate that task relevance can cause otherwise inconspicuous items to pop out and highlight the need for direct neurophysiological measures when investigating the causes of search inefficiency. PMID:25203277

  9. Visual Search and Reading.

    ERIC Educational Resources Information Center

    Calfee, Robert C.; Jameson, Penny

    The effect on reading speed of the number of target items being searched for and the number of target occurrences in the text was examined. The subjects, 24 college undergraduate volunteers, were presented with a list of target words, and then they read a passage for comprehension which contained occurrences of the target words (Experiment 1) or…

  10. Visual Search of Mooney Faces

    PubMed Central

    Goold, Jessica E.; Meng, Ming

    2016-01-01

    Faces spontaneously capture attention. However, which special attributes of a face underlie this effect is unclear. To address this question, we investigate how gist information, specific visual properties and differing amounts of experience with faces affect the time required to detect a face. Three visual search experiments were conducted investigating how rapidly human observers detect Mooney face images. Mooney images are two-toned, ambiguous images. They were used in order to have stimuli that maintain gist information but limit low-level image properties. Results from the experiments show: (1) Although upright Mooney faces were searched inefficiently, they were detected more rapidly than inverted Mooney face targets, demonstrating the important role of gist information in guiding attention toward a face. (2) Several specific Mooney face identities were searched efficiently while others were not, suggesting the involvement of specific visual properties in face detection. (3) By providing participants with unambiguous gray-scale versions of the Mooney face targets prior to the visual search task, the targets were detected significantly more efficiently, suggesting that prior experience with Mooney faces improves the ability to extract gist information for rapid face detection. However, a week of training with Mooney face categorization did not lead to even more efficient visual search of Mooney face targets. In summary, these results reveal that specific local image properties cannot account for how faces capture attention. On the other hand, gist information alone cannot account for how faces capture attention either. Prior experience facilitates the effect of gist on visual search of faces, making faces a special object category for guiding attention. PMID:26903941

  11. Visual similarity effects in categorical search.

    PubMed

    Alexander, Robert G; Zelinsky, Gregory J

    2011-01-01

    We asked how visual similarity relationships affect search guidance to categorically defined targets (no visual preview). Experiment 1 used a web-based task to collect visual similarity rankings between two target categories, teddy bears and butterflies, and random-category objects, from which we created search displays in Experiment 2 having either high-similarity distractors, low-similarity distractors, or "mixed" displays with high-, medium-, and low-similarity distractors. Analysis of target-absent trials revealed faster manual responses and fewer fixated distractors on low-similarity displays compared to high-similarity displays. On mixed displays, first fixations were more frequent on high-similarity distractors (bear = 49%; butterfly = 58%) than on low-similarity distractors (bear = 9%; butterfly = 12%). Experiment 3 used the same high/low/mixed conditions, but now these conditions were created using similarity estimates from a computer vision model that ranked objects in terms of color, texture, and shape similarity. The same patterns were found, suggesting that categorical search can indeed be guided by purely visual similarity. Experiment 4 compared cases where the model and human rankings differed and when they agreed. We found that similarity effects were best predicted by cases where the two sets of rankings agreed, suggesting that both human visual similarity rankings and the computer vision model captured features important for guiding search to categorical targets. PMID:21757505
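
A computer-vision similarity ranking of the kind used in Experiment 3 can be approximated with something as simple as color-histogram intersection. The sketch below uses hypothetical single-color "objects" (not the study's stimuli or its actual model) to show how such rankings separate high- from low-similarity distractors:

```python
def color_histogram(pixels, bins=4):
    """Coarse RGB histogram: each channel quantized into `bins` levels,
    giving a bins**3 vector normalized to sum to 1."""
    hist = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        idx = ((r * bins) // 256) * bins * bins \
            + ((g * bins) // 256) * bins \
            + ((b * bins) // 256)
        hist[idx] += 1
    return [h / len(pixels) for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Hypothetical "teddy bear" target (brown pixels) against a brown
# distractor (high similarity) and a blue one (low similarity).
target = color_histogram([(150, 100, 50)] * 10)
brown_distractor = color_histogram([(140, 110, 60)] * 10)
blue_distractor = color_histogram([(30, 60, 200)] * 10)
```

Ranking distractors by this score reproduces the high/low split the displays were built from; the study's model additionally used texture and shape features.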

  12. Characteristic sounds facilitate visual search.

    PubMed

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds. PMID:18567253

  13. Development of a Computerized Visual Search Test

    ERIC Educational Resources Information Center

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-01-01

    Visual attention and visual search are the features of visual perception, essential for attending and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information including the format of the test will be described. The test was designed…

  14. Statistical templates for visual search.

    PubMed

    Ackermann, John F; Landy, Michael S

    2014-01-01

    How do we find a target embedded in a scene? Within the framework of signal detection theory, this task is carried out by comparing each region of the scene with a "template," i.e., an internal representation of the search target. Here we ask what form this representation takes when the search target is a complex image with uncertain orientation. We examine three possible representations. The first is the matched filter. Such a representation cannot account for the ease with which humans can find a complex search target that is rotated relative to the template. A second representation attempts to deal with this by estimating the relative orientation of target and match and rotating the intensity-based template. No intensity-based template, however, can account for the ability to easily locate targets that are defined categorically and not in terms of a specific arrangement of pixels. Thus, we define a third template that represents the target in terms of image statistics rather than pixel intensities. Subjects performed a two-alternative, forced-choice search task in which they had to localize an image that matched a previously viewed target. Target images were texture patches. In one condition, match images were the same image as the target and distractors were a different image of the same textured material. In the second condition, the match image was of the same texture as the target (but different pixels) and the distractor was an image of a different texture. Match and distractor stimuli were randomly rotated relative to the target. We compared human performance to pixel-based, pixel-based with rotation, and statistic-based search models. The statistic-based search model was most successful at matching human performance. We conclude that humans use summary statistics to search for complex visual targets. PMID:24627458
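
The contrast between an intensity-based (matched-filter) template and a statistic-based one can be made concrete: pixelwise correlation collapses under rotation, while simple summary statistics are unchanged. A toy sketch with 2×2 "textures" (illustrative only, not the study's stimuli or model):

```python
def rotate90(img):
    """Rotate a 2-D list-of-lists image by 90 degrees."""
    return [list(row) for row in zip(*img[::-1])]

def normalized_correlation(a, b):
    """Pixelwise (matched-filter-style) correlation of two images."""
    fa = [p for row in a for p in row]
    fb = [p for row in b for p in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = sum((x - ma) ** 2 for x in fa) ** 0.5
    db = sum((y - mb) ** 2 for y in fb) ** 0.5
    return num / (da * db)

def summary_stats(img):
    """Rotation-invariant statistics: (mean, variance) of intensities."""
    f = [p for row in img for p in row]
    m = sum(f) / len(f)
    return (m, sum((x - m) ** 2 for x in f) / len(f))

stripes = [[1, 1], [0, 0]]   # horizontal stripes
rotated = rotate90(stripes)  # becomes vertical stripes
```

The pixel correlation between the pattern and its rotation drops to zero, while the summary statistics match exactly, which is the intuition behind the statistic-based template winning when match and distractor are randomly rotated.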

  15. Visualizing Dynamic Bitcoin Transaction Patterns

    PubMed Central

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J.

    2016-01-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network. PMID:27441715

  16. Visualizing Dynamic Bitcoin Transaction Patterns.

    PubMed

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J

    2016-06-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network. PMID:27441715
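
The force-directed principle behind such graph visualizations is simple: all node pairs repel, connected pairs attract, and positions relax iteratively. A minimal Fruchterman-Reingold-style sketch (a generic illustration of the technique, not the authors' production system):

```python
import math

def force_layout(n, edges, steps=500, k=1.0, lr=0.02):
    """Minimal force-directed layout: every node pair repels (k^2/d),
    adjacent pairs additionally attract (d^2/k), and positions take a
    small step each iteration. Deterministic start on a circle."""
    pos = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
           for i in range(n)]
    edge_set = {frozenset(e) for e in edges}
    for _ in range(steps):
        fx, fy = [0.0] * n, [0.0] * n
        for i in range(n):
            for j in range(i + 1, n):
                dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d                 # repulsion (all pairs)
                if frozenset((i, j)) in edge_set:
                    f -= d * d / k            # attraction (edges only)
                fx[i] += f * dx / d; fy[i] += f * dy / d
                fx[j] -= f * dx / d; fy[j] -= f * dy / d
        pos = [(pos[i][0] + lr * fx[i], pos[i][1] + lr * fy[i])
               for i in range(n)]
    return pos

# Two disconnected "transaction clusters" drift apart; linked nodes stay close.
pos = force_layout(4, [(0, 1), (2, 3)])

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])
```

This clustering behavior is what makes high-frequency patterns (e.g. a laundering chain's dense subgraph) visually pop out of the transaction graph.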

  17. Collinearity Impairs Local Element Visual Search

    ERIC Educational Resources Information Center

    Jingling, Li; Tseng, Chia-Huei

    2013-01-01

    In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…

  18. Visual Search for Faces with Emotional Expressions

    ERIC Educational Resources Information Center

    Frischen, Alexandra; Eastwood, John D.; Smilek, Daniel

    2008-01-01

    The goal of this review is to critically examine contradictory findings in the study of visual search for emotionally expressive faces. Several key issues are addressed: Can emotional faces be processed preattentively and guide attention? What properties of these faces influence search efficiency? Is search moderated by the emotional state of the…

  19. Cumulative Intertrial Inhibition in Repeated Visual Search

    ERIC Educational Resources Information Center

    Takeda, Yuji

    2007-01-01

    In the present study the author examined visual search when the items remain visible across trials but the location of the target varies. Reaction times for inefficient search cumulatively increased with increasing numbers of repeated search trials, suggesting that inhibition for distractors carried over successive trials. This intertrial…

  20. Searching social networks for subgraph patterns

    NASA Astrophysics Data System (ADS)

    Ogaard, Kirk; Kase, Sue; Roy, Heather; Nagi, Rakesh; Sambhoos, Kedar; Sudit, Moises

    2013-06-01

    Software tools for Social Network Analysis (SNA) are being developed which support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database, such social networks are amenable to analysis by SNA software. This data analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT), which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicated that GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.
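
The matching problem GMT solves can be stated compactly: find mappings from a small labeled pattern into a large labeled graph. The sketch below is a naive exhaustive backtracking matcher for illustration only; TruST itself is a heuristic, truncated search designed to avoid exactly this exponential enumeration, and the node labels here are invented:

```python
def find_matches(node_labels, edges, pattern_labels, pattern_edges, max_results=5):
    """Backtracking search for occurrences of a labeled subgraph pattern.
    node_labels / pattern_labels map node id -> label; edges is a set of
    frozenset pairs. Returns up to max_results mappings
    {pattern node -> graph node}."""
    order = list(pattern_labels)
    results = []

    def extend(mapping):
        if len(results) >= max_results:
            return
        if len(mapping) == len(order):
            results.append(dict(mapping))
            return
        p = order[len(mapping)]
        for g, label in node_labels.items():
            if label != pattern_labels[p] or g in mapping.values():
                continue
            ok = True
            # every pattern edge touching already-mapped nodes must exist
            for a, b in pattern_edges:
                other = b if a == p else a if b == p else None
                if other in mapping and frozenset((g, mapping[other])) not in edges:
                    ok = False
            if ok:
                mapping[p] = g
                extend(mapping)
                del mapping[p]

    extend({})
    return results

# Hypothetical communications graph: which "leader" talks to which "courier"?
node_labels = {1: "leader", 2: "courier", 3: "courier", 4: "leader"}
edges = {frozenset((1, 2)), frozenset((1, 3)), frozenset((3, 4))}
matches = find_matches(node_labels, edges,
                       {"X": "leader", "Y": "courier"}, [("X", "Y")])
```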

  1. Frontal–Occipital Connectivity During Visual Search

    PubMed Central

    Pantazatos, Spiro P.; Yanagihara, Ted K.; Zhang, Xian; Meitzler, Thomas

    2012-01-01

    Although expectation- and attention-related interactions between ventral and medial prefrontal cortex and stimulus category-selective visual regions have been identified during visual detection and discrimination, it is not known if similar neural mechanisms apply to other tasks such as visual search. The current work tested the hypothesis that high-level frontal regions, previously implicated in expectation and visual imagery of object categories, interact with visual regions associated with object recognition during visual search. Using functional magnetic resonance imaging, subjects searched for a specific object that varied in size and location within a complex natural scene. A model-free, spatial-independent component analysis isolated multiple task-related components, one of which included visual cortex, as well as a cluster within ventromedial prefrontal cortex (vmPFC), consistent with the engagement of both top-down and bottom-up processes. Analyses of psychophysiological interactions showed increased functional connectivity between vmPFC and object-sensitive lateral occipital cortex (LOC), and results from dynamic causal modeling and Bayesian Model Selection suggested bidirectional connections between vmPFC and LOC that were positively modulated by the task. Using image-guided diffusion-tensor imaging, functionally seeded, probabilistic white-matter tracts between vmPFC and LOC, which presumably underlie this effective interconnectivity, were also observed. These connectivity findings extend previous models of visual search processes to include specific frontal–occipital neuronal interactions during a natural and complex search task. PMID:22708993

  2. Words, shape, visual search and visual working memory in 3-year-old children

    PubMed Central

    Vales, Catarina; Smith, Linda B.

    2014-01-01

    Do words cue children’s visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. PMID:24720802

  3. Words, shape, visual search and visual working memory in 3-year-old children.

    PubMed

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. PMID:24720802

  4. Temporal Stability of Visual Search-Driven Biometrics

    SciTech Connect

    Yoon, Hong-Jun; Carmichael, Tandy; Tourassi, Georgia

    2015-01-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, fairly stable personalized fingerprint of perceptual organization.

  5. Temporal stability of visual search-driven biometrics

    NASA Astrophysics Data System (ADS)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2015-03-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, temporally stable personalized fingerprint of perceptual organization.
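
The identification step, scoring a gaze-derived symbol sequence against each subject's HMM and picking the best, can be sketched with the discrete-HMM forward algorithm. The two subjects, states, symbols, and probabilities below are invented for illustration; the study's HMMs were fit to real eye-tracking data:

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the scaled forward algorithm. start[i], trans[i][j], emit[i][o]
    are plain probabilities; states and symbols are integer-coded."""
    n = len(start)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    loglik = 0.0
    for o in obs[1:]:
        s = sum(alpha)               # rescale to avoid underflow
        loglik += math.log(s)
        alpha = [a / s for a in alpha]
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return loglik + math.log(sum(alpha))

def identify(obs, models):
    """Attribute a gaze-symbol sequence to the subject whose HMM scores
    it highest -- the decision rule behind an HMM-based biometric."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))

# Hypothetical two-subject demo: subject A mostly emits symbol 0
# ("short fixations"), subject B mostly symbol 1 ("long fixations").
models = {
    "A": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.9, 0.1], [0.8, 0.2]]),
    "B": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.1, 0.9], [0.2, 0.8]]),
}
```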

  6. Visual search engine for product images

    NASA Astrophysics Data System (ADS)

    Lin, Xiaofan; Gokturk, Burak; Sumengen, Baris; Vu, Diem

    2008-01-01

    Nowadays there are many product comparison web sites, but most of them only use text information. This paper introduces a novel visual search engine for product images, which provides a brand-new way of visually locating products through Content-based Image Retrieval (CBIR) technology. We discuss the unique technical challenges, solutions, and experimental results in the design and implementation of this system.

  7. Superior Visual Search in Adults with Autism

    ERIC Educational Resources Information Center

    O'Riordan, Michelle

    2004-01-01

    Recent studies have suggested that children with autism perform better than matched controls on visual search tasks and that this stems from a superior visual discrimination ability. This study assessed whether these findings generalize from children to adults with autism. Experiments 1 and 2 showed that, like children, adults with autism were…

  8. Perceptual Encoding Efficiency in Visual Search

    ERIC Educational Resources Information Center

    Rauschenberger, Robert; Yantis, Steven

    2006-01-01

    The authors present 10 experiments that challenge some central assumptions of the dominant theories of visual search. Their results reveal that the complexity (or redundancy) of nontarget items is a crucial but overlooked determinant of search efficiency. The authors offer a new theoretical outline that emphasizes the importance of nontarget…

  9. The Search for Optimal Visual Stimuli

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)

    1997-01-01

    In 1983, Watson, Barlow and Robson published a brief report in which they explored the relative visibility of targets that varied in size, shape, spatial frequency, speed, and duration (referred to subsequently here as WBR). A novel aspect of that paper was that visibility was quantified in terms of threshold contrast energy, rather than contrast. As they noted, this provides a more direct measure of the efficiency with which various patterns are detected, and may be more edifying as to the underlying detection machinery. For example, under certain simple assumptions, the waveform of the most efficiently detected signal is an estimate of the receptive field of the visual system's most efficient detector. Thus one goal of their experiment was to search for the stimulus that the 'eye sees best'. Parenthetically, the search for optimal stimuli may be seen as the most general and sophisticated variant of the traditional 'subthreshold summation' experiment, in which one measures the effect upon visibility of small probes combined with a base stimulus.

  10. Graphical Representations of Electronic Search Patterns.

    ERIC Educational Resources Information Center

    Lin, Xia; And Others

    1991-01-01

    Discussion of search behavior in electronic environments focuses on the development of GRIP (Graphic Representor of Interaction Patterns), a graphing tool based on HyperCard that produces graphic representations of search patterns. Search state spaces are explained, and forms of data available from electronic searches are described. (34…

  11. Pattern Search Algorithms for Bound Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1996-01-01

    We present a convergence theory for pattern search methods for solving bound constrained nonlinear programs. The analysis relies on the abstract structure of pattern search methods and an understanding of how the pattern interacts with the bound constraints. This analysis makes it possible to develop pattern search methods for bound constrained problems while only slightly restricting the flexibility present in pattern search methods for unconstrained problems. We prove global convergence despite the fact that pattern search methods do not have explicit information concerning the gradient and its projection onto the feasible region and consequently are unable to enforce explicitly a notion of sufficient feasible decrease.
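For readers unfamiliar with this method family, a minimal compass-search variant for bound constraints can be sketched in a few lines. This is a toy illustration of the general idea, not the authors' algorithm: poll along coordinate directions, clip trial points to the bounds, and shrink the step when no poll improves.

```python
def pattern_search(f, x0, lower, upper, step=0.5, tol=1e-6):
    """Coordinate (compass) pattern search for bound-constrained
    minimization. No gradients are used: trial points one step away
    along each coordinate are clipped to the feasible box, and the
    step length is halved whenever no trial improves. The step length
    also serves as the stopping test."""
    x = [min(max(v, lo), hi) for v, lo, hi in zip(x0, lower, upper)]
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] = min(max(x[i] + d, lower[i]), upper[i])
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step /= 2.0
    return x, fx
```

On a quadratic whose unconstrained minimizer lies outside the box, the iterates settle on the boundary, which is exactly the situation the convergence analysis above must handle.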

  12. Features in visual search combine linearly

    PubMed Central

    Pramod, R. T.; Arun, S. P.

    2014-01-01

    Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocal of reaction times) for their ability to predict multiple feature searches. Multiple feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable, i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features—in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328
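The linear co-activation rule can be stated compactly: treating the reciprocal of reaction time as a search rate, the rates for individual feature differences sum. The sketch below is a simplified reading of that model, and the `baseline` intercept is a hypothetical parameter, not from the paper.

```python
def predicted_multifeature_rt(single_rts, baseline=0.0):
    """Co-activation prediction for a multiple-feature search:
    reciprocal RTs (a proxy for feature salience signals) sum
    linearly, and the predicted RT is the reciprocal of the total.
    `baseline` is a hypothetical intercept for illustration."""
    rate = baseline + sum(1.0 / rt for rt in single_rts)
    return 1.0 / rate
```

One immediate consequence, consistent with intuition, is that the predicted multi-feature search is always faster than the fastest single-feature search.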

  13. Unsupervised Learning for Visual Pattern Analysis

    NASA Astrophysics Data System (ADS)

    Zheng, Nanning; Xue, Jianru

    This chapter presents an overview of topics and major concepts in unsupervised learning for visual pattern analysis. Cluster analysis and dimensionality reduction are two important topics in unsupervised learning. Clustering relates to the grouping of similar objects in visual perception, while dimensionality reduction is essential for the compact representation of visual patterns. In this chapter, we focus on clustering techniques, offering first a theoretical basis, then a look at some applications in visual pattern analysis. With respect to the former, we introduce both concepts and algorithms. With respect to the latter, we discuss visual perceptual grouping. In particular, the problem of image segmentation is discussed in terms of contour and region grouping. Finally, we present a brief introduction to learning visual pattern representations, which serves as a prelude to the following chapters.
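As a concrete reminder of what the clustering half of such a chapter covers, here is plain Lloyd's k-means in a few lines. This is a generic sketch of the standard algorithm, not code from the chapter.

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's k-means: assign each point to the nearest
    centroid, then move each centroid to the mean of its assigned
    points, repeating until the centroids stop moving."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[j].append(p)
        new = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids, clusters
```

In the perceptual-grouping setting the "points" would be feature vectors extracted from image regions rather than raw coordinates.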

  14. Visual reinforcement shapes eye movements in visual search.

    PubMed

    Paeye, Céline; Schütz, Alexander C; Gegenfurtner, Karl R

    2016-08-01

    We use eye movements to gain information about our visual environment; this information can indirectly be used to affect the environment. Whereas eye movements are affected by explicit rewards such as points or money, it is not clear whether the information gained by finding a hidden target has a similar reward value. Here we tested whether finding a visual target can reinforce eye movements in visual search performed in a noise background, which conforms to natural scene statistics and contains a large number of possible target locations. First we tested whether presenting the target more often in one specific quadrant would modify eye movement search behavior. Surprisingly, participants did not learn to search for the target more often in high probability areas. Presumably, participants could not learn the reward structure of the environment. In two subsequent experiments we used a gaze-contingent display to gain full control over the reinforcement schedule. The target was presented more often after saccades into a specific quadrant or a specific direction. The proportions of saccades meeting the reinforcement criteria increased considerably, and participants matched their search behavior to the relative reinforcement rates of targets. Reinforcement learning seems to serve as the mechanism to optimize search behavior with respect to the statistics of the task. PMID:27559719

  15. Visual search under scotopic lighting conditions.

    PubMed

    Paulun, Vivian C; Schütz, Alexander C; Michel, Melchi M; Geisler, Wilson S; Gegenfurtner, Karl R

    2015-08-01

    When we search for visual targets in a cluttered background we systematically move our eyes around to bring different regions of the scene into foveal view. We explored how visual search behavior changes when the fovea is not functional, as is the case in scotopic vision. Scotopic contrast sensitivity is significantly lower overall, with a functional scotoma in the fovea. We found that in scotopic search, for a medium- and a low-spatial-frequency target, individuals made longer lasting fixations that were not broadly distributed across the entire search display but tended to peak in the upper center, especially for the medium-frequency target. The distributions of fixation locations are qualitatively similar to those of an ideal searcher that has human scotopic detectability across the visual field, and interestingly, these predicted distributions are different from those predicted by an ideal searcher with human photopic detectability. We conclude that although there are some qualitative differences between human and ideal search behavior, humans make principled adjustments in their search behavior as ambient light level decreases. PMID:25988753

  16. Online Search Patterns: NLM CATLINE Database.

    ERIC Educational Resources Information Center

    Tolle, John E.; Hah, Sehchang

    1985-01-01

    Presents analysis of online search patterns within user searching sessions of National Library of Medicine ELHILL system and examines user search patterns on the CATLINE database. Data previously analyzed on MEDLINE database for same period is used to compare the performance parameters of different databases within the same information system.…

  17. Visual Templates in Pattern Generalization Activity

    ERIC Educational Resources Information Center

    Rivera, F. D.

    2010-01-01

    In this research article, I present evidence of the existence of visual templates in pattern generalization activity. Such templates initially emerged from a 3-week design-driven classroom teaching experiment on pattern generalization involving linear figural patterns and were assessed for existence in a clinical interview that was conducted four…

  18. Dynamic Prototypicality Effects in Visual Search

    ERIC Educational Resources Information Center

    Kayaert, Greet; Op de Beeck, Hans P.; Wagemans, Johan

    2011-01-01

    In recent studies, researchers have discovered a larger neural activation for stimuli that are more extreme exemplars of their stimulus class, compared with stimuli that are more prototypical. This has been shown for faces as well as for familiar and novel shape classes. We used a visual search task to look for a behavioral correlate of these…

  19. Homo economicus in visual search.

    PubMed

    Navalpakkam, Vidhya; Koch, Christof; Perona, Pietro

    2009-01-01

    How do reward outcomes affect early visual performance? Previous studies found a suboptimal influence, but they ignored the non-linearity in how subjects perceived the reward outcomes. In contrast, we find that when the non-linearity is accounted for, humans behave optimally and maximize expected reward. Our subjects were asked to detect the presence of a familiar target object in a cluttered scene. They were rewarded according to their performance. We systematically varied the target frequency and the reward/penalty policy for detecting/missing the targets. We find that 1) decreasing the target frequency will decrease the detection rates, in accordance with the literature. 2) Contrary to previous studies, increasing the target detection rewards will compensate for target rarity and restore detection performance. 3) A quantitative model based on reward maximization accurately predicts human detection behavior in all target frequency and reward conditions; thus, reward schemes can be designed to obtain desired detection rates for rare targets. 4) Subjects quickly learn the optimal decision strategy; we propose a neurally plausible model that exhibits the same properties. Potential applications include designing reward schemes to improve detection of life-critical, rare targets (e.g., cancers in medical images). PMID:19271901

  20. Selective scanpath repetition during memory-guided visual search

    PubMed Central

    Wynn, Jordana S.; Bone, Michael B.; Dragan, Michelle C.; Hoffman, Kari L.; Buchsbaum, Bradley R.; Ryan, Jennifer D.

    2016-01-01

    Visual search efficiency improves with repetition of a search display, yet the mechanisms behind these processing gains remain unclear. According to Scanpath Theory, memory retrieval is mediated by repetition of the pattern of eye movements or “scanpath” elicited during stimulus encoding. Using this framework, we tested the prediction that scanpath recapitulation reflects relational memory guidance during repeated search events. Younger and older subjects were instructed to find changing targets within flickering naturalistic scenes. Search efficiency (search time, number of fixations, fixation duration) and scanpath similarity (repetition) were compared across age groups for novel (V1) and repeated (V2) search events. Younger adults outperformed older adults on all efficiency measures at both V1 and V2, while the search time benefit for repeated viewing (V1–V2) did not differ by age. Fixation-binned scanpath similarity analyses revealed repetition of initial and final (but not middle) V1 fixations at V2, with older adults repeating more initial V1 fixations than young adults. In young adults only, early scanpath similarity correlated negatively with search time at test, indicating increased efficiency, whereas the similarity of V2 fixations to middle V1 fixations predicted poor search performance. We conclude that scanpath compression mediates increased search efficiency by selectively recapitulating encoding fixations that provide goal-relevant input. Extending Scanpath Theory, results suggest that scanpath repetition varies as a function of time and memory integrity. PMID:27570471
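A crude way to quantify scanpath repetition is to pair fixations by rank order and normalize their mean distance. This is a toy stand-in for the fixation-binned similarity analysis described above, not the authors' measure; the screen-diagonal normalization is an assumption for illustration.

```python
import math

def scanpath_similarity(path_a, path_b, screen_diag):
    """Rank-order scanpath similarity: pair the i-th fixation of one
    path with the i-th fixation of the other, average the Euclidean
    distances, and map to a 0-1 score by normalizing with the screen
    diagonal (1.0 means identical paths)."""
    n = min(len(path_a), len(path_b))
    if n == 0:
        return 0.0
    mean_d = sum(math.dist(a, b) for a, b in zip(path_a[:n], path_b[:n])) / n
    return max(0.0, 1.0 - mean_d / screen_diag)
```

Binning fixations into early, middle, and late segments before comparison, as in the study, would let the same idea localize where in the scanpath the repetition occurs.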

  1. Do Multielement Visual Tracking and Visual Search Draw Continuously on the Same Visual Attention Resources?

    ERIC Educational Resources Information Center

    Alvarez, George A.; Horowitz, Todd S.; Arsenio, Helga C.; DiMase, Jennifer S.; Wolfe, Jeremy M.

    2005-01-01

    Multielement visual tracking and visual search are 2 tasks that are held to require visual-spatial attention. The authors used the attentional operating characteristic (AOC) method to determine whether both tasks draw continuously on the same attentional resource (i.e., whether the 2 tasks are mutually exclusive). The authors found that observers…

  2. Pattern Search Methods for Linearly Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We extend pattern search methods to linearly constrained minimization. We develop a general class of feasible point pattern search algorithms and prove global convergence to a Karush-Kuhn-Tucker point. As in the case of unconstrained minimization, pattern search methods for linearly constrained problems accomplish this without explicit recourse to the gradient or the directional derivative. Key to the analysis of the algorithms is the way in which the local search patterns conform to the geometry of the boundary of the feasible region.

  3. On the Local Convergence of Pattern Search

    NASA Technical Reports Server (NTRS)

    Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.

  4. Coarse guidance by numerosity in visual search.

    PubMed

    Reijnen, Ester; Wolfe, Jeremy M; Krummenacher, Joseph

    2013-01-01

    In five experiments, we examined whether the number of items can guide visual focal attention. Observers searched for the target area with the largest (or smallest) number of dots (squares in Experiment 4 and "checkerboards" in Experiment 5) among distractor areas with a smaller (or larger) number of dots. Results of Experiments 1 and 2 show that search efficiency is determined by target to distractor dot ratios. In searches where target items contained more dots than did distractor items, ratios over 1.5:1 yielded efficient search. Searches for targets where target items contained fewer dots than distractor items were harder. Here, ratios needed to be lower than 1:2 to yield efficient search. When the areas of the dots and of the squares containing them were fixed, as they were in Experiments 1 and 2, dot density and total dot area increased as dot number increased. Experiment 3 removed the density and area cues by allowing dot size and total dot area to vary. This produced a marked decline in search performance. Efficient search now required ratios of above 3:1 or below 1:3. By using more realistic and isoluminant stimuli, Experiments 4 and 5 show that guidance by numerosity is fragile. As is found with other features that guide focal attention (e.g., color, orientation, size), the numerosity differences that are able to guide attention by bottom-up signals are much coarser than the differences that can be detected in attended stimuli. PMID:23070885

  5. Investigation of Neural Strategies of Visual Search

    NASA Technical Reports Server (NTRS)

    Krauzlis, Richard J.

    2003-01-01

    The goal of this project was to measure how neurons in the superior colliculus (SC) change their activity during a visual search task. Specifically, we proposed to measure how the activity of these neurons was altered by the discriminability of visual targets and to test how these changes might predict the changes in the subjects performance. The primary rationale for this study was that understanding how the information encoded by these neurons constrains overall search performance would foster the development of better models of human performance. Work performed during the period supported by this grant has achieved these aims. First, we have recorded from neurons in the superior colliculus (SC) during a visual search task in which the difficulty of the task and the performance of the subject were systematically varied. The results from these single-neuron physiology experiments show that prior to eye movement onset, the difference in activity across the ensemble of neurons reaches a fixed threshold value, reflecting the operation of a winner-take-all mechanism. Second, we have developed a model of eye movement decisions based on the principle of winner-take-all. The model incorporates the idea that the overt saccade choice reflects only one of the multiple saccades prepared during visual discrimination, consistent with our physiological data. The value of the model is that, unlike previous models, it is able to account for both the latency and the percent correct of saccade choices.
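The winner-take-all idea can be sketched as a race of accumulators, where the first unit to reach threshold determines both the choice and the latency. This is an illustrative toy, not the authors' model (which thresholds the difference in activity across the neural ensemble); the drift, noise, and threshold parameters are assumptions.

```python
import random

def race_to_threshold(drifts, threshold=1.0, noise=0.1, dt=0.01, seed=0):
    """Winner-take-all race: independent accumulators integrate noisy
    evidence each time step; the first to reach the threshold fixes
    the choice (winner's index) and the latency (steps taken)."""
    rng = random.Random(seed)
    acc = [0.0] * len(drifts)
    t = 0
    while True:
        t += 1
        for i, d in enumerate(drifts):
            acc[i] += d * dt + rng.gauss(0.0, noise) * dt ** 0.5
        m = max(acc)
        if m >= threshold:
            return acc.index(m), t
```

Because a single mechanism produces both outputs, such models can jointly account for saccade latency and percent correct, which is the property highlighted in the record above.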

  6. Persistence in eye movement during visual search

    NASA Astrophysics Data System (ADS)

    Amor, Tatiana A.; Reis, Saulo D. S.; Campos, Daniel; Herrmann, Hans J.; Andrade, José S.

    2016-02-01

    Like any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search.
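MF-DFA itself is involved, but its monofractal core, ordinary detrended fluctuation analysis, fits in a short function: integrate the series, remove a linear trend per window, and read the Hurst exponent off a log-log slope. This is a generic sketch of the method, not the study's code.

```python
import math
import random

def dfa_hurst(series, min_win=4, max_win=None):
    """Detrended fluctuation analysis (DFA-1). Builds the integrated
    profile of the mean-subtracted series, computes the RMS residual
    around a per-window linear fit for dyadic window sizes, and
    returns the slope of log F(n) vs log n (the Hurst estimate)."""
    n = len(series)
    mean = sum(series) / n
    profile, s = [], 0.0
    for v in series:
        s += v - mean
        profile.append(s)
    max_win = max_win or n // 4
    xs, ys = [], []
    win = min_win
    while win <= max_win:
        f2, count = 0.0, 0
        for start in range(0, n - win + 1, win):
            seg = profile[start:start + win]
            t = list(range(win))
            tm, sm = (win - 1) / 2.0, sum(seg) / win
            cov = sum((ti - tm) * (si - sm) for ti, si in zip(t, seg))
            var = sum((ti - tm) ** 2 for ti in t)
            b = cov / var          # slope of the local linear trend
            a = sm - b * tm        # intercept of the local trend
            f2 += sum((si - (a + b * ti)) ** 2 for ti, si in zip(t, seg))
            count += win
        xs.append(math.log(win))
        ys.append(math.log(math.sqrt(f2 / count)))
        win *= 2
    xm, ym = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
            / sum((x - xm) ** 2 for x in xs))
```

Uncorrelated noise yields an exponent near 0.5, while persistent (positively correlated) series yield larger values, which is the signature reported for the fixational time series.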

  7. Persistence in eye movement during visual search

    PubMed Central

    Amor, Tatiana A.; Reis, Saulo D. S.; Campos, Daniel; Herrmann, Hans J.; Andrade, José S.

    2016-01-01

    Like any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search. PMID:26864680

  8. Persistence in eye movement during visual search.

    PubMed

    Amor, Tatiana A; Reis, Saulo D S; Campos, Daniel; Herrmann, Hans J; Andrade, José S

    2016-01-01

    Like any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. In this way, the movement of the eyes certainly represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we disclose that the probability distributions of these measures show a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search. PMID:26864680

  9. Similarity relations in visual search predict rapid visual categorization

    PubMed Central

    Mohan, Krithika; Arun, S. P.

    2012-01-01

    How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947

  10. Parallel Mechanisms for Visual Search in Zebrafish

    PubMed Central

    Proulx, Michael J.; Parker, Matthew O.; Tahir, Yasser; Brennan, Caroline H.

    2014-01-01

    Parallel visual search mechanisms have been reported previously only in mammals and birds, and not animals lacking an expanded telencephalon such as bees. Here we report the first evidence for parallel visual search in fish using a choice task where the fish had to find a target amongst an increasing number of distractors. Following two-choice discrimination training, zebrafish were presented with the original stimulus within an increasing array of distractor stimuli. We found that zebrafish exhibit no significant change in accuracy and approach latency as the number of distractors increased, providing evidence of parallel processing. This evidence challenges theories of vertebrate neural architecture and the importance of an expanded telencephalon for the evolution of executive function. PMID:25353168

  11. Cardiac and Respiratory Responses During Visual Search in Nonretarded Children and Retarded Adolescents

    ERIC Educational Resources Information Center

    Porges, Stephen W.; Humphrey, Mary M.

    1977-01-01

    The relationship between physiological response patterns and mental competence was investigated by evaluating heart rate and respiratory responses during a sustained visual-search task in 29 nonretarded grade school children and 16 retarded adolescents. (Author)

  12. Configural learning in contextual cuing of visual search.

    PubMed

    Beesley, Tom; Vadillo, Miguel A; Pearson, Daniel; Shanks, David R

    2016-08-01

    Two experiments were conducted to explore the role of configural representations in contextual cuing of visual search. Repeating patterns of distractors (contexts) were trained incidentally as predictive of the target location. Training participants with repeating contexts of consistent configurations led to stronger contextual cuing than when participants were trained with contexts of inconsistent configurations. Computational simulations with an elemental associative learning model of contextual cuing demonstrated that purely elemental representations could not account for the results. However, a configural model of associative learning was able to simulate the ordinal pattern of data. (PsycINFO Database Record) PMID:26913779
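To see why purely elemental representations fall short, consider a minimal Rescorla-Wagner-style learner. This is an illustrative sketch, not the authors' simulations: each context element carries its own weight, so the model cannot drive the prediction for a compound toward zero while keeping its individual elements predictive.

```python
def rescorla_wagner(trials, n_cues, lr=0.1, epochs=50):
    """Elemental associative learning: the prediction for a trial is
    the sum of the weights of the cues present, and each present
    cue's weight is nudged by the shared prediction error. There is
    no separate weight for a *configuration* of cues."""
    w = [0.0] * n_cues
    for _ in range(epochs):
        for cues, outcome in trials:
            pred = sum(w[i] for i in cues)
            err = outcome - pred
            for i in cues:
                w[i] += lr * err
    return w
```

On a configural problem (each element alone predicts the target, the compound does not), the weights settle at a compromise and the compound prediction stays well above zero; a configural model adds a unit for the compound itself and solves this easily.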

  13. Guided Text Search Using Adaptive Visual Analytics

    SciTech Connect

    Steed, Chad A; Symons, Christopher T; Senter, James K; DeNap, Frank A

    2012-10-01

    This research demonstrates the promise of augmenting interactive visualizations with semi-supervised machine learning techniques to improve the discovery of significant associations and insights in the search and analysis of textual information. More specifically, we have developed a system called Gryffin that hosts a unique collection of techniques that facilitate individualized investigative search pertaining to an ever-changing set of analytical questions over an indexed collection of open-source documents related to critical national infrastructure. The Gryffin client hosts dynamic displays of the search results via focus+context record listings, temporal timelines, term-frequency views, and multiple coordinate views. Furthermore, as the analyst interacts with the display, the interactions are recorded and used to label the search records. These labeled records are then used to drive semi-supervised machine learning algorithms that re-rank the unlabeled search records such that potentially relevant records are moved to the top of the record listing. Gryffin is described in the context of the daily tasks encountered at the US Department of Homeland Security's Fusion Center, with whom we are collaborating in its development. The resulting system is capable of addressing the analysts' information overload that can be directly attributed to the deluge of information that must be addressed in the search and investigative analysis of textual information.
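The interaction-driven re-ranking loop can be caricatured in a few lines. This is a toy bag-of-words stand-in, not Gryffin's semi-supervised algorithm: records the analyst has interacted with act as positive labels, and unlabeled records are re-ordered by word overlap with them.

```python
from collections import Counter

def rerank(records, liked_ids):
    """Re-rank unlabeled search records by overlap with the records
    an analyst has interacted with. `records` maps record id -> text;
    `liked_ids` is the set of interaction-labeled record ids."""
    liked_words = Counter()
    for rid in liked_ids:
        liked_words.update(records[rid].lower().split())
    def score(item):
        rid, text = item
        return sum(liked_words[w] for w in set(text.lower().split()))
    unlabeled = [(rid, t) for rid, t in records.items() if rid not in liked_ids]
    return [rid for rid, _ in sorted(unlabeled, key=score, reverse=True)]
```

A semi-supervised learner would generalize beyond shared vocabulary, but the workflow is the same: interactions produce labels, and labels continuously re-order what the analyst sees next.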

  14. A Visual Search Tool for Early Elementary Science Students.

    ERIC Educational Resources Information Center

    Revelle, Glenda; Druin, Allison; Platner, Michele; Bederson, Ben; Hourcade, Juan Pablo; Sherman, Lisa

    2002-01-01

    Reports on the development of a visual search interface called "SearchKids" to support children ages 5-10 years in their efforts to find animals in a hierarchical information structure. Investigates whether children can construct search queries to conduct complex searches if sufficiently supported both visually and conceptually. (Contains 27…

  15. Race Guides Attention in Visual Search.

    PubMed

    Otten, Marte

    2016-01-01

    It is known that faces are rapidly and even unconsciously categorized into social groups (black vs. white, male vs. female). Here, I test whether preferences for specific social groups guide attention, using a visual search paradigm. In Experiment 1 participants searched displays of neutral faces for an angry or frightened target face. Black target faces were detected more efficiently than white targets, indicating that black faces attracted more attention. Experiment 2 showed that attention differences between black and white faces were correlated with individual differences in automatic race preference. In Experiment 3, using happy target faces, the attentional preference for black over white faces was eliminated. Taken together, these results suggest that automatic preferences for social groups guide attention to individuals from negatively valenced groups, when people are searching for a negative emotion such as anger or fear. PMID:26900957

  16. Race Guides Attention in Visual Search

    PubMed Central

    Otten, Marte

    2016-01-01

    It is known that faces are rapidly and even unconsciously categorized into social groups (black vs. white, male vs. female). Here, I test whether preferences for specific social groups guide attention, using a visual search paradigm. In Experiment 1 participants searched displays of neutral faces for an angry or frightened target face. Black target faces were detected more efficiently than white targets, indicating that black faces attracted more attention. Experiment 2 showed that attention differences between black and white faces were correlated with individual differences in automatic race preference. In Experiment 3, using happy target faces, the attentional preference for black over white faces was eliminated. Taken together, these results suggest that automatic preferences for social groups guide attention to individuals from negatively valenced groups, when people are searching for a negative emotion such as anger or fear. PMID:26900957

  17. An active visual search interface for Medline.

    PubMed

    Xuan, Weijian; Dai, Manhong; Mirel, Barbara; Wilson, Justin; Athey, Brian; Watson, Stanley J; Meng, Fan

    2007-01-01

    Searching the Medline database is almost a daily necessity for many biomedical researchers. However, available Medline search solutions are mainly designed for the quick retrieval of a small set of most relevant documents. Because of this search model, they are not suitable for the large-scale exploration of literature and the underlying biomedical conceptual relationships, which are common tasks in the age of high throughput experimental data analysis and cross-discipline research. We try to develop a new Medline exploration approach by incorporating interactive visualization together with powerful grouping, summary, sorting and active external content retrieval functions. Our solution, PubViz, is based on the FLEX platform designed for interactive web applications and its prototype is publicly available at: http://brainarray.mbni.med.umich.edu/Brainarray/DataMining/PubViz. PMID:17951838

  18. Adding a visualization feature to web search engines: it's time.

    PubMed

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology. PMID:19004680

  19. LoyalTracker: Visualizing Loyalty Dynamics in Search Engines.

    PubMed

    Shi, Conglei; Wu, Yingcai; Liu, Shixia; Zhou, Hong; Qu, Huamin

    2014-12-01

    The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, this also poses a great challenge to analyze the behavior and glean insights into the complex, large data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from the vast amount of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the dynamics of user loyalty of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and interviews with domain experts are conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines. PMID:26356887

  20. Fractal analysis of radiologists' visual scanning pattern in screening mammography

    NASA Astrophysics Data System (ADS)

    Alamudun, Folami T.; Yoon, Hong-Jun; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia

    2015-03-01

    Several researchers have investigated radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of the radiologists' visual scanning pattern when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied to the composite 4-view scanpaths. For each case, the complexity of each radiologist's scanpath was measured using fractal dimension estimated with the box counting method. The association between the fractal dimension of the radiologists' visual scanpath, case pathology, case density, and radiologist experience was evaluated using fixed effects ANOVA. ANOVA showed that the complexity of the radiologists' visual search pattern in screening mammography is dependent on case specific attributes (breast parenchyma density and case pathology) as well as on reader attributes, namely experience level. Visual scanning patterns are significantly different for benign and malignant cases than for normal cases. There is also substantial inter-observer variability which cannot be explained only by experience level.
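The box-counting estimate named in the abstract can be illustrated generically: cover the scanpath with grids of progressively smaller boxes and take the slope of log(occupied boxes) against log(boxes per axis). The sketch below assumes gaze points normalized to the unit square and is not the authors' code:

```python
from math import log

def box_count_dimension(points, scales=(2, 4, 8, 16, 32)):
    """Slope of log(occupied boxes) versus log(boxes per axis)."""
    xs, ys = [], []
    for n in scales:
        # Which of the n*n grid cells contain at least one gaze point?
        occupied = {(int(px * n), int(py * n)) for px, py in points}
        xs.append(log(n))
        ys.append(log(len(occupied)))
    # Least-squares slope of the log-log relationship.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Sanity check: a straight-line "scanpath" should come out near dimension 1.
line = [(i / 1000, i / 1000) for i in range(1000)]
print(round(box_count_dimension(line), 2))
```

A space-filling, meandering scanpath would yield a dimension closer to 2, which is what makes the measure a useful complexity summary.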

  1. Visual search performance by paranoid and chronic undifferentiated schizophrenics.

    PubMed

    Portnoff, L A; Yesavage, J A; Acker, M B

    1981-10-01

    Disturbances in attention are among the most frequent cognitive abnormalities in schizophrenia. Recent research has suggested that some schizophrenics have difficulty with visual tracking, which is suggestive of attentional deficits. To investigate differential visual-search performance by schizophrenics, 15 chronic undifferentiated and 15 paranoid schizophrenics were compared with 15 normals on two tests measuring visual search in a systematic and an unsystematic stimulus mode. Chronic schizophrenics showed difficulty with both kinds of visual-search tasks. In contrast, paranoids had only a deficit in the systematic visual-search task. Their ability for visual search in an unsystematized stimulus array was equivalent to that of normals. Although replication and cross-validation are needed to confirm these findings, it appears that the two tests of visual search may provide a useful ancillary method for differential diagnosis between these two types of schizophrenia. PMID:7312527

  2. Transition between different search patterns in human online search behavior

    NASA Astrophysics Data System (ADS)

    Wang, Xiangwen; Pleimling, Michel

    2015-03-01

    We investigate the human online search behavior by analyzing data sets from different search engines. Based on the comparison of the results from several click-through data sets collected in different years, we observe a transition of the search pattern from a Lévy-flight-like behavior to a Brownian-motion-type behavior as the search engine algorithms improve. This result is consistent with findings in animal foraging processes. A more detailed analysis shows that the human search patterns are more complex than simple Lévy flights or Brownian motions. Notable differences between the behaviors of different individuals can be observed in many quantities. This work is in part supported by the US National Science Foundation through Grant DMR-1205309.
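A common way to separate the two regimes named above is to fit a power-law exponent mu to the observed step-length distribution: Lévy-flight-like motion has heavy-tailed steps with 1 < mu <= 3, whereas Brownian-type motion has thin-tailed steps. A minimal maximum-likelihood sketch on synthetic data (illustrative only, not the authors' analysis):

```python
from math import log
import random

def powerlaw_mu(steps, xmin=1.0):
    """MLE exponent for P(l) ~ l^-mu over steps >= xmin."""
    tail = [s for s in steps if s >= xmin]
    return 1.0 + len(tail) / sum(log(s / xmin) for s in tail)

random.seed(0)
# Synthetic Lévy-like steps with true mu = 2, via inverse-transform sampling.
mu_true = 2.0
levy = [(1 - random.random()) ** (-1.0 / (mu_true - 1)) for _ in range(20000)]
print(round(powerlaw_mu(levy), 1))  # close to 2.0
```

On empirical click-through trajectories one would also compare the power-law fit against thin-tailed alternatives before claiming Lévy behavior.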

  3. Activation of phonological competitors in visual search.

    PubMed

    Görges, Frauke; Oppermann, Frank; Jescheniak, Jörg D; Schriefers, Herbert

    2013-06-01

    Recently, Meyer, Belke, Telling and Humphreys (2007) reported that competitor objects with homophonous names (e.g., boy) interfere with identifying a target object (e.g., buoy) in a visual search task, suggesting that an object name's phonology becomes automatically activated even in situations in which participants do not have the intention to speak. The present study explored the generality of this finding by testing a different phonological relation (rhyming object names, e.g., cat-hat) and by varying details of the experimental procedure. Experiment 1 followed the procedure by Meyer et al. Participants were familiarized with target and competitor objects and their names at the beginning of the experiment and the picture of the target object was presented prior to the search display on each trial. In Experiment 2, the picture of the target object presented prior to the search display was replaced by its name. In Experiment 3, participants were not familiarized with target and competitor objects and their names at the beginning of the experiment. A small interference effect from phonologically related competitors was obtained in Experiments 1 and 2 but not in Experiment 3, suggesting that the way the relevant objects are introduced to participants affects the chances of observing an effect from phonologically related competitors. Implications for the information flow in the conceptual-lexical system are discussed. PMID:23584102

  4. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  5. Pattern visual evoked potentials in hyperthyroidism.

    PubMed Central

    Mitchell, K W; Wood, C M; Howe, J W

    1988-01-01

    Pattern reversal visual evoked potentials (VEPs) have been elicited in 16 female hyperthyroid patients before and after treatment and compared with those from a similar group of age and sex matched control subjects. No effect on latency was seen, and although larger amplitude values were noted in the thyrotoxic group these too were not significant. We would conclude that hyperthyroidism per se has little effect on the pattern reversal VEP, and any observed effect on these potentials is probably due to other factors. PMID:3415945

  6. Signatures of chaos in animal search patterns

    PubMed Central

    Reynolds, Andy M; Bartumeus, Frederic; Kölzsch, Andrea; van de Koppel, Johan

    2016-01-01

    One key objective of the emerging discipline of movement ecology is to link animal movement patterns to underlying biological processes, including those operating at the neurobiological level. Nonetheless, little is known about the physiological basis of animal movement patterns, and the underlying search behaviour. Here we demonstrate the hallmarks of chaotic dynamics in the movement patterns of mud snails (Hydrobia ulvae) moving in controlled experimental conditions, observed in the temporal dynamics of turning behaviour. Chaotic temporal dynamics are known to occur in pacemaker neurons in molluscs, but there have been no studies reporting on whether chaotic properties are manifest in the movement patterns of molluscs. Our results suggest that complex search patterns, like the Lévy walks made by mud snails, can have their mechanistic origins in chaotic neuronal processes. This possibility calls for new research on the coupling between neurobiology and motor properties. PMID:27019951

  7. Signatures of chaos in animal search patterns.

    PubMed

    Reynolds, Andy M; Bartumeus, Frederic; Kölzsch, Andrea; van de Koppel, Johan

    2016-01-01

    One key objective of the emerging discipline of movement ecology is to link animal movement patterns to underlying biological processes, including those operating at the neurobiological level. Nonetheless, little is known about the physiological basis of animal movement patterns, and the underlying search behaviour. Here we demonstrate the hallmarks of chaotic dynamics in the movement patterns of mud snails (Hydrobia ulvae) moving in controlled experimental conditions, observed in the temporal dynamics of turning behaviour. Chaotic temporal dynamics are known to occur in pacemaker neurons in molluscs, but there have been no studies reporting on whether chaotic properties are manifest in the movement patterns of molluscs. Our results suggest that complex search patterns, like the Lévy walks made by mud snails, can have their mechanistic origins in chaotic neuronal processes. This possibility calls for new research on the coupling between neurobiology and motor properties. PMID:27019951

  8. Visual search behaviour during laparoscopic cadaveric procedures

    NASA Astrophysics Data System (ADS)

    Dong, Leng; Chen, Yan; Gale, Alastair G.; Rees, Benjamin; Maxwell-Armstrong, Charles

    2014-03-01

    Laparoscopic surgery provides a very complex example of medical image interpretation. The task entails: visually examining a display that portrays the laparoscopic procedure from a varying viewpoint; eye-hand coordination; complex 3D interpretation of the 2D display imagery; efficient and safe usage of appropriate surgical tools, as well as other factors. Training in laparoscopic surgery typically entails practice using surgical simulators. Another approach is to use cadavers. Viewing previously recorded laparoscopic operations is also a viable additional approach and to examine this a study was undertaken to determine what differences exist between where surgeons look during actual operations and where they look when simply viewing the same pre-recorded operations. It was hypothesised that there would be differences related to the different experimental conditions; however the relative nature of such differences was unknown. The visual search behaviour of two experienced surgeons was recorded as they performed three types of laparoscopic operations on a cadaver. The operations were also digitally recorded. Subsequently they viewed the recording of their operations, again whilst their eye movements were monitored. Differences were found in various eye movement parameters when the two surgeons performed the operations and where they looked when they simply watched the recordings of the operations. It is argued that this reflects the different perceptual motor skills pertinent to the different situations. The relevance of this for surgical training is explored.

  9. Visual search and eye movements in novel and familiar contexts

    NASA Astrophysics Data System (ADS)

    McDermott, Kyle; Mulligan, Jeffrey B.; Bebis, George; Webster, Michael A.

    2006-02-01

    Adapting to the visual characteristics of a specific environment may facilitate detecting novel stimuli within that environment. We monitored eye movements while subjects searched for a color target on familiar or unfamiliar color backgrounds, in order to test for these performance changes and to explore whether they reflect changes in salience from adaptation vs. changes in search strategies or perceptual learning. The target was an ellipse of variable color presented at a random location on a dense background of ellipses. In one condition, the colors of the background varied along either the LvsM or SvsLM cardinal axes. Observers adapted by viewing a rapid succession of backgrounds drawn from one color axis, and then searched for a target on a background from the same or different color axis. Searches were monitored with a Cambridge Research Systems Video Eyetracker. Targets were located more quickly on the background axis that observers were pre-exposed to, confirming that this exposure can improve search efficiency for stimuli that differ from the background. However, eye movement patterns (e.g. fixation durations and saccade magnitudes) did not clearly differ across the two backgrounds, suggesting that how the novel and familiar backgrounds were sampled remained similar. In a second condition, we compared search on a nonselective color background drawn from a circle of hues at fixed contrast. Prior exposure to this background did not facilitate search compared to an achromatic adapting field, suggesting that subjects were not simply learning the specific colors defining the background distributions. Instead, results for both conditions are consistent with a selective adaptation effect that enhances the salience of novel stimuli by partially discounting the background.

  10. Eye Movements Reveal How Task Difficulty Moulds Visual Search

    ERIC Educational Resources Information Center

    Young, Angela H.; Hulleman, Johan

    2013-01-01

    In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…

  11. Global Statistical Learning in a Visual Search Task

    ERIC Educational Resources Information Center

    Jones, John L.; Kaschak, Michael P.

    2012-01-01

    Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…

  12. The Time Course of Similarity Effects in Visual Search

    ERIC Educational Resources Information Center

    Guest, Duncan; Lamberts, Koen

    2011-01-01

    It is well established that visual search becomes harder when the similarity between target and distractors is increased and the similarity between distractors is decreased. However, in models of visual search, similarity is typically treated as a static, time-invariant property of the relation between objects. Data from other perceptual tasks…

  13. Spatial Constraints on Learning in Visual Search: Modeling Contextual Cuing

    ERIC Educational Resources Information Center

    Brady, Timothy F.; Chun, Marvin M.

    2007-01-01

    Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using…

  14. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    ERIC Educational Resources Information Center

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  15. Vocal Dynamic Visual Pattern for voice characterization

    NASA Astrophysics Data System (ADS)

    Dajer, M. E.; Andrade, F. A. S.; Montagnoli, A. N.; Pereira, J. C.; Tsuji, D. H.

    2011-12-01

    Voice assessment requires simple and painless exams. Modern technologies provide the necessary resources for voice signal processing. Techniques based on nonlinear dynamics seem to assess the complexity of voice more accurately than other methods. Vocal dynamic visual pattern (VDVP) is based on nonlinear methods and provides qualitative and quantitative information. Here we characterize healthy and Reinke's edema voices by means of perturbation measures and VDVP analysis. VDVP and jitter show different results for both groups, while amplitude perturbation has no difference. We suggest that VDVP analysis improves and complements the evaluation methods available for clinicians.

  16. Visual Search Deficits Are Independent of Magnocellular Deficits in Dyslexia

    ERIC Educational Resources Information Center

    Wright, Craig M.; Conlon, Elizabeth G.; Dyck, Murray

    2012-01-01

    The aim of this study was to investigate the theory that visual magnocellular deficits seen in groups with dyslexia are linked to reading via the mechanisms of visual attention. Visual attention was measured with a serial search task and magnocellular function with a coherent motion task. A large group of children with dyslexia (n = 70) had slower…

  17. Usage Patterns of an Online Search System.

    ERIC Educational Resources Information Center

    Cooper, Michael D.

    1983-01-01

    Examines usage patterns of ELHILL retrieval program of National Library of Medicine's MEDLARS system. Based on sample of 6,759 searches, the study analyzes frequency of various commands, classifies messages issued by system, and investigates searcher error rates. Suggestions for redesigning program and query language are noted. Seven references…

  18. Online search patterns: NLM CATLINE database.

    PubMed

    Tolle, J E; Hah, S

    1985-03-01

    In this article the authors present their analysis of the online search patterns within user searching sessions of the National Library of Medicine ELHILL system and examine the user search patterns on the CATLINE database. In addition to the CATLINE analysis, a comparison is made using data previously analyzed on the MEDLINE database for the same time period, thus offering an opportunity to compare the performance parameters of different databases within the same information system. Data collection covers eight weeks and includes 441,282 transactions and over 11,067 user sessions, which accounted for 1680 hours of system usage. The descriptive analysis contained in this report can assist system design activities, while the predictive power of the transaction log analysis methodology may assist the development of real-time aids. PMID:10300015

  19. Competing Distractors Facilitate Visual Search in Heterogeneous Displays

    PubMed Central

    Kong, Garry; Alais, David; Van der Burg, Erik

    2016-01-01

    In the present study, we examine how observers search among complex displays. Participants were asked to search for a big red horizontal line among 119 distractor lines of various sizes, orientations and colours, leading to 36 different feature combinations. To understand how people search in such a heterogeneous display, we evolved the search display by using a genetic algorithm (Experiment 1). The best displays (i.e., displays corresponding to the fastest reaction times) were selected and combined to create new, evolved displays. Search times declined over generations. Results show that items sharing the same colour and orientation as the target disappeared over generations, implying they interfered with search, but items sharing the same colour and differing 12.5° in orientation interfered only if they were also the same size. Furthermore, and inconsistent with most dominant visual search theories, we found that non-red horizontal distractors increased over generations, indicating that these distractors facilitated visual search while participants were searching for a big red horizontally oriented target. In Experiments 2 and 3, we replicated these results using conventional, factorial experiments. Interestingly, in Experiment 4, we found that this facilitation effect was only present when the displays were very heterogeneous. While current models of visual search are able to successfully describe search in homogeneous displays, our results challenge the ability of these models to describe visual search in heterogeneous environments. PMID:27508298
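The display-evolution procedure (select the fastest-reaction-time displays, recombine them, repeat) is a standard genetic algorithm. A toy sketch in which "fitness" is a made-up stand-in for fast reaction time (the penalty on feature 0 and all names are illustrative, not from the study):

```python
import random

random.seed(1)
N_ITEMS, FEATURES, POP, GENS = 20, 36, 30, 40

def fitness(display):
    # Stand-in for negative reaction time: pretend distractors with
    # feature 0 slow the search down.
    return -sum(1 for f in display if f == 0)

def evolve():
    # Each display is a list of distractor feature indices.
    pop = [[random.randrange(FEATURES) for _ in range(N_ITEMS)]
           for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]           # keep the "fastest" displays
        children = []
        for _ in range(POP - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_ITEMS)
            child = a[:cut] + b[cut:]      # one-point crossover
            if random.random() < 0.2:      # occasional mutation
                child[random.randrange(N_ITEMS)] = random.randrange(FEATURES)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
# Interfering (feature-0) distractors are selected out over generations.
print(sum(1 for f in best if f == 0))
```

In the actual experiment the fitness signal came from human reaction times rather than a synthetic penalty, which is what lets the evolved displays reveal which distractor features genuinely interfere.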

  20. Online multiple kernel similarity learning for visual search.

    PubMed

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2014-03-01

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in content-based image retrieval (CBIR). Despite their successes, most existing methods on distance metric learning are limited in two aspects. First, they usually assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multimodal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel similarity learning framework for learning kernel-based proximity functions which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel online multiple kernel similarity (OMKS) learning method which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets in which encouraging results show that OMKS outperforms the state-of-the-art techniques significantly. PMID:24457509

  1. Visual search in a forced-choice paradigm

    NASA Technical Reports Server (NTRS)

    Holmgren, J. E.

    1974-01-01

    The processing of visual information was investigated in the context of two visual search tasks. The first was a forced-choice task in which one of two alternative letters appeared in a visual display of from one to five letters. The second task included trials on which neither of the two alternatives was present in the display. Search rates were estimated from the slopes of best linear fits to response latencies plotted as a function of the number of items in the visual display. These rates were found to be much slower than those estimated in yes-no search tasks. This result was interpreted as indicating that the processes underlying visual search in yes-no and forced-choice tasks are not the same.
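The search rates referred to above are conventionally estimated as the slope of the best linear fit to response latency versus display size. A minimal least-squares illustration with hypothetical latencies (the numbers are invented for the example):

```python
def search_rate(set_sizes, latencies_ms):
    """Slope (ms per item) of the least-squares line through RT vs set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(latencies_ms) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, latencies_ms))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical forced-choice data: about 50 ms per additional display item.
print(search_rate([1, 2, 3, 4, 5], [480, 530, 580, 630, 680]))  # → 50.0
```

A shallower slope in a yes-no task than in a forced-choice task, as reported above, would then show up directly as a smaller ms-per-item value.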

  2. Searching for intellectual turning points: Progressive knowledge domain visualization

    PubMed Central

    Chen, Chaomei

    2004-01-01

    This article introduces a previously undescribed method for progressively visualizing the evolution of a knowledge domain's cocitation network. The method first derives a sequence of cocitation networks from a series of equal-length time interval slices. These time-registered networks are merged and visualized in a panoramic view in such a way that intellectually significant articles can be identified based on their visually salient features. The method is applied to a cocitation study of the superstring field in theoretical physics. The study focuses on the search of articles that triggered two superstring revolutions. Visually salient nodes in the panoramic view are identified, and the nature of their intellectual contributions is validated by leading scientists in the field. The analysis has demonstrated that a search for intellectual turning points can be narrowed down to visually salient nodes in the visualized network. The method provides a promising way to simplify otherwise cognitively demanding tasks to a search for landmarks, pivots, and hubs. PMID:14724295

  3. The Roles of Non-retinotopic Motions in Visual Search

    PubMed Central

    Nakayama, Ryohei; Motoyoshi, Isamu; Sato, Takao

    2016-01-01

    In visual search, a moving target among stationary distracters is detected more rapidly and more efficiently than a static target among moving distracters. Here we examined how this search asymmetry depends on motion signals from three distinct coordinate systems—retinal, relative, and spatiotopic (head/body-centered). Our search display consisted of a target element, distracters elements, and a fixation point tracked by observers. Each element was composed of a spatial carrier grating windowed by a Gaussian envelope, and the motions of carriers, windows, and fixation were manipulated independently and used in various combinations to decouple the respective effects of motion coordinate systems on visual search asymmetry. We found that retinal motion hardly contributes to reaction times and search slopes but that relative and spatiotopic motions contribute to them substantially. Results highlight the important roles of non-retinotopic motions for guiding observer attention in visual search. PMID:27313560

  4. The Serial Process in Visual Search

    ERIC Educational Resources Information Center

    Gilden, David L.; Thornton, Thomas L.; Marusich, Laura R.

    2010-01-01

    The conditions for serial search are described. A multiple target search methodology (Thornton & Gilden, 2007) is used to home in on the simplest target/distractor contrast that effectively mandates a serial scheduling of attentional resources. It is found that serial search is required when (a) targets and distractors are mirror twins, and (b)…

  5. Asynchronous parallel pattern search for nonlinear optimization

    SciTech Connect

    P. D. Hough; T. G. Kolda; V. J. Torczon

    2000-01-01

    Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10--50) and by expensive objective function evaluations such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly-coupled parallel machines, is not well suited to the more heterogeneous, loosely-coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems and some engineering optimization problems.
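For context, the synchronous scheme that the asynchronous variant described above relaxes can be sketched as classical compass-style pattern search: poll the objective at a stencil of points around the incumbent, move on improvement, otherwise shrink the step. A generic serial sketch, not the authors' implementation:

```python
def pattern_search(f, x, step=1.0, tol=1e-6):
    """Minimize f by polling +/- step along each coordinate direction."""
    while step > tol:
        improved = False
        for i in range(len(x)):
            for sign in (+1, -1):
                trial = list(x)
                trial[i] += sign * step
                if f(trial) < f(x):       # accept any improving poll point
                    x, improved = trial, True
        if not improved:
            step /= 2.0                   # contract the stencil
    return x

# Smooth test problem with minimum at (3, -1).
sol = pattern_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
print([round(c, 3) for c in sol])  # → [3.0, -1.0]
```

In the parallel setting each poll direction can be evaluated on a different processor; the synchronization penalty arises because the synchronous method waits for every evaluation before deciding the next move.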

  6. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task.

    PubMed

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-07-01

    When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global "priority map" that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
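The modulation-plus-divisive-normalization step at the heart of such a model can be illustrated schematically. The feature responses, target weights, and pooling rule below are simplified assumptions for the sketch, not the published model:

```python
def priority_map(responses, target_weights, sigma=1.0):
    """Target-modulated responses divided by pooled activity (divisive inhibition)."""
    # Weight each location's feature responses by their match to the target.
    modulated = [[r * w for r, w in zip(row, target_weights)]
                 for row in responses]
    drive = [sum(row) for row in modulated]    # per-location drive
    pooled = sum(drive) / len(drive)           # shared normalization pool
    return [d / (sigma + pooled) for d in drive]

# Two feature channels at three locations; the target loads on channel 1,
# so location 1 (strong channel-1 response) should win the priority map.
responses = [[0.9, 0.1], [0.2, 0.8], [0.3, 0.3]]
target = [0.1, 1.0]
pm = priority_map(responses, target)
print(pm.index(max(pm)))  # → 1
```

The divisive term is what keeps a single high-contrast location from dominating the map regardless of how well it matches the target.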

  7. Visual pattern recognition in Drosophila is invariant for retinal position.

    PubMed

    Tang, Shiming; Wolf, Reinhard; Xu, Shuping; Heisenberg, Martin

    2004-08-13

    Vision relies on constancy mechanisms. Yet, these are little understood, because they are difficult to investigate in freely moving organisms. One such mechanism, translation invariance, enables organisms to recognize visual patterns independent of the region of their visual field where they had originally seen them. Tethered flies (Drosophila melanogaster) in a flight simulator can recognize visual patterns. Because their eyes are fixed in space and patterns can be displayed in defined parts of their visual field, they can be tested for translation invariance. Here, we show that flies recognize patterns at retinal positions where the patterns had not been presented before. PMID:15310908

  8. A neural network for visual pattern recognition

    SciTech Connect

    Fukushima, K.

    1988-03-01

    A modeling approach, which is a synthetic approach using neural network models, continues to gain importance. In the modeling approach, the authors study how to interconnect neurons to synthesize a brain model, which is a network with the same functions and abilities as the brain. The relationship between modeling neural networks and neurophysiology resembles that between theoretical physics and experimental physics. Modeling takes a synthetic approach, while neurophysiology or psychology takes an analytical approach. Modeling neural networks is useful in explaining the brain and also in engineering applications. It brings the results of neurophysiological and psychological research to engineering applications in the most direct way possible. This article discusses a neural network model thus obtained, a model with selective attention in visual pattern recognition.

  9. Global Image Dissimilarity in Macaque Inferotemporal Cortex Predicts Human Visual Search Efficiency

    PubMed Central

    Sripati, Arun P.; Olson, Carl R.

    2010-01-01

    Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity – the degree of overlap between the coarse footprints of a pair of images – largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054
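The "degree of overlap between the coarse footprints" of two images can be approximated with a simple sketch. This is an assumed operationalization for illustration (block-averaged downsampling plus cosine similarity), not the measure as specified in the paper:

```python
import numpy as np

def coarse_footprint(img, block=4):
    """Coarse footprint: block-average a 2D image to low resolution."""
    h, w = img.shape
    return img[:h - h % block, :w - w % block] \
        .reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def footprint_overlap(img_a, img_b, block=4):
    """Normalized overlap (cosine similarity) of two coarse footprints."""
    a = coarse_footprint(img_a, block).ravel()
    b = coarse_footprint(img_b, block).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Under this reading, image pairs built from the same local features in different global arrangements have low footprint overlap and should therefore be easy to discriminate in search, consistent with the reported result.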

  10. Do People Take Stimulus Correlations into Account in Visual Search?

    PubMed Central

    Bhardwaj, Manisha; van den Berg, Ronald

    2016-01-01

    In laboratory visual search experiments, distractors are often statistically independent of each other. However, stimuli in more naturalistic settings are often correlated and rarely independent. Here, we examine whether human observers take stimulus correlations into account in orientation target detection. We find that they do, although probably not optimally. In particular, it seems that low distractor correlations are overestimated. Our results might contribute to bridging the gap between artificial and natural visual search tasks. PMID:26963498

  11. 'Where' and 'what' in visual search.

    PubMed

    Atkinson, J; Braddick, O J

    1989-01-01

    A line segment target can be detected among distractors of a different orientation by a fast 'preattentive' process. One view is that this depends on detection of a 'feature gradient', which enables subjects to locate where the target is without necessarily identifying what it is. An alternative view is that a target can be identified as distinctive in a particular 'feature map' without subjects knowing where it is in that map. Experiments are reported in which briefly exposed arrays of line segments were followed by a pattern mask, and the threshold stimulus-mask interval determined for three tasks: 'what'--subjects reported whether the target was vertical or horizontal among oblique distractors; 'coarse where'--subjects reported whether the target was in the upper or lower half of the array; 'fine where'--subjects reported whether or not the target was in a set of four particular array positions. The threshold interval was significantly lower for the 'coarse where' than for the 'what' task, indicating that, even though localization in this task depends on the target's orientation difference, this localization is possible without absolute identification of target orientation. However, for the 'fine where' task, intervals as long as or longer than those for the 'what' task were required. It appears either that different localization processes work at different levels of resolution, or that a single localization process, independent of identification, can increase its resolution at the expense of processing speed. These possibilities are discussed in terms of distinct neural representations of the visual field and fixed or variable localization processes acting upon them. PMID:2771603

  12. Conjunctive Visual Search in Individuals with and without Mental Retardation

    ERIC Educational Resources Information Center

    Carlin, Michael; Chrysler, Christina; Sullivan, Kate

    2007-01-01

    A comprehensive understanding of the basic visual and cognitive abilities of individuals with mental retardation is critical for understanding the basis of mental retardation and for the design of remediation programs. We assessed visual search abilities in individuals with mild mental retardation and in MA- and CA-matched comparison groups. Our…

  13. Visual Search by Children with and without ADHD

    ERIC Educational Resources Information Center

    Mullane, Jennifer C.; Klein, Raymond M.

    2008-01-01

    Objective: To summarize the literature that has employed visual search tasks to assess automatic and effortful selective visual attention in children with and without ADHD. Method: Seven studies with a combined sample of 180 children with ADHD (M age = 10.9) and 193 normally developing children (M age = 10.8) are located. Results: Using a…

  14. Changing Perspective: Zooming in and out during Visual Search

    ERIC Educational Resources Information Center

    Solman, Grayden J. F.; Cheyne, J. Allan; Smilek, Daniel

    2013-01-01

    Laboratory studies of visual search are generally conducted in contexts with a static observer vantage point, constrained by a fixation cross or a headrest. In contrast, in many naturalistic search settings, observers freely adjust their vantage point by physically moving through space. In two experiments, we evaluate behavior during free vantage…

  15. Why Is Visual Search Superior in Autism Spectrum Disorder?

    ERIC Educational Resources Information Center

    Joseph, Robert M.; Keehn, Brandon; Connolly, Christine; Wolfe, Jeremy M.; Horowitz, Todd S.

    2009-01-01

    This study investigated the possibility that enhanced memory for rejected distractor locations underlies the superior visual search skills exhibited by individuals with autism spectrum disorder (ASD). We compared the performance of 21 children with ASD and 21 age- and IQ-matched typically developing (TD) children in a standard static search task…

  16. Pip and Pop: Nonspatial Auditory Signals Improve Spatial Visual Search

    ERIC Educational Resources Information Center

    Van der Burg, Erik; Olivers, Christian N. L.; Bronkhorst, Adelbert W.; Theeuwes, Jan

    2008-01-01

    Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the pip contains no information on the location…

  17. Visual Search Asymmetry with Uncertain Targets

    ERIC Educational Resources Information Center

    Saiki, Jun; Koike, Takahiko; Takahashi, Kohske; Inoue, Tomoko

    2005-01-01

    The underlying mechanism of search asymmetry is still unknown. Many computational models postulate top-down selection of target-defining features as a crucial factor. This feature selection account implies, and other theories implicitly assume, that predefined target identity is necessary for search asymmetry. The authors tested the validity of…

  18. Individual Differences and Metacognitive Knowledge of Visual Search Strategy

    PubMed Central

    Proulx, Michael J.

    2011-01-01

    A crucial ability for an organism is to orient toward important objects and to ignore temporarily irrelevant objects. Attention provides the perceptual selectivity necessary to filter an overwhelming input of sensory information to allow for efficient object detection. Although much research has examined visual search and the ‘template’ of attentional set that allows for target detection, the behavior of individual subjects often reveals the limits of experimental control of attention. Few studies have examined important aspects such as individual differences and metacognitive strategies. The present study analyzes the data from two visual search experiments for a conjunctively defined target (Proulx, 2007). The data revealed attentional capture blindness, individual differences in search strategies, and a significant rate of metacognitive errors for the assessment of the strategies employed. These results highlight a challenge for visual attention studies to account for individual differences in search behavior and distractibility, and participants that do not (or are unable to) follow instructions. PMID:22066030

  19. Visual Search in a Multi-Element Asynchronous Dynamic (MAD) World

    ERIC Educational Resources Information Center

    Kunar, Melina A.; Watson, Derrick G.

    2011-01-01

    In visual search tasks participants search for a target among distractors in strictly controlled displays. We show that visual search principles observed in these tasks do not necessarily apply in more ecologically valid search conditions, using dynamic and complex displays. A multi-element asynchronous dynamic (MAD) visual search was developed in…

  20. The impact of expert visual guidance on trainee visual search strategy, visual attention and motor skills

    PubMed Central

    Leff, Daniel R.; James, David R. C.; Orihuela-Espina, Felipe; Kwok, Ka-Wai; Sun, Loi Wah; Mylonas, George; Athanasiou, Thanos; Darzi, Ara W.; Yang, Guang-Zhong

    2015-01-01

    Minimally invasive and robotic surgery changes the capacity for surgical mentors to guide their trainees with the control customary to open surgery. This neuroergonomic study aims to assess a “Collaborative Gaze Channel” (CGC); which detects trainer gaze-behavior and displays the point of regard to the trainee. A randomized crossover study was conducted in which twenty subjects performed a simulated robotic surgical task necessitating collaboration either with verbal (control condition) or visual guidance with CGC (study condition). Trainee occipito-parietal (O-P) cortical function was assessed with optical topography (OT) and gaze-behavior was evaluated using video-oculography. Performance during gaze-assistance was significantly superior [biopsy number: (mean ± SD): control = 5.6 ± 1.8 vs. CGC = 6.6 ± 2.0; p < 0.05] and was associated with significantly lower O-P cortical activity [ΔHbO2 mMol × cm [median (IQR)] control = 2.5 (12.0) vs. CGC 0.63 (11.2), p < 0.001]. A random effect model (REM) confirmed the association between guidance mode and O-P excitation. Network cost and global efficiency were not significantly influenced by guidance mode. A gaze channel enhances performance, modulates visual search, and alleviates the burden in brain centers subserving visual attention and does not induce changes in the trainee’s O-P functional network observable with the current OT technique. The results imply that through visual guidance, attentional resources may be liberated, potentially improving the capability of trainees to attend to other safety critical events during the procedure. PMID:26528160

  1. The impact of expert visual guidance on trainee visual search strategy, visual attention and motor skills.

    PubMed

    Leff, Daniel R; James, David R C; Orihuela-Espina, Felipe; Kwok, Ka-Wai; Sun, Loi Wah; Mylonas, George; Athanasiou, Thanos; Darzi, Ara W; Yang, Guang-Zhong

    2015-01-01

    Minimally invasive and robotic surgery changes the capacity for surgical mentors to guide their trainees with the control customary to open surgery. This neuroergonomic study aims to assess a "Collaborative Gaze Channel" (CGC); which detects trainer gaze-behavior and displays the point of regard to the trainee. A randomized crossover study was conducted in which twenty subjects performed a simulated robotic surgical task necessitating collaboration either with verbal (control condition) or visual guidance with CGC (study condition). Trainee occipito-parietal (O-P) cortical function was assessed with optical topography (OT) and gaze-behavior was evaluated using video-oculography. Performance during gaze-assistance was significantly superior [biopsy number: (mean ± SD): control = 5.6 ± 1.8 vs. CGC = 6.6 ± 2.0; p < 0.05] and was associated with significantly lower O-P cortical activity [ΔHbO2 mMol × cm [median (IQR)] control = 2.5 (12.0) vs. CGC 0.63 (11.2), p < 0.001]. A random effect model (REM) confirmed the association between guidance mode and O-P excitation. Network cost and global efficiency were not significantly influenced by guidance mode. A gaze channel enhances performance, modulates visual search, and alleviates the burden in brain centers subserving visual attention and does not induce changes in the trainee's O-P functional network observable with the current OT technique. The results imply that through visual guidance, attentional resources may be liberated, potentially improving the capability of trainees to attend to other safety critical events during the procedure. PMID:26528160

  2. Parallel and Serial Processes in Visual Search

    ERIC Educational Resources Information Center

    Thornton, Thomas L.; Gilden, David L.

    2007-01-01

    A long-standing issue in the study of how people acquire visual information centers around the scheduling and deployment of attentional resources: Is the process serial, or is it parallel? A substantial empirical effort has been dedicated to resolving this issue. However, the results remain largely inconclusive because the methodologies that have…

  3. Visual Search and the Collapse of Categorization

    ERIC Educational Resources Information Center

    Smith, J. David; Redford, Joshua S.; Gent, Lauren C.; Washburn, David A.

    2005-01-01

    Categorization researchers typically present single objects to be categorized. But real-world categorization often involves object recognition within complex scenes. It is unknown how the processes of categorization stand up to visual complexity or why they fail facing it. The authors filled this research gap by blending the categorization and…

  4. Design and Implementation of Cancellation Tasks for Visual Search Strategies and Visual Attention in School Children

    ERIC Educational Resources Information Center

    Wang, Tsui-Ying; Huang, Ho-Chuan; Huang, Hsiu-Shuang

    2006-01-01

    We propose a computer-assisted cancellation test system (CACTS) to understand the visual attention performance and visual search strategies in school children. The main aim of this paper is to present our design and development of the CACTS and demonstrate some ways in which computer techniques can allow the educator not only to obtain more…

  5. Visual search and attention to faces during early infancy.

    PubMed

    Frank, Michael C; Amso, Dima; Johnson, Scott P

    2014-02-01

    Newborn babies look preferentially at faces and face-like displays, yet over the course of their first year much changes about both the way infants process visual stimuli and how they allocate their attention to the social world. Despite this initial preference for faces in restricted contexts, the amount that infants look at faces increases considerably during the first year. Is this development related to changes in attentional orienting abilities? We explored this possibility by showing 3-, 6-, and 9-month-olds engaging animated and live-action videos of social stimuli and also measuring their visual search performance with both moving and static search displays. Replicating previous findings, looking at faces increased with age; in addition, the amount of looking at faces was strongly related to the youngest infants' performance in visual search. These results suggest that infants' attentional abilities may be an important factor in facilitating their social attention early in development. PMID:24211654

  6. Size Scaling in Visual Pattern Recognition

    ERIC Educational Resources Information Center

    Larsen, Axel; Bundesen, Claus

    1978-01-01

    Human visual recognition on the basis of shape but regardless of size was investigated by reaction time methods. Results suggested two processes of size scaling: mental-image transformation and perceptual-scale transformation. Image transformation accounted for matching performance based on visual short-term memory, whereas scale transformation…

  7. Audio-visual stimulation improves oculomotor patterns in patients with hemianopia.

    PubMed

    Passamonti, Claudia; Bertini, Caterina; Làdavas, Elisabetta

    2009-01-01

    Patients with visual field disorders often exhibit impairments in visual exploration and a typical defective oculomotor scanning behaviour. Recent evidence [Bolognini, N., Rasi, F., Coccia, M., & Làdavas, E. (2005b). Visual search improvement in hemianopic patients after audio-visual stimulation. Brain, 128, 2830-2842] suggests that systematic audio-visual stimulation of the blind hemifield can improve accuracy and search times in visual exploration, probably due to the stimulation of Superior Colliculus (SC), an important multisensory structure involved in both the initiation and execution of saccades. The aim of the present study is to verify this hypothesis by studying the effects of multisensory training on oculomotor scanning behaviour. Oculomotor responses during a visual search task and a reading task were studied before and after visual (control) or audio-visual (experimental) training, in a group of 12 patients with chronic visual field defects and 12 controls subjects. Eye movements were recorded using an infra-red technique which measured a range of spatial and temporal variables. Prior to treatment, patients' performance was significantly different from that of controls in relation to fixations and saccade parameters; after Audio-Visual Training, all patients reported an improvement in ocular exploration characterized by fewer fixations and refixations, quicker and larger saccades, and reduced scanpath length. Overall, these improvements led to a reduction of total exploration time. Similarly, reading parameters were significantly affected by the training, with respect to specific impairments observed in both left- and right-hemianopia readers. Our findings provide evidence that Audio-Visual Training, by stimulating the SC, may induce a more organized pattern of visual exploration due to an implementation of efficient oculomotor strategies. Interestingly, the improvement was found to be stable at a 1 year follow-up control session, indicating a long

  8. Group-level differences in visual search asymmetry.

    PubMed

    Cramer, Emily S; Dusko, Michelle J; Rensink, Ronald A

    2016-08-01

    East Asians and Westerners differ in various aspects of perception and cognition. For example, visual memory for East Asians is believed to be more influenced by the contextual aspects of a scene than is the case for Westerners (Masuda & Nisbett in Journal of Personality and Social Psychology, 81, 922-934, 2001). There are also differences in visual search: For Westerners, search is faster for a long line among short ones than for a short line among long ones, whereas this difference does not appear to hold for East Asians (Ueda et al., 2016). However, it is unclear how these group-level differences originate. To investigate the extent to which they depend upon environment, we tested visual search and visual memory in East Asian immigrants who had lived in Canada for different amounts of time. Recent immigrants were found to exhibit no search asymmetry, unlike Westerners who had spent their lives in Canada. However, immigrants who had lived in Canada for more than 2 years showed performance comparable to that of Westerners. These differences could not be explained by the general analytic/holistic processing distinction believed to differentiate Westerners and East Asians, since all observers showed a strong holistic tendency for visual recognition. The results instead support the suggestion that exposure to a new environment can significantly affect the particular processes used to perceive a given stimulus. PMID:27270735

  9. Learned face-voice pairings facilitate visual search

    PubMed Central

    Zweig, L. Jacob; Suzuki, Satoru; Grabowecky, Marcia

    2014-01-01

    Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information and they facilitate localization of the sought individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face/voice associations and verified learning using a two-alternative forced-choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent-learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent-learned voice speeded visual search for the associated face. This result suggests that voices facilitate visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training. PMID:25023955

  10. Learned face-voice pairings facilitate visual search.

    PubMed

    Zweig, L Jacob; Suzuki, Satoru; Grabowecky, Marcia

    2015-04-01

    Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information, and they facilitate localization of the sought-after individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face-voice associations and verified learning using a two-alternative forced choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent learned voice speeded visual search for the associated face. This result suggests that voices facilitate the visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training. PMID:25023955

  11. Losing the trees for the forest in dynamic visual search.

    PubMed

    Jardine, Nicole L; Moore, Cathleen M

    2016-05-01

    Representing temporally continuous objects across change (e.g., in position) requires integration of newly sampled visual information with existing object representations. We asked what consequences representational updating has for visual search. In this dynamic visual search task, bars rotated around their central axis. Observers searched for a single episodic target state (oblique bar among vertical and horizontal bars). Search was efficient when the target display was presented as an isolated static display. Performance declined to near chance, however, when the same display was a single state of a dynamically changing scene (Experiment 1), as though temporal selection of the target display from the stream of stimulation failed entirely (Experiment 3). The deficit is attributable neither to masking (Experiment 2), nor to a lack of temporal marker for the target display (Experiment 4). The deficit was partially reduced by visually marking the target display with unique feature information (Experiment 5). We suggest that representational updating causes a loss of access to instantaneous state information in search. Similar to spatially crowded displays that are perceived as textures (Parkes, Lund, Angelucci, Solomon, & Morgan, 2001), we propose a temporal version of the trees (instantaneous orientation information) being lost for the forest (rotating bars). (PsycINFO Database Record) PMID:26689307

  12. Visual Exploratory Search of Relationship Graphs on Smartphones

    PubMed Central

    Ouyang, Jianquan; Zheng, Hao; Kong, Fanbin; Liu, Tianming

    2013-01-01

    This paper presents a novel framework for Visual Exploratory Search of Relationship Graphs on Smartphones (VESRGS) that is composed of three major components: inference and representation of semantic relationship graphs on the Web via meta-search, visual exploratory search of relationship graphs through both querying and browsing strategies, and human-computer interactions via the multi-touch interface and mobile Internet on smartphones. In comparison with traditional lookup search methodologies, the proposed VESRGS system is characterized with the following perceived advantages. 1) It infers rich semantic relationships between the querying keywords and other related concepts from large-scale meta-search results from Google, Yahoo! and Bing search engines, and represents semantic relationships via graphs; 2) the exploratory search approach empowers users to naturally and effectively explore, adventure and discover knowledge in a rich information world of interlinked relationship graphs in a personalized fashion; 3) it effectively takes the advantages of smartphones’ user-friendly interfaces and ubiquitous Internet connection and portability. Our extensive experimental results have demonstrated that the VESRGS framework can significantly improve the users’ capability of seeking the most relevant relationship information to their own specific needs. We envision that the VESRGS framework can be a starting point for future exploration of novel, effective search strategies in the mobile Internet era. PMID:24223936

  13. Visual exploratory search of relationship graphs on smartphones.

    PubMed

    Ouyang, Jianquan; Zheng, Hao; Kong, Fanbin; Liu, Tianming

    2013-01-01

    This paper presents a novel framework for Visual Exploratory Search of Relationship Graphs on Smartphones (VESRGS) that is composed of three major components: inference and representation of semantic relationship graphs on the Web via meta-search, visual exploratory search of relationship graphs through both querying and browsing strategies, and human-computer interactions via the multi-touch interface and mobile Internet on smartphones. In comparison with traditional lookup search methodologies, the proposed VESRGS system is characterized with the following perceived advantages. 1) It infers rich semantic relationships between the querying keywords and other related concepts from large-scale meta-search results from Google, Yahoo! and Bing search engines, and represents semantic relationships via graphs; 2) the exploratory search approach empowers users to naturally and effectively explore, adventure and discover knowledge in a rich information world of interlinked relationship graphs in a personalized fashion; 3) it effectively takes the advantages of smartphones' user-friendly interfaces and ubiquitous Internet connection and portability. Our extensive experimental results have demonstrated that the VESRGS framework can significantly improve the users' capability of seeking the most relevant relationship information to their own specific needs. We envision that the VESRGS framework can be a starting point for future exploration of novel, effective search strategies in the mobile Internet era. PMID:24223936

  14. Rapid Resumption of Interrupted Search Is Independent of Age-Related Improvements in Visual Search

    ERIC Educational Resources Information Center

    Lleras, Alejandro; Porporino, Mafalda; Burack, Jacob A.; Enns, James T.

    2011-01-01

    In this study, 7-19-year-olds performed an interrupted visual search task in two experiments. Our question was whether the tendency to respond within 500 ms after a second glimpse of a display (the "rapid resumption" effect ["Psychological Science", 16 (2005) 684-688]) would increase with age in the same way as overall search efficiency. The…

  15. Measuring Search Efficiency in Complex Visual Search Tasks: Global and Local Clutter

    ERIC Educational Resources Information Center

    Beck, Melissa R.; Lohrenz, Maura C.; Trafton, J. Gregory

    2010-01-01

    Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e.,…

  16. The effect of a visual indicator on rate of visual search Evidence for processing control

    NASA Technical Reports Server (NTRS)

    Holmgren, J. E.

    1974-01-01

    Search rates were estimated from response latencies in a visual search task of the type used by Atkinson et al. (1969), in which a subject searches a small set of letters to determine the presence or absence of a predesignated target. Half of the visual displays contained a marker above one of the letters. The marked letter was the only one that had to be checked to determine whether or not the display contained the target. The presence of a marker in a display significantly increased the estimated rate of search, but the data clearly indicated that subjects did not restrict processing to the marked item. Letters in the vicinity of the marker were also processed. These results were interpreted as showing that subjects are able to exercise some degree of control over the search process in this type of task.

  17. Visual search for arbitrary objects in real scenes

    PubMed Central

    Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.

    2011-01-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156
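The efficiency index used in this abstract, the slope of the RT × Set Size function, is just the slope of a least-squares line fit through mean reaction times. A minimal sketch, with hypothetical RT values chosen to mimic the ~5 ms/item efficient-search case reported above:

```python
# Estimate visual-search efficiency as the slope of the RT x Set Size
# function via ordinary least squares. All data values are hypothetical.

def search_slope(set_sizes, rts):
    """Return (slope in ms/item, intercept in ms) of the RT vs. set-size line."""
    n = len(set_sizes)
    mean_x = sum(set_sizes) / n
    mean_y = sum(rts) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(set_sizes, rts))
    var = sum((x - mean_x) ** 2 for x in set_sizes)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical mean RTs (ms) at set sizes 4, 8, 16, 32:
sizes = [4, 8, 16, 32]
rts = [520, 540, 580, 660]
slope, intercept = search_slope(sizes, rts)  # slope -> 5.0 ms/item here
```

A shallow slope (a few ms/item) indicates efficient search; steeper slopes (tens of ms/item) indicate item-by-item inspection.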

  18. Comparing target detection errors in visual search and manually-assisted search.

    PubMed

    Solman, Grayden J F; Hickey, Kersondra; Smilek, Daniel

    2014-05-01

    Subjects searched for low- or high-prevalence targets among static nonoverlapping items or items piled in heaps that could be moved using a computer mouse. We replicated the classical prevalence effect both in visual search and when unpacking items from heaps, with more target misses under low prevalence. Moreover, we replicated our previous finding that while unpacking, people often move the target item without noticing (the unpacking error) and determined that these errors also increase under low prevalence. On the basis of a comparison of item movements during the manually-assisted search and eye movements during static visual search, we suggest that low prevalence leads to broadly reduced diligence during search but that the locus of this reduced diligence depends on the nature of the task. In particular, while misses during visual search often arise from a failure to inspect all of the items, misses during manually-assisted search more often result from a failure to adequately inspect individual items. Indeed, during manually-assisted search, over 90 % of target misses occurred despite subjects having moved the target item during search. PMID:24554230

  19. Enhancing Visual Search Abilities of People with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Li-Tsang, Cecilia W. P.; Wong, Jackson K. K.

    2009-01-01

    This study aimed to evaluate the effects of cueing in visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using convenient sampling method. A series of experiments were conducted to compare guided cue strategies using…

  20. Attention Capacity and Task Difficulty in Visual Search

    ERIC Educational Resources Information Center

    Huang, Liqiang; Pashler, Harold

    2005-01-01

    When a visual search task is very difficult (as when a small feature difference defines the target), even detection of a unique element may be substantially slowed by increases in display set size. This has been attributed to the influence of attentional capacity limits. We examined the influence of attentional capacity limits on three kinds of…

  1. Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms

    Energy Science and Technology Software Center (ESTSC)

    2002-05-01

    We developed new pattern recognition (PR) algorithms based on a human visual perception model. We named these algorithms Visual Empirical Region of Influence (VERI) algorithms. To compare the new algorithms' effectiveness against other PR algorithms, we benchmarked their clustering capabilities with a standard set of two-dimensional data that is well known in the PR community. The VERI algorithm succeeded in clustering all the data correctly. No existing algorithm had previously clustered all the patterns in the data set successfully. The commands to execute VERI algorithms are quite difficult to master when executed from a DOS command line, and the algorithm requires several parameters to operate correctly. From our own experiences we realized that if we wanted to provide a new data analysis tool to the PR community, we would have to make the tool powerful, yet easy and intuitive to use. That was our motivation for developing graphical user interfaces (GUIs) to the VERI algorithms. We developed GUIs to control the VERI algorithm in a single-pass mode and in an optimization mode. We also developed a visualization technique that allows users to graphically animate and visually inspect multi-dimensional data after it has been classified by the VERI algorithms. The visualization package is integrated into the single-pass interface. Both the single-pass interface and the optimization interface are part of the PR software package we have developed and make available to other users. The single-pass mode only finds PR results for the sets of features in the data set that are manually requested by the user. The optimization mode uses a brute-force method of searching through the combinations of features in a data set for features that produce

  2. Accurate expectancies diminish perceptual distraction during visual search

    PubMed Central

    Sy, Jocelyn L.; Guerin, Scott A.; Stegman, Anna; Giesbrecht, Barry

    2014-01-01

    The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively “spills-over” to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations using behavior, functional magnetic resonance imaging, and electrophysiology. Expectations were generated using a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean blood oxygenation level dependent responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information. PMID:24904374

  3. Bumblebee visual search for multiple learned target types.

    PubMed

    Nityananda, Vivek; Pattrick, Jonathan G

    2013-11-15

    Visual search is well studied in human psychology, but we know comparatively little about similar capacities in non-human animals. It is sometimes assumed that animal visual search is restricted to a single target at a time. In bees, for example, this limitation has been evoked to explain flower constancy, the tendency of bees to specialise on a single flower type. Few studies, however, have investigated bee visual search for multiple target types after extended learning and controlling for prior visual experience. We trained colour-naive bumblebees (Bombus terrestris) extensively in separate discrimination tasks to recognise two rewarding colours in interspersed block training sessions. We then tested them with the two colours simultaneously in the presence of distracting colours to examine whether and how quickly they were able to switch between the target colours. We found that bees switched between visual targets quickly and often. The median time taken to switch between targets was shorter than known estimates of how long traces last in bees' working memory, suggesting that their capacity to recall more than one learned target was not restricted by working memory limitations. Following our results, we propose a model of memory and learning that integrates our findings with those of previous studies investigating flower constancy. PMID:23948481

  4. Is pop-out visual search attentive or preattentive? Yes!

    PubMed

    Lagroix, Hayley E P; Di Lollo, Vincent; Spalek, Thomas M

    2015-04-01

    Is the efficiency of "pop-out" visual search impaired when attention is preempted by another task? This question has been raised in earlier experiments but has not received a satisfactory answer. To constrain the availability of attention, those experiments employed an attentional blink (AB) paradigm in which report of the second of 2 targets (T2) is impaired when it is presented shortly after the first (T1). In those experiments, T2 was a pop-out search display that remained on view until response. The main finding was that search efficiency, as indexed by the slope of the search function, was not impaired during the period of the AB. With such long displays, however, the search could be postponed until T1 had been processed, thus allowing the task to be performed with full attention. That pitfall was avoided in the present Experiment 1 by presenting the search array either until response (thus allowing a postponement strategy) or very briefly (making that strategy ineffectual). Level of performance was impaired during the period of the AB, but search efficiency was unimpaired even when the display was brief. Experiment 2 showed that visual search is indeed postponed during the period of the AB, when the array remains on view until response. These findings reveal the action of at least 2 separable mechanisms, indexed by level and efficiency of pop-out search, which are affected in different ways by the availability of attention. The Guided Search 4.0 model can account for the results in both level and efficiency. PMID:25706768

  5. Attention during visual search: The benefit of bilingualism

    PubMed Central

    Friesen, Deanna C; Latman, Vered; Calvo, Alejandra; Bialystok, Ellen

    2015-01-01

    Aims and Objectives/Purpose/Research Questions Following reports showing bilingual advantages in executive control (EC) performance, the current study investigated the role of selective attention as a foundational skill that might underlie these advantages. Design/Methodology/Approach Bilingual and monolingual young adults performed a visual search task by determining whether a target shape was present amid distractor shapes. Task difficulty was manipulated by search type (feature or conjunction) and by the number and discriminability of the distractors. In feature searches, the target (e.g., green triangle) differed on a single dimension (e.g., color) from the distractors (e.g., yellow triangles); in conjunction searches, two types of distractors (e.g., pink circles and turquoise squares) each differed from the target (e.g., turquoise circle) on a single but different dimension (e.g., color or shape). Data and Analysis Reaction time and accuracy data from 109 young adults (53 monolinguals and 56 bilinguals) were analyzed using a repeated-measures analysis of variance. Group membership, search type, number and discriminability of distractors were the independent variables. Findings/Conclusions Participants identified the target more quickly in the feature searches, when the target was highly discriminable from the distractors and when there were fewer distractors. Importantly, although monolinguals and bilinguals performed equivalently on the feature searches, bilinguals were significantly faster than monolinguals in identifying the target in the more difficult conjunction search, providing evidence for better control of visual attention in bilinguals. Originality Unlike previous studies on bilingual visual attention, the current study found a bilingual attention advantage in a paradigm that did not include a Stroop-like manipulation to set up false expectations. Significance/Implications Thus, our findings indicate that the need to resolve explicit conflict or

  6. Irrelevant objects of expertise compete with faces during visual search

    PubMed Central

    McGugin, Rankin W.; McKeeff, Thomas J.; Tong, Frank; Gauthier, Isabel

    2010-01-01

    Prior work suggests that non-face objects of expertise can interfere with the perception of faces when the two categories are alternately presented, suggesting competition for shared perceptual resources. Here we ask whether task-irrelevant distractors from a category of expertise compete when faces are presented in a standard visual search task. Participants searched for a target (face or sofa) in an array containing both relevant and irrelevant distractors. The number of distractors from the target category (face or sofa) remained constant, while the number of distractors from the irrelevant category (cars) varied. Search slopes, calculated as a function of the number of irrelevant cars, were correlated with car expertise. The effect was not due to car distractors grabbing attention because they did not compete with sofa targets. Objects of expertise interfere with face perception even when they are task irrelevant, visually distinct and separated in space from faces. PMID:21264705

  7. Entrainment of Human Alpha Oscillations Selectively Enhances Visual Conjunction Search

    PubMed Central

    Müller, Notger G.; Vellage, Anne-Katrin; Heinze, Hans-Jochen; Zaehle, Tino

    2015-01-01

    The functional role of the alpha-rhythm which dominates the human electroencephalogram (EEG) is unclear. It has been related to visual processing, attentional selection and object coherence, respectively. Here we tested the interaction of alpha oscillations of the human brain with visual search tasks that differed in their attentional demands (pre-attentive vs. attentive) and also in the necessity to establish object coherence (conjunction vs. single feature). Between pre- and post-assessment elderly subjects received 20 min/d of repetitive transcranial alternating current stimulation (tACS) over the occipital cortex adjusted to their individual alpha frequency over five consecutive days. Compared to sham the entrained alpha oscillations led to a selective, set size independent improvement in the conjunction search task performance but not in the easy or in the hard feature search task. These findings suggest that cortical alpha oscillations play a specific role in establishing object coherence through suppression of distracting objects. PMID:26606255

  8. Entrainment of Human Alpha Oscillations Selectively Enhances Visual Conjunction Search.

    PubMed

    Müller, Notger G; Vellage, Anne-Katrin; Heinze, Hans-Jochen; Zaehle, Tino

    2015-01-01

    The functional role of the alpha-rhythm which dominates the human electroencephalogram (EEG) is unclear. It has been related to visual processing, attentional selection and object coherence, respectively. Here we tested the interaction of alpha oscillations of the human brain with visual search tasks that differed in their attentional demands (pre-attentive vs. attentive) and also in the necessity to establish object coherence (conjunction vs. single feature). Between pre- and post-assessment elderly subjects received 20 min/d of repetitive transcranial alternating current stimulation (tACS) over the occipital cortex adjusted to their individual alpha frequency over five consecutive days. Compared to sham the entrained alpha oscillations led to a selective, set size independent improvement in the conjunction search task performance but not in the easy or in the hard feature search task. These findings suggest that cortical alpha oscillations play a specific role in establishing object coherence through suppression of distracting objects. PMID:26606255

  9. The Mechanisms Underlying the ASD Advantage in Visual Search.

    PubMed

    Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S; Blaser, Erik

    2016-05-01

    A number of studies have demonstrated that individuals with autism spectrum disorders (ASDs) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin and Frith in Neuron 48:497-507, 2005; Simmons et al. in Vis Res 49:2705-2739, 2009). This "ASD advantage" was first identified in the domain of visual search by Plaisted et al. (J Child Psychol Psychiatry 39:777-783, 1998). Here we survey the findings of visual search studies from the past 15 years that contrasted the performance of individuals with and without ASD. Although there are some minor caveats, the overall consensus is that-across development and a broad range of symptom severity-individuals with ASD reliably outperform controls on visual search. The etiology of the ASD advantage has not been formally specified, but has been commonly attributed to 'enhanced perceptual discrimination', a superior ability to visually discriminate between targets and distractors in such tasks (e.g. O'Riordan in Cognition 77:81-96, 2000). As well, there is considerable evidence for impairments of the attentional network in ASD (for a review, see Keehn et al. in J Child Psychol Psychiatry 37:164-183, 2013). We discuss some recent results from our laboratory that support an attentional, rather than perceptual explanation for the ASD advantage in visual search. We speculate that this new conceptualization may offer a better understanding of some of the behavioral symptoms associated with ASD, such as over-focusing and restricted interests. PMID:24091470

  10. LASAGNA-Search: an integrated web tool for transcription factor binding site search and visualization.

    PubMed

    Lee, Chih; Huang, Chun-Hsi

    2013-03-01

    The release of ChIP-seq data from the ENCyclopedia Of DNA Elements (ENCODE) and Model Organism ENCyclopedia Of DNA Elements (modENCODE) projects has significantly increased the amount of transcription factor (TF) binding affinity information available to researchers. However, scientists still routinely use TF binding site (TFBS) search tools to scan unannotated sequences for TFBSs, particularly when searching for lesser-known TFs or TFs in organisms for which ChIP-seq data are unavailable. The sequence analysis often involves multiple steps such as TF model collection, promoter sequence retrieval, and visualization; thus, several different tools are required. We have developed a novel integrated web tool named LASAGNA-Search that allows users to perform TFBS searches without leaving the web site. LASAGNA-Search uses the LASAGNA (Length-Aware Site Alignment Guided by Nucleotide Association) algorithm for TFBS alignment. Important features of LASAGNA-Search include (i) acceptance of unaligned variable-length TFBSs, (ii) a collection of 1726 TF models, (iii) automatic promoter sequence retrieval, (iv) visualization in the UCSC Genome Browser, and (v) gene regulatory network inference and visualization based on binding specificities. LASAGNA-Search is freely available at http://biogrid.engr.uconn.edu/lasagna_search/. PMID:23599922
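LASAGNA's length-aware alignment is not reproduced here, but the core operation of any TFBS search tool is scoring a sliding window of sequence against a TF model. A minimal log-odds position-weight-matrix scan, using an invented 3-bp motif purely for illustration (real tools such as LASAGNA-Search use curated TF models and more elaborate alignment):

```python
import math

# Minimal sliding-window TFBS scan with a log-odds position weight matrix.
# The motif counts below are invented for illustration only.
counts = {  # hypothetical 3-bp "ACG" motif, one count dict per position
    0: {"A": 8, "C": 1, "G": 1, "T": 1},
    1: {"A": 1, "C": 8, "G": 1, "T": 1},
    2: {"A": 1, "C": 1, "G": 8, "T": 1},
}
BACKGROUND = 0.25  # uniform background base frequency

def pwm_score(window):
    """Sum of per-position log-odds (motif probability vs. background)."""
    score = 0.0
    for i, base in enumerate(window):
        total = sum(counts[i].values())
        p = counts[i][base] / total
        score += math.log2(p / BACKGROUND)
    return score

def scan(seq):
    """Return (best_score, offset) of the motif over all windows of seq."""
    return max((pwm_score(seq[i:i + 3]), i) for i in range(len(seq) - 2))

score, pos = scan("TTACGTT")  # best window is "ACG" at offset 2
```

Genome-scale tools add the reverse strand, p-value thresholds, and pseudocounts, but the window-scoring loop is the same idea.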

  11. In search of the emotional face: anger versus happiness superiority in visual search.

    PubMed

    Savage, Ruth A; Lipp, Ottmar V; Craig, Belinda M; Becker, Stefanie I; Horstmann, Gernot

    2013-08-01

    Previous research has provided inconsistent results regarding visual search for emotional faces, yielding evidence for either anger superiority (i.e., more efficient search for angry faces) or happiness superiority effects (i.e., more efficient search for happy faces), suggesting that these results do not reflect on emotional expression, but on emotion (un-)related low-level perceptual features. The present study investigated possible factors mediating anger/happiness superiority effects; specifically search strategy (fixed vs. variable target search; Experiment 1), stimulus choice (Nimstim database vs. Ekman & Friesen database; Experiments 1 and 2), and emotional intensity (Experiment 3 and 3a). Angry faces were found faster than happy faces regardless of search strategy using faces from the Nimstim database (Experiment 1). By contrast, a happiness superiority effect was evident in Experiment 2 when using faces from the Ekman and Friesen database. Experiment 3 employed angry, happy, and exuberant expressions (Nimstim database) and yielded anger and happiness superiority effects, respectively, highlighting the importance of the choice of stimulus materials. Ratings of the stimulus materials collected in Experiment 3a indicate that differences in perceived emotional intensity, pleasantness, or arousal do not account for differences in search efficiency. Across three studies, the current investigation indicates that prior reports of anger or happiness superiority effects in visual search are likely to reflect on low-level visual features associated with the stimulus materials used, rather than on emotion. PMID:23527503

  12. How do Interruptions Impact Nurses’ Visual Scanning Patterns When Using Barcode Medication Administration Systems?

    PubMed Central

    He, Ze; Marquard, Jenna L.; Henneman, Philip L.

    2014-01-01

    While barcode medication administration (BCMA) systems have the potential to reduce medication errors, they may introduce errors, side effects, and hazards into the medication administration process. Studies of BCMA systems should therefore consider the interrelated nature of health information technology (IT) use and sociotechnical systems. We aimed to understand how the introduction of interruptions into the BCMA process impacts nurses’ visual scanning patterns, a proxy for one component of cognitive processing. We used an eye tracker to record nurses’ visual scanning patterns while administering a medication using BCMA. Nurses either performed the BCMA process in a controlled setting with no interruptions (n=25) or in a real clinical setting with interruptions (n=21). By comparing the visual scanning patterns between the two groups, we found that nurses in the interruptive environment identified less task-related information in a given period of time, and engaged in more information searching than information processing. PMID:25954449

  13. Visual cluster analysis and pattern recognition methods

    DOEpatents

    Osbourn, Gordon Cecil; Martinez, Rubel Francisco

    2001-01-01

    A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques.
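The patent's key idea, joining two points only when no third point falls inside a template-defined region of influence, can be sketched with a simple stand-in template. Here the region for a pair is the union of two circles of radius half the pair distance centered on each endpoint; this template is chosen for illustration and is not the patented VERI template:

```python
# Sketch of region-of-influence clustering: connect two points when no
# third point lies inside the pair's region of influence, then take
# connected components as clusters. The circular template here is an
# illustrative stand-in, not the patented VERI template.

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def in_region(p, q, r):
    """Is r inside either circle of radius |pq|/2 centered at p or q?"""
    rad2 = dist2(p, q) / 4.0
    return dist2(r, p) < rad2 or dist2(r, q) < rad2

def clusters(points):
    n = len(points)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if not any(in_region(points[i], points[j], points[k])
                       for k in range(n) if k not in (i, j)):
                adj[i].add(j)
                adj[j].add(i)
    # connected components via depth-first search
    seen, comps = set(), []
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            seen.add(v)
            stack.extend(adj[v] - comp)
        comps.append(comp)
    return comps

pts = [(0, 0), (1, 0), (0.5, 0.5), (10, 10), (11, 10)]
groups = clusters(pts)  # two clusters: the tight trio and the far pair
```

Long edges between distant groups are pruned because nearby points fall inside the large region they induce, which is how a region-of-influence template separates clusters without a distance threshold parameter. The patent's neighboring-approximation methods reduce the O(n³) pair/witness loop above.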

  14. Supporting the Process of Exploring and Interpreting Space–Time Multivariate Patterns: The Visual Inquiry Toolkit

    PubMed Central

    Chen, Jin; MacEachren, Alan M.; Guo, Diansheng

    2009-01-01

    While many data sets carry geographic and temporal references, our ability to analyze these datasets lags behind our ability to collect them because of the challenges posed by both data complexity and tool scalability issues. This study develops a visual analytics approach that leverages human expertise with visual, computational, and cartographic methods to support the application of visual analytics to relatively large spatio-temporal, multivariate data sets. We develop and apply a variety of methods for data clustering, pattern searching, information visualization, and synthesis. By combining both human and machine strengths, this approach has a better chance to discover novel, relevant, and potentially useful information that is difficult to detect by any of the methods used in isolation. We demonstrate the effectiveness of the approach by applying the Visual Inquiry Toolkit we developed to analyze a data set containing geographically referenced, time-varying and multivariate data for U.S. technology industries. PMID:19960096

  15. Searching for pulsars using image pattern recognition

    SciTech Connect

    Zhu, W. W.; Berndsen, A.; Madsen, E. C.; Tan, M.; Stairs, I. H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; Cohen, S.; Dartez, L. P.; Lunsford, G.; Martinez, J. G.; Mata, A.; Ransom, S. M.; Banaszak, S.; Biwer, C. M.; Flanigan, J.; Rohr, M. E-mail: berndsen@phas.ubc.ca; and others

    2014-02-01

    In the modern era of big data, many fields of astronomy are generating huge volumes of data, the analysis of which can sometimes be the limiting factor in research. Fortunately, computer scientists have developed powerful data-mining techniques that can be applied to various fields. In this paper, we present a novel artificial intelligence (AI) program that identifies pulsars from recent surveys by using image pattern recognition with deep neural nets—the PICS (Pulsar Image-based Classification System) AI. The AI mimics human experts and distinguishes pulsars from noise and interference by looking for patterns from candidate plots. Different from other pulsar selection programs that search for expected patterns, the PICS AI is taught the salient features of different pulsars from a set of human-labeled candidates through machine learning. The training candidates are collected from the Pulsar Arecibo L-band Feed Array (PALFA) survey. The information from each pulsar candidate is synthesized in four diagnostic plots, which consist of image data with up to thousands of pixels. The AI takes these data from each candidate as its input and uses thousands of such candidates to train its ∼9000 neurons. The deep neural networks in this AI system grant it superior ability to recognize various types of pulsars as well as their harmonic signals. The trained AI's performance has been validated with a large set of candidates from a different pulsar survey, the Green Bank North Celestial Cap survey. In this completely independent test, the PICS ranked 264 out of 277 pulsar-related candidates, including all 56 previously known pulsars and 208 of their harmonics, in the top 961 (1%) of 90,008 test candidates, missing only 13 harmonics. The first non-pulsar candidate appears at rank 187, following 45 pulsars and 141 harmonics. In other words, 100% of the pulsars were ranked in the top 1% of all candidates, while 80% were ranked higher than any noise or interference. The

  16. Searching for Pulsars Using Image Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Zhu, W. W.; Berndsen, A.; Madsen, E. C.; Tan, M.; Stairs, I. H.; Brazier, A.; Lazarus, P.; Lynch, R.; Scholz, P.; Stovall, K.; Ransom, S. M.; Banaszak, S.; Biwer, C. M.; Cohen, S.; Dartez, L. P.; Flanigan, J.; Lunsford, G.; Martinez, J. G.; Mata, A.; Rohr, M.; Walker, A.; Allen, B.; Bhat, N. D. R.; Bogdanov, S.; Camilo, F.; Chatterjee, S.; Cordes, J. M.; Crawford, F.; Deneva, J. S.; Desvignes, G.; Ferdman, R. D.; Freire, P. C. C.; Hessels, J. W. T.; Jenet, F. A.; Kaplan, D. L.; Kaspi, V. M.; Knispel, B.; Lee, K. J.; van Leeuwen, J.; Lyne, A. G.; McLaughlin, M. A.; Siemens, X.; Spitler, L. G.; Venkataraman, A.

    2014-02-01

    In the modern era of big data, many fields of astronomy are generating huge volumes of data, the analysis of which can sometimes be the limiting factor in research. Fortunately, computer scientists have developed powerful data-mining techniques that can be applied to various fields. In this paper, we present a novel artificial intelligence (AI) program that identifies pulsars from recent surveys by using image pattern recognition with deep neural nets—the PICS (Pulsar Image-based Classification System) AI. The AI mimics human experts and distinguishes pulsars from noise and interference by looking for patterns from candidate plots. Different from other pulsar selection programs that search for expected patterns, the PICS AI is taught the salient features of different pulsars from a set of human-labeled candidates through machine learning. The training candidates are collected from the Pulsar Arecibo L-band Feed Array (PALFA) survey. The information from each pulsar candidate is synthesized in four diagnostic plots, which consist of image data with up to thousands of pixels. The AI takes these data from each candidate as its input and uses thousands of such candidates to train its ~9000 neurons. The deep neural networks in this AI system grant it superior ability to recognize various types of pulsars as well as their harmonic signals. The trained AI's performance has been validated with a large set of candidates from a different pulsar survey, the Green Bank North Celestial Cap survey. In this completely independent test, the PICS ranked 264 out of 277 pulsar-related candidates, including all 56 previously known pulsars and 208 of their harmonics, in the top 961 (1%) of 90,008 test candidates, missing only 13 harmonics. The first non-pulsar candidate appears at rank 187, following 45 pulsars and 141 harmonics. In other words, 100% of the pulsars were ranked in the top 1% of all candidates, while 80% were ranked higher than any noise or interference. The
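PICS itself uses deep neural networks trained on thousands of labeled candidates; as a deliberately tiny stand-in for the same train-on-labeled-examples idea, the sketch below fits a single logistic neuron to made-up "diagnostic plot" summary vectors in which pulsar-like candidates have a bright central bin. Every number here is invented for illustration:

```python
import math

# Toy stand-in for learned candidate classification: one logistic neuron
# trained by gradient descent on hypothetical 4-value plot summaries.
# Pulsar-like examples (label 1.0) have a bright central bin.
pulsars = [[0.1, 0.9, 0.8, 0.1], [0.0, 1.0, 0.7, 0.2]]
noise = [[0.5, 0.4, 0.5, 0.6], [0.3, 0.2, 0.3, 0.4]]
data = [(x, 1.0) for x in pulsars] + [(x, 0.0) for x in noise]

w = [0.0] * 4  # weights, one per input value
b = 0.0        # bias
LR = 0.5       # learning rate

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid score in (0, 1)

for _ in range(2000):  # plain gradient descent on cross-entropy loss
    for x, t in data:
        g = predict(x) - t  # dLoss/dz for the logistic cross-entropy
        for i in range(4):
            w[i] -= LR * g * x[i]
        b -= LR * g

scores = [predict(x) for x, _ in data]  # pulsar scores should exceed noise scores
```

A real system replaces the 4-value summary with full pixel arrays and the single neuron with deep convolutional layers, but the supervised-ranking workflow (score every candidate, inspect the top fraction) is the same shape as the PICS evaluation described above.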

  17. Visual search and the N2pc in children.

    PubMed

    Couperus, Jane W; Quirk, Colin

    2015-04-01

    While there is growing understanding of visual selective attention in children, some aspects such as selection in the presence of distractors are not well understood. Adult studies suggest that when presented with a visual search task, an enhanced negativity is seen beginning around 200 ms (the N2pc) that reflects selection of a target item among distractors. However, it is not known if similar selective attention-related activity is seen in children during visual search. This study was designed to investigate the presence of the N2pc in children. Nineteen children (ages 9-12 years) and 21 adults (ages 18-22 years) completed a visual search task in which they were asked to attend to a fixation surrounded by both a target and a distractor stimulus. Three types of displays were analyzed at parietal electrodes P7 and P8; lateral target/lateral distractor, lateral target/midline distractor, and midline target/lateral distractor. Both adults and children showed a significant increased negativity contralateral compared to ipsilateral to the target (reflected in the N2pc) in both displays with a lateral target while no such effect was seen in displays with a midline target. This suggests that children also utilized additional resources to select a target item when distractors are present. These findings demonstrate that the N2pc can be used as a marker of attentional object selection in children. PMID:25678274

  18. Animation of orthogonal texture patterns for vector field visualization.

    PubMed

    Bachthaler, Sven; Weiskopf, Daniel

    2008-01-01

    This paper introduces orthogonal vector field visualization on 2D manifolds: a representation by lines that are perpendicular to the input vector field. Line patterns are generated by line integral convolution (LIC). This visualization is combined with animation based on motion along the vector field. This decoupling of the line direction from the direction of animation allows us to choose the spatial frequencies along the direction of motion independently from the length scales along the LIC line patterns. Vision research indicates that local motion detectors are tuned to certain spatial frequencies of textures, and the above decoupling enables us to generate spatial frequencies optimized for motion perception. Furthermore, we introduce a combined visualization that employs orthogonal LIC patterns together with conventional, tangential streamline LIC patterns in order to benefit from the advantages of these two visualization approaches. In addition, a filtering process is described to achieve a consistent and temporally coherent animation of orthogonal vector field visualization. Different filter kernels and filter methods are compared and discussed in terms of visualization quality and speed. We present respective visualization algorithms for 2D planar vector fields and tangential vector fields on curved surfaces, and demonstrate that those algorithms lend themselves to efficient and interactive GPU implementations. PMID:18467751
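
    The underlying LIC idea can be sketched in a hedged, minimal planar form (not the paper's filtered, GPU-based algorithm): for each pixel, average a noise texture along the local streamline. Rotating the field by 90 degrees ((u, v) -> (-v, u)) before calling it gives the orthogonal-pattern variant the paper introduces.

```python
import random

def lic(field, noise, length=8):
    """Minimal line integral convolution on a 2D grid: for each pixel,
    trace the streamline forward and backward with unit steps and
    average the noise texture sampled along it."""
    h, w = len(noise), len(noise[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):       # integrate both directions
                px, py = x + 0.5, y + 0.5
                for _ in range(length):
                    u, v = field(px, py)
                    mag = (u * u + v * v) ** 0.5 or 1.0
                    px += sign * u / mag   # unit step along the field
                    py += sign * v / mag
                    ix, iy = int(px), int(py)
                    if not (0 <= ix < w and 0 <= iy < h):
                        break
                    total += noise[iy][ix]
                    count += 1
            out[y][x] = total / count if count else noise[y][x]
    return out

random.seed(1)
noise = [[random.random() for _ in range(32)] for _ in range(32)]
horizontal = lambda x, y: (1.0, 0.0)       # uniform rightward flow
smoothed = lic(horizontal, noise)
```

    For a uniform horizontal field the output is strongly correlated along rows, which is exactly the streak pattern LIC is known for.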

  19. The Efficiency of a Visual Skills Training Program on Visual Search Performance

    PubMed Central

    Krzepota, Justyna; Zwierko, Teresa; Puchalska-Niedbał, Lidia; Markiewicz, Mikołaj; Florkiewicz, Beata; Lubiński, Wojciech

    2015-01-01

    In this study, we conducted an experiment in which we analyzed the possibility of developing visual skills through specifically targeted training of visual search. The aim of our study was to investigate whether, for how long, and to what extent a training program for visual functions could improve visual search. The study involved 24 healthy students from Szczecin University who were divided into two groups: experimental (12) and control (12). In addition to the regular sports and recreational activities of the curriculum, the subjects of the experimental group also participated in an 8-week visual function training program, 3 times a week for 45 min. The Signal Test of the Vienna Test System was performed four times: before entering the study, after the first 4 weeks of the experiment, immediately after its completion, and 4 weeks after the study terminated. The results of this experiment showed that the 8-week perceptual training program significantly differentiated the time course of visual detection times. For the changes in visual detection time, the first factor, Group, was significant as a main effect (F(1,22)=6.49, p<0.05), as was the second factor, Training (F(3,66)=5.06, p<0.01). The interaction between the two factors (Group vs. Training) of perceptual training was F(3,66)=6.82 (p<0.001). Similarly, for the number of correct reactions, there was a main effect of the Group factor (F(1,22)=23.40, p<0.001), a main effect of the Training factor (F(3,66)=11.60, p<0.001), and a significant interaction between factors (Group vs. Training) (F(3,66)=10.33, p<0.001). Our study suggests that 8-week training of visual functions can improve visual search performance. PMID:26240666

  20. Visual Object Pattern Separation Varies in Older Adults

    ERIC Educational Resources Information Center

    Holden, Heather M.; Toner, Chelsea; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2013-01-01

    Young and nondemented older adults completed a visual object continuous recognition memory task in which some stimuli (lures) were similar but not identical to previously presented objects. The lures were hypothesized to result in increased interference and increased pattern separation demand. To examine variability in object pattern separation…

  1. Sequential pattern data mining and visualization

    DOEpatents

    Wong, Pak Chung; Jurrus, Elizabeth R.; Cowley, Wendy E.; Foote, Harlan P.; Thomas, James J.

    2011-12-06

    One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).

  2. Sequential pattern data mining and visualization

    DOEpatents

    Wong, Pak Chung; Jurrus, Elizabeth R.; Cowley, Wendy E.; Foote, Harlan P.; Thomas, James J.

    2009-05-26

    One or more processors (22) are operated to extract a number of different event identifiers therefrom. These processors (22) are further operable to determine a number of display locations, each representative of one of the different identifiers and a corresponding time. The display locations are grouped into sets, each corresponding to a different one of several event sequences (330a, 330b, 330c, 330d, 330e). An output is generated corresponding to a visualization (320) of the event sequences (330a, 330b, 330c, 330d, 330e).

  3. Visual working memory simultaneously guides facilitation and inhibition during visual search.

    PubMed

    Dube, Blaire; Basciano, April; Emrich, Stephen M; Al-Aidroos, Naseem

    2016-07-01

    During visual search, visual working memory (VWM) supports the guidance of attention in two ways: It stores the identity of the search target, facilitating the selection of matching stimuli in the search array, and it maintains a record of the distractors processed during search so that they can be inhibited. In two experiments, we investigated whether the full contents of VWM can be used to support both of these abilities simultaneously. In Experiment 1, participants completed a preview search task in which (a) a subset of search distractors appeared before the remainder of the search items, affording participants the opportunity to inhibit them, and (b) the search target varied from trial to trial, requiring the search target template to be maintained in VWM. We observed the established signature of VWM-based inhibition (reduced ability to ignore previewed distractors when the number of distractors exceeds VWM's capacity), suggesting that VWM can serve this role while also representing the target template. In Experiment 2, we replicated Experiment 1, but added to the search displays a singleton distractor that sometimes matched the color (a task-irrelevant feature) of the search target, to evaluate capture. We again observed the signature of VWM-based preview inhibition along with attentional capture by (and, thus, facilitation of) singletons matching the target template. These findings indicate that more than one VWM representation can bias attention at a time, and that these representations can separately affect selection through either facilitation or inhibition, placing constraints on existing models of the VWM-based guidance of attention. PMID:27055458

  4. Impact of patient photos on visual search during radiograph interpretation

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.; Applegate, Kimberly; DeSimone, Ariadne; Chung, Alex; Tridandanpani, Srini

    2016-03-01

    Evidence suggests that including patient photographs during interpretation may increase detection of mislabeled medical imaging studies. This study examined how the inclusion of photos impacts visual search. Ten radiologists viewed 21 chest radiographs with and without a photo of the patient while their search was recorded. Their task was to note tube/line placement. Eye-tracking data revealed that the presence of the photo reduced the number of fixations and total dwell time on the chest image, as a result of periodically looking at the photo. Average preference for having photos was 6.10 on a 0-10 scale, and the neck and chest were the preferred areas.

  5. Perspective: n-type oxide thermoelectrics via visual search strategies

    NASA Astrophysics Data System (ADS)

    Xing, Guangzong; Sun, Jifeng; Ong, Khuong P.; Fan, Xiaofeng; Zheng, Weitao; Singh, David J.

    2016-05-01

    We discuss and present search strategies for finding new thermoelectric compositions based on first principles electronic structure and transport calculations. We illustrate them by application to a search for potential n-type oxide thermoelectric materials. This includes a screen based on visualization of electronic energy isosurfaces. We report compounds that show potential as thermoelectric materials along with detailed properties, including SrTiO3, which is a known thermoelectric, and appropriately doped KNbO3 and rutile TiO2.

  6. Information-Limited Parallel Processing in Difficult Heterogeneous Covert Visual Search

    ERIC Educational Resources Information Center

    Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

    2010-01-01

    Difficult visual search is often attributed to time-limited serial attention operations, although neural computations in the early visual system are parallel. Using probabilistic search models (Dosher, Han, & Lu, 2004) and a full time-course analysis of the dynamics of covert visual search, we distinguish unlimited capacity parallel versus serial…

  7. Memory for Where, but Not What, Is Used during Visual Search

    ERIC Educational Resources Information Center

    Beck, Melissa R.; Peterson, Matthew S.; Vomela, Miroslava

    2006-01-01

    Although the role of memory in visual search is debatable, most researchers agree with a limited-capacity model of memory in visual search. The authors demonstrate the role of memory by replicating previous findings showing that visual search is biased away from old items (previously examined items) and toward new items (nonexamined items).…

  8. Similarity preserving snippet-based visualization of web search results.

    PubMed

    Gomez-Nieto, Erick; San Roman, Frizzi; Pagliosa, Paulo; Casaca, Wallace; Helou, Elias S; de Oliveira, Maria Cristina F; Nonato, Luis Gustavo

    2014-03-01

    Internet users are very familiar with the results of a search query displayed as a ranked list of snippets. Each textual snippet shows a content summary of the referred document (or webpage) and a link to it. This display has many advantages, for example, it affords easy navigation and is straightforward to interpret. Nonetheless, any user of search engines could possibly report some experience of disappointment with this metaphor. Indeed, it has limitations in particular situations, as it fails to provide an overview of the document collection retrieved. Moreover, depending on the nature of the query (it may be too general, ambiguous, or ill expressed), the desired information may be poorly ranked, or results may contemplate varied topics. Several search tasks would be easier if users were shown an overview of the returned documents, organized so as to reflect how related they are content-wise. We propose a visualization technique to display the results of web queries aimed at overcoming such limitations. It combines the neighborhood preservation capability of multidimensional projections with the familiar snippet-based representation by employing a multidimensional projection to derive two-dimensional layouts of the query search results that preserve text similarity relations, or neighborhoods. Similarity is computed by applying the cosine similarity over a "bag-of-words" vector representation of the collection, built from the snippets. If the snippets are displayed directly according to the derived layout, they will overlap considerably, producing a poor visualization. We overcome this problem by defining an energy functional that considers both the overlapping among snippets and the preservation of the neighborhood structure as given in the projected layout. Minimizing this energy functional provides a neighborhood-preserving two-dimensional arrangement of the textual snippets with minimum overlap. The resulting visualization conveys both a global
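
    The similarity measure the abstract describes, cosine similarity over bag-of-words vectors, is easy to sketch. The snippets below are made up for illustration; a real system would also apply stop-word removal and TF-IDF weighting, which are omitted here.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between the bag-of-words vectors of two snippets."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

snippets = [
    "visual search eye tracking study",
    "eye tracking during visual search",
    "thermoelectric oxide materials screening",
]
# Pairwise similarities like these drive the neighborhood-preserving
# 2-D layout; related snippets should score higher than unrelated ones.
sim_related = cosine(snippets[0], snippets[1])
sim_unrelated = cosine(snippets[0], snippets[2])
```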

  9. Functional Connectivity Patterns of Visual Cortex Reflect its Anatomical Organization.

    PubMed

    Genç, Erhan; Schölvinck, Marieke Louise; Bergmann, Johanna; Singer, Wolf; Kohler, Axel

    2016-09-01

    The brain is continuously active, even without external input or task demands. This so-called resting-state activity exhibits a highly specific spatio-temporal organization. However, how exactly these activity patterns map onto the anatomical and functional architecture of the brain is still unclear. We addressed this question in the human visual cortex. We determined the representation of the visual field in visual cortical areas of 44 subjects using fMRI and examined resting-state correlations between these areas along the visual hierarchy, their dorsal and ventral segments, and between subregions representing foveal versus peripheral parts of the visual field. We found that retinotopically corresponding regions, particularly those representing peripheral visual fields, exhibit strong correlations. V1 displayed strong internal correlations between its dorsal and ventral segments and the highest correlation with LGN compared with other visual areas. In contrast, V2 and V3 showed weaker correlations with LGN and stronger between-area correlations, as well as with V4 and hMT+. Interhemispheric correlations between homologous areas were especially strong. These correlation patterns were robust over time and only marginally altered under task conditions. These results indicate that resting-state fMRI activity closely reflects the anatomical organization of the visual cortex both with respect to retinotopy and hierarchy. PMID:26271111

  10. Reading and Visual Search: A Developmental Study in Normal Children

    PubMed Central

    Seassau, Magali; Bucci, Maria-Pia

    2013-01-01

    Studies dealing with developmental aspects of binocular eye movement behaviour during reading are scarce. In this study we have explored binocular strategies during reading and during visual search tasks in a large population of normal young readers. Binocular eye movements were recorded using an infrared video-oculography system in sixty-nine children (aged 6 to 15) and in a group of 10 adults (aged 24 to 39). The main findings are (i) in both tasks the number of progressive saccades (to the right) and regressive saccades (to the left) decreases with age; (ii) the amplitude of progressive saccades increases with age in the reading task only; (iii) in both tasks, the duration of fixations as well as the total duration of the task decreases with age; (iv) in both tasks, the amplitude of disconjugacy recorded during and after the saccades decreases with age; (v) children are significantly more accurate in reading than in visual search after 10 years of age. The data reported here confirm and expand previous studies on children's reading. The new finding is that younger children show poorer coordination than adults, both while reading and while performing a visual search task. Both reading skills and binocular saccade coordination improve with age, and children reach a similar level to adults after the age of 10. This finding is most likely related to the fact that learning mechanisms responsible for saccade yoking develop during childhood until adolescence. PMID:23894627

  11. Top-down guidance in visual search for facial expressions.

    PubMed

    Hahn, Sowon; Gronlund, Scott D

    2007-02-01

    Using a visual search paradigm, we investigated how a top-down goal modified attentional bias for threatening facial expressions. In two experiments, participants searched for a facial expression either based on stimulus characteristics or a top-down goal. In Experiment 1 participants searched for a discrepant facial expression in a homogenous crowd of faces. Consistent with previous research, we obtained a shallower response time (RT) slope when the target face was angry than when it was happy. In Experiment 2, participants searched for a specific type of facial expression (allowing a top-down goal). When the display included a target, we found a shallower RT slope for the angry than for the happy face search. However, when an angry or happy face was present in the display in opposition to the task goal, we obtained equivalent RT slopes, suggesting that the mere presence of an angry face in opposition to the task goal did not support the well-known angry face superiority effect. Furthermore, RT distribution analyses supported the special status of an angry face only when it was combined with the top-down goal. On the basis of these results, we suggest that a threatening facial expression may guide attention as a high-priority stimulus in the absence of a specific goal; however, in the presence of a specific goal, the efficiency of facial expression search is dependent on the combined influence of a top-down goal and the stimulus characteristics. PMID:17546747

  12. Intertrial Temporal Contextual Cuing: Association across Successive Visual Search Trials Guides Spatial Attention

    ERIC Educational Resources Information Center

    Ono, Fuminori; Jiang, Yuhong; Kawahara, Jun-ichiro

    2005-01-01

    Contextual cuing refers to the facilitation of performance in visual search due to the repetition of the same displays. Whereas previous studies have focused on contextual cuing within single-search trials, this study tested whether 1 trial facilitates visual search of the next trial. Participants searched for a T among Ls. In the training phase,…

  13. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  14. Point-of-gaze analysis reveals visual search strategies

    NASA Astrophysics Data System (ADS)

    Rajashekar, Umesh; Cormack, Lawrence K.; Bovik, Alan C.

    2004-06-01

    Seemingly complex tasks like visual search can be analyzed using a cognition-free, bottom-up framework. We sought to reveal strategies used by observers in visual search tasks using accurate eye tracking and image analysis at point of gaze. Observers were instructed to search for simple geometric targets embedded in 1/f noise. By analyzing the stimulus at the point of gaze using the classification image (CI) paradigm, we discovered CI templates that indeed resembled the target. No such structure emerged for a random-searcher. We demonstrate, qualitatively and quantitatively, that these CI templates are useful in predicting stimulus regions that draw human fixations in search tasks. Filtering a 1/f noise stimulus with a CI results in a 'fixation prediction map'. A qualitative evaluation of the prediction was obtained by overlaying k-means clusters of observers' fixations on the prediction map. The fixations clustered around the local maxima in the prediction map. To obtain a quantitative comparison, we computed the Kullback-Leibler distance between the recorded fixations and the prediction. Using random-searcher CIs in Monte Carlo simulations, a distribution of this distance was obtained. The z-scores for the human CIs and the original target were -9.70 and -9.37 respectively indicating that even in noisy stimuli, observers deploy their fixations efficiently to likely targets rather than casting them randomly hoping to fortuitously find the target.
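
    The quantitative comparison the abstract mentions, a Kullback-Leibler distance between recorded fixations and a prediction map, can be sketched minimally. The 4-region histograms below are hypothetical; the study works with full 2-D maps, but the computation is the same per bin.

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """Kullback-Leibler distance D(P||Q) between two discrete
    distributions, e.g. a histogram of recorded fixations P versus a
    CI-based fixation prediction map Q; eps regularizes empty bins."""
    p = [v + eps for v in p]
    q = [v + eps for v in q]
    zp, zq = sum(p), sum(q)
    return sum((pi / zp) * math.log((pi / zp) / (qi / zq))
               for pi, qi in zip(p, q))

# Hypothetical 4-region example: observed fixations pile up in region 0.
fixations = [8, 1, 0, 1]
good_map = [0.7, 0.1, 0.1, 0.1]      # prediction matching the fixations
flat_map = [0.25, 0.25, 0.25, 0.25]  # uninformative prediction
```

    A lower distance to the fixation histogram marks the better prediction map, which is the sense in which the study compares human CIs against random searchers.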

  15. Automatic guidance of attention during real-world visual search

    PubMed Central

    Seidl-Rathkopf, Katharina N.; Turk-Browne, Nicholas B.; Kastner, Sabine

    2015-01-01

    Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, as the features, locations, and times of appearance of relevant objects are often not known in advance. A mechanism by which attention is automatically biased toward information that is potentially relevant may thus be helpful. Here we tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of non-matching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty. PMID:25898897

  16. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge. PMID:26529685
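
    The aggregation step described, merging pose-state sequences into a tree diagram, is essentially a counted prefix tree. The sketch below shows that core idea with made-up pose labels; it omits MotionFlow's interactive clustering and visualization layers.

```python
def aggregate(sequences):
    """Aggregate pose-state sequences into a prefix tree: sequences
    sharing a prefix merge, and each node counts how many sequences
    pass through it (the flow width in a MotionFlow-style diagram)."""
    root = {"count": 0, "children": {}}
    for seq in sequences:
        node = root
        node["count"] += 1
        for pose in seq:
            node = node["children"].setdefault(
                pose, {"count": 0, "children": {}})
            node["count"] += 1
    return root

# Hypothetical pose sequences from three motion recordings.
sequences = [
    ["stand", "raise", "wave"],
    ["stand", "raise", "point"],
    ["sit", "stand"],
]
tree = aggregate(sequences)
```

    The two "stand -> raise" recordings merge into one branch of width 2 that splits only at the final pose, which is the overview effect the system aims for.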

  17. Visual search strategies and decision making in baseball batting.

    PubMed

    Takeuchi, Takayuki; Inomata, Kimihiro

    2009-06-01

    The goal was to examine the differences in visual search strategies between expert and nonexpert baseball batters during the preparatory phase of a pitcher's pitching and accuracy and timing of swing judgments during the ball's trajectory. 14 members of a college team (Expert group), and graduate and college students (Nonexpert group), were asked to observe 10 pitches thrown by a pitcher and respond by pushing a button attached to a bat when they thought the bat should be swung to meet the ball (swing judgment). Their eye movements, accuracy, and the timing of the swing judgment were measured. The Expert group shifted their point of observation from the proximal part of the body such as the head, chest, or trunk of the pitcher to the pitching arm and the release point before the pitcher released a ball, while the gaze point of the Nonexpert group visually focused on the head and the face. The accuracy in swing judgments of the Expert group was significantly higher, and the timing of their swing judgments was significantly earlier. Expert baseball batters used visual search strategies to gaze at specific cues (the pitching arm of the pitcher) and were more accurate and relatively quicker at decision making than Nonexpert batters. PMID:19725330

  18. Perceptual similarity of visual patterns predicts dynamic neural activation patterns measured with MEG.

    PubMed

    Wardle, Susan G; Kriegeskorte, Nikolaus; Grootswagers, Tijl; Khaligh-Razavi, Seyed-Mahdi; Carlson, Thomas A

    2016-05-15

    Perceptual similarity is a cognitive judgment that represents the end-stage of a complex cascade of hierarchical processing throughout visual cortex. Previous studies have shown a correspondence between the similarity of coarse-scale fMRI activation patterns and the perceived similarity of visual stimuli, suggesting that visual objects that appear similar also share similar underlying patterns of neural activation. Here we explore the temporal relationship between the human brain's time-varying representation of visual patterns and behavioral judgments of perceptual similarity. The visual stimuli were abstract patterns constructed from identical perceptual units (oriented Gabor patches) so that each pattern had a unique global form or perceptual 'Gestalt'. The visual stimuli were decodable from evoked neural activation patterns measured with magnetoencephalography (MEG); however, stimuli differed in the similarity of their neural representation as estimated by differences in decodability. Early after stimulus onset (from 50 ms), a model based on retinotopic organization predicted the representational similarity of the visual stimuli. Following the peak correlation between the retinotopic model and neural data at 80 ms, the neural representations quickly evolved so that retinotopy no longer provided a sufficient account of the brain's time-varying representation of the stimuli. Overall the strongest predictor of the brain's representation was a model based on human judgments of perceptual similarity, which reached the limits of the maximum correlation with the neural data defined by the 'noise ceiling'. Our results show that large-scale brain activation patterns contain a neural signature for the perceptual Gestalt of composite visual features, and demonstrate a strong correspondence between perception and complex patterns of brain activity. PMID:26899210
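
    Model-versus-neural comparisons of this kind are commonly made by rank-correlating two dissimilarity structures (the flattened upper triangles of representational dissimilarity matrices). A minimal sketch follows; the dissimilarity values are made up, and the simple rank function assumes no tied values.

```python
def spearman(x, y):
    """Spearman rank correlation between two dissimilarity vectors,
    e.g. a perceptual-judgment RDM versus a neural (decodability) RDM.
    Assumes no ties; a full implementation would average tied ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical pairwise dissimilarities for 4 stimuli (upper triangle).
perceptual = [0.1, 0.9, 0.8, 0.7, 0.6, 0.2]
neural = [0.2, 0.8, 0.9, 0.6, 0.7, 0.1]   # similar ordering trend
r = spearman(perceptual, neural)
```

    A model correlation approaching the noise ceiling, as reported for the perceptual-similarity model, means the model orders the stimulus pairs about as well as the data's reliability allows.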

  19. Differences between fovea and parafovea in visual search processes.

    PubMed

    Fiorentini, A

    1989-01-01

    Visual objects that differ from the surroundings in some simple feature, e.g. colour or line orientation, or in some shape parameters ("textons"; Julesz, 1986) are believed to be detected in parallel from different locations in the visual field without requiring a serial search process. Tachistoscopic presentations of textures were used to compare the time course of search processes in the fovea and parafovea. Detection of targets differing in a simple feature (line orientation or line crossings) from the surrounding elements was found to have a time course typical of parallel processing for coarse textures extending into the parafovea. For fine textures confined to the fovea the time course was suggestive of a serial search process even for these textons. These findings are consistent with the hypothesis that parallel processing of lines or crossings is subserved by a coarse network of detectors with relatively large receptive fields and low resolution. For the counting of coloured spots in a background of a different colour the parafovea has the same time requirements as the fovea. PMID:2617862

  20. Recognizing patterns of visual field loss using unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Yousefi, Siamak; Goldbaum, Michael H.; Zangwill, Linda M.; Medeiros, Felipe A.; Bowd, Christopher

    2014-03-01

    Glaucoma is a potentially blinding optic neuropathy that results in a decrease in visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT), which tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used a Gaussian mixture model with expectation maximization (GEM, where EM estimates the model parameters) to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center, clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into 3 clusters. Cluster 1 contained 94% normal fields (94% specificity), and clusters 2 and 3 combined contained 77% abnormal fields (77% sensitivity). For clusters 1, 2, and 3 the optimal numbers of PCA-identified axes were 2, 2, and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucoma eyes and identified familiar glaucomatous patterns of loss.
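
    The core of the GEM step, expectation maximization for a Gaussian mixture, can be sketched in one dimension. This is a hedged toy version with synthetic data, not the study's 52-dimensional FDT model.

```python
import math
import random

def em_gmm_1d(data, iters=50):
    """Bare-bones EM for a two-component 1-D Gaussian mixture.
    E-step: compute each component's responsibility for each point.
    M-step: re-estimate mixture weights, means, and variances."""
    mu = [min(data), max(data)]            # crude initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for x in data:                     # E-step
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = sum(w)
            resp.append([wi / s for wi in w])
        for k in range(2):                 # M-step
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-3)
    return mu, var, pi

# Synthetic "normal" vs "abnormal" sensitivity values as two clusters.
random.seed(2)
data = ([random.gauss(0.0, 0.5) for _ in range(100)]
        + [random.gauss(5.0, 0.5) for _ in range(100)])
mu, var, pi = em_gmm_1d(data)
```

    The study applies the same principle in 52 dimensions and then runs PCA within each recovered cluster to expose the defect patterns.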

  1. Fractal Analysis of Radiologists' Visual Scanning Patterns in Screening Mammography

    SciTech Connect

    Alamudun, Folami T; Yoon, Hong-Jun; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia

    2015-01-01

    Several investigators have examined radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of radiologists' visual scanning patterns when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied to the derived scanpaths using the box-counting method. For each case, the complexity of each radiologist's scanpath was estimated using fractal dimension. The association between gaze complexity, case pathology, breast density, and radiologist experience was evaluated using a 3-factor fixed-effects ANOVA. The ANOVA showed that case pathology, breast density, and experience level are all independent predictors of visual scanning pattern complexity. Visual scanning patterns differ significantly for benign and malignant cases relative to normal cases, as well as with changes in breast parenchymal density.
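
    The box-counting method used here estimates fractal dimension by counting how many grid boxes a point set occupies at several box sizes and fitting the slope of log(count) against log(1/size). A minimal sketch, with synthetic "scanpaths" in place of real gaze data:

```python
import math

def box_counting_dimension(points, scales=(1, 2, 4, 8, 16)):
    """Box-counting estimate of the fractal dimension of 2-D points:
    least-squares slope of log(occupied boxes) vs log(1/box size)."""
    xs, ys = [], []
    for s in scales:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    return slope

# A straight horizontal "scanpath" should have dimension close to 1;
# a scanpath densely covering the image plane should be closer to 2.
line = [(x, 32.0) for x in range(64)]
grid = [(x, y) for x in range(64) for y in range(64)]
dim_line = box_counting_dimension(line)
dim_grid = box_counting_dimension(grid)
```

    In this framing, a higher dimension corresponds to a more space-filling, complex scanpath, the quantity the ANOVA relates to pathology, density, and experience.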

  2. Visual Object Pattern Separation Deficits in Nondemented Older Adults

    ERIC Educational Resources Information Center

    Toner, Chelsea K.; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2009-01-01

    Young and nondemented older adults were tested on a continuous recognition memory task requiring visual pattern separation. During the task, some objects were repeated across trials and some objects, referred to as lures, were presented that were similar to previously presented objects. The lures resulted in increased interference and an increased…

  3. Discovering Visual Scanning Patterns in a Computerized Cancellation Test

    ERIC Educational Resources Information Center

    Huang, Ho-Chuan; Wang, Tsui-Ying

    2013-01-01

    The purpose of this study was to develop an attention sequential mining mechanism for investigating the sequential patterns of children's visual scanning process in a computerized cancellation test. Participants had to locate and cancel the target amongst other non-targets in a structured form, and a random form with Chinese stimuli. Twenty-three…

  4. Visual tracking method based on cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm based on the obligate brood parasitic behavior of some cuckoo species combined with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is comparatively studied, and the sensitivity and adjustment of the CS parameters in the tracking system are experimentally examined. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely particle filter, mean-shift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
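
    A minimal cuckoo-search sketch in the spirit of the algorithm described above (Lévy flights via Mantegna's method plus random abandonment of the worst nests), shown here minimizing a toy objective rather than a tracking cost; all parameter values are illustrative assumptions:

```python
# Cuckoo search: new candidate solutions are generated by Levy flights
# around the best nest; a fraction pa of the worst nests is abandoned.
from math import gamma, pi, sin

import numpy as np

def levy_step(rng, dim, beta=1.5):
    """Heavy-tailed step drawn with Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=2, n_nests=15, n_iter=200, pa=0.25, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fit = np.array([f(x) for x in nests])
    for _ in range(n_iter):
        best = nests[fit.argmin()].copy()
        for i in range(n_nests):
            cand = nests[i] + 0.01 * levy_step(rng, dim) * (nests[i] - best)
            fc = f(cand)
            j = rng.integers(n_nests)  # compare against a random nest
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        worst = fit.argsort()[-int(pa * n_nests):]  # abandon worst nests
        nests[worst] = rng.uniform(-5, 5, (len(worst), dim))
        fit[worst] = np.array([f(x) for x in nests[worst]])
    return nests[fit.argmin()], fit.min()

best_x, best_f = cuckoo_search(lambda x: float(np.sum(x ** 2)))
print(best_x.round(3), round(best_f, 6))
```

    In a tracker, f would instead score candidate target states (e.g., window positions) against an appearance model of the target.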

  5. The influence of cast shadows on visual search.

    PubMed

    Rensink, Ronald A; Cavanagh, Patrick

    2004-01-01

    We show that cast shadows can have a significant influence on the speed of visual search. In particular, we find that search based on the shape of a region is affected when the region is darker than the background and corresponds to a shadow formed by lighting from above. Results support the proposal that an early-level system rapidly identifies regions as shadows and then discounts them, making their shapes more difficult to access. Several constraints used by this system are mapped out, including constraints on the luminance and texture of the shadow region, and on the nature of the item casting the shadow. Among other things, this system is found to distinguish between line elements (items containing only edges) and surface elements (items containing visible surfaces), with only the latter deemed capable of casting a shadow. PMID:15693675

  6. "Hot" Facilitation of "Cool" Processing: Emotional Distraction Can Enhance Priming of Visual Search

    ERIC Educational Resources Information Center

    Kristjansson, Arni; Oladottir, Berglind; Most, Steven B.

    2013-01-01

    Emotional stimuli often capture attention and disrupt effortful cognitive processing. However, cognitive processes vary in the degree to which they require effort. We investigated the impact of emotional pictures on visual search and on automatic priming of search. Observers performed visual search after task-irrelevant neutral or emotionally…

  7. Response Selection in Visual Search: The Influence of Response Compatibility of Nontargets

    ERIC Educational Resources Information Center

    Starreveld, Peter A.; Theeuwes, Jan; Mortier, Karen

    2004-01-01

    The authors used visual search tasks in which components of the classic flanker task (B. A. Eriksen & C. W. Eriksen, 1974) were introduced. In several experiments the authors obtained evidence of parallel search for a target among distractor elements. Therefore, 2-stage models of visual search predict no effect of the identity of those…

  8. Early visual cortical responses produced by checkerboard pattern stimulation.

    PubMed

    Shigihara, Yoshihito; Hoshi, Hideyuki; Zeki, Semir

    2016-07-01

    Visual evoked potentials have traditionally been triggered with flash or reversing checkerboard stimuli and recorded with electroencephalographic techniques, largely but not exclusively in clinical or clinically related settings. They have been crucial in determining the healthy functioning or otherwise of the visual pathways up to and including the cerebral cortex. They have typically given early response latencies of 100 ms, the source of which has been attributed to V1, with the prestriate cortex being secondarily activated somewhat later. On the other hand, magnetoencephalographic studies using stimuli better tailored to the physiology of individual, specialized visual areas have given early latencies of <50 ms, with the sources localized in both striate (V1) and prestriate cortex. In this study, we used the reversing checkerboard pattern as a stimulus and recorded cortical visual evoked magnetic fields with magnetoencephalography, to establish whether very early responses can be estimated in both striate and prestriate cortex, since such a demonstration would enhance considerably the power of this classical approach in clinical investigations. Our results show that cortical responses evoked by checkerboard patterns can be detected before 50 ms post-stimulus onset and that their sources can be estimated in both striate and prestriate cortex, suggesting a strong parallel input from the sub-cortex to both striate and prestriate divisions of the visual cortex. PMID:27083528

  9. Retinal waves coordinate patterned activity throughout the developing visual system

    PubMed Central

    Ackman, James B.; Burbridge, Timothy J.; Crair, Michael C.

    2014-01-01

    The morphologic and functional development of the vertebrate nervous system is initially governed by genetic factors and subsequently refined by neuronal activity. However, fundamental features of the nervous system emerge before sensory experience is possible. Thus, activity-dependent development occurring before the onset of experience must be driven by spontaneous activity, but the origin and nature of activity in vivo remain largely untested. Here we use optical methods to demonstrate in live neonatal mice that waves of spontaneous retinal activity are present and propagate throughout the entire visual system before eye opening. This patterned activity encompassed the visual field, relied on cholinergic neurotransmission, preferentially initiated in the binocular retina, and exhibited spatiotemporal correlations between the two hemispheres. Retinal waves were the primary source of activity in the midbrain and primary visual cortex, but only modulated ongoing activity in secondary visual areas. Thus, spontaneous retinal activity is transmitted through the entire visual system and carries patterned information capable of guiding the activity-dependent development of complex intra- and interhemispheric circuits before the onset of vision. PMID:23060192

  10. Pattern-visual evoked potentials in thinner abusers.

    PubMed

    Poblano, A; Lope Huerta, M; Martínez, J M; Falcón, H D

    1996-01-01

    Organic solvents cause injury to the lipids of neuronal and glial membranes. A well-known characteristic of workers exposed to thinner is optic neuropathy. We looked for neurophysiologic signs of visual damage in patients identified as thinner abusers. Pattern-reversal visual evoked potentials were recorded in 34 thinner-abusing patients and 30 controls. P-100 wave latency was longer in abusers than in control subjects. The results indicate possible central alterations in thinner abusers despite the absence of clinical symptoms. PMID:8987190

  11. Perceptual animacy: visual search for chasing objects among distractors.

    PubMed

    Meyerhoff, Hauke S; Schwan, Stephan; Huff, Markus

    2014-04-01

    Anthropomorphic interactions such as chasing are an important cue to perceptual animacy. A recent study showed that the detection of interacting (e.g., chasing) stimuli follows the regularities of a serial visual search. In the present set of experiments, we explore several variants of the chasing detection paradigm in order to investigate how human observers recognize chasing objects among distractors although there are no distinctive visual features attached to individual objects. Our results indicate that even a spatially separated presentation of potentially chasing pairs of objects requires attention at least for object selection (Experiment 1). In the chasing detection framework, a chase among nonchases is easier to find than a nonchase among chases, suggesting that cues indicating the presence of a chase prevail during chasing detection (Experiment 2). Spatial proximity is one of these cues toward the presence of a chase because decreasing the distance between chasing objects leads to shorter detection latencies (Experiment 3). Finally, our results indicate that single objects provide the basis of chasing detection rather than pairs of objects. Participants would rather search for one object that is approaching any other object in the display than for a pair of objects involved in a chase (Experiments 4 and 5). Taken together, these results suggest that participants recognize a chase by detecting one object that is approaching any of the other objects in the display. PMID:24294872

  12. Enhanced Visual Search in Infancy Predicts Emerging Autism Symptoms.

    PubMed

    Gliga, Teodora; Bedford, Rachael; Charman, Tony; Johnson, Mark H

    2015-06-29

    In addition to core symptoms, i.e., social interaction and communication difficulties and restricted and repetitive behaviors, autism is also characterized by aspects of superior perception. One well-replicated finding is that of superior performance in visual search tasks, in which participants have to indicate the presence of an odd-one-out element among a number of foils. Whether these aspects of superior perception contribute to the emergence of core autism symptoms remains debated. Perceptual and social interaction atypicalities could reflect co-expressed but biologically independent pathologies, as suggested by a "fractionable" phenotype model of autism. A developmental test of this hypothesis is now made possible by longitudinal cohorts of infants at high risk, such as those of younger siblings of children with autism spectrum disorder (ASD). Around 20% of younger siblings are diagnosed with autism themselves, and up to another 30% manifest elevated levels of autism symptoms. We used eye tracking to measure spontaneous orienting to letter targets (O, S, V, and +) presented among distractors (the letter X; Figure 1). At 9 and 15 months, emerging autism symptoms were assessed using the Autism Observation Scale for Infants (AOSI), and at 2 years of age, they were assessed using the Autism Diagnostic Observation Schedule (ADOS). Enhanced visual search performance at 9 months predicted a higher level of autism symptoms at 15 months and at 2 years. Infant perceptual atypicalities are thus intrinsically linked to the emerging autism phenotype. PMID:26073135

  13. Electroencephalogram assessment of mental fatigue in visual search.

    PubMed

    Fan, Xiaoli; Zhou, Qianxiang; Liu, Zhongqi; Xie, Fang

    2015-01-01

    Mental fatigue is considered a contributing factor in numerous road accidents and various medical conditions, and efficiency and performance can be impaired during fatigue; determining how to evaluate mental fatigue is therefore very important. In the present study, ten subjects performed a long-term visual search task while an electroencephalogram was recorded, and self-assessment and reaction time (RT) were combined to verify that mental fatigue had been induced and were also used as confirmatory tests for the proposed measures. The changes in relative energy in four wavebands (δ, θ, α, and β), four ratio formulas [(α+θ)/β, α/β, (α+θ)/(α+β), and θ/β], and Shannon's entropy (SE) were compared and analyzed between the beginning and end of the task. The results showed a significant increase in alpha activity in the frontal, central, posterior temporal, parietal, and occipital lobes, and a dip in beta activity in the pre-frontal, inferior frontal, posterior temporal, and occipital lobes. The ratio formulas clearly increased in all of these brain regions except the temporal region, where only α/β changed noticeably after the 60-min visual search task. SE significantly increased in the posterior temporal, parietal, and occipital lobes. These results demonstrate potential indicators for mental fatigue detection and evaluation, which can be applied in the future development of countermeasures to fatigue. PMID:26405908
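
    The waveband ratio indices above can be computed from a power spectral density; a minimal single-channel sketch with a synthetic, alpha-dominant signal (the sampling rate and band edges are common conventions, not necessarily those of the study):

```python
# Band powers from Welch's PSD, then the four fatigue ratio indices.
import numpy as np
from scipy.signal import welch

fs = 256  # Hz (hypothetical sampling rate)
rng = np.random.default_rng(1)
t = np.arange(0, 30, 1 / fs)
# Toy EEG: a 10 Hz (alpha) oscillation plus broadband noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

delta, theta = band_power(1, 4), band_power(4, 8)
alpha, beta = band_power(8, 13), band_power(13, 30)

ratios = {
    "(alpha+theta)/beta": (alpha + theta) / beta,
    "alpha/beta": alpha / beta,
    "(alpha+theta)/(alpha+beta)": (alpha + theta) / (alpha + beta),
    "theta/beta": theta / beta,
}
print({k: round(float(v), 2) for k, v in ratios.items()})
```

    With the alpha-dominant toy signal, the alpha-based ratios come out well above 1, which is the direction of change the study associates with fatigue.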

  14. Searching for the right word: Hybrid visual and memory search for words

    PubMed Central

    Boettcher, Sage E. P.; Wolfe, Jeremy M.

    2016-01-01

    In “Hybrid Search” (Wolfe, 2012) observers search through visual space for any of multiple targets held in memory. With photorealistic objects as stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with memory set size even when over 100 items are committed to memory. It is well established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva, 2008). Would hybrid search performance be similar if the targets were words or phrases, where word order can be important and where the processes of memorization might be different? In Experiment One, observers memorized 2, 4, 8, or 16 words in 4 different blocks. After passing a memory test, confirming memorization of the list, observers searched for these words in visual displays containing 2 to 16 words. Replicating Wolfe (2012), RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment One were random. In Experiment Two, words were drawn from phrases that observers reported knowing by heart (e.g., “London Bridge is falling down”). Observers were asked to provide four phrases ranging in length from 2 words to a phrase of no less than 20 words (range 21–86). Words longer than 2 characters from the phrase constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect serial position effects, perhaps reducing RTs for the first (primacy) and/or last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock, 1962). Surprisingly, we found no reliable effects of word order. Thus, in “London Bridge is falling down”, “London” and “down” are found no faster than “falling”. PMID:25788035
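
    The RT pattern reported above corresponds to a model that is linear in the visual set size V and logarithmic in the memory set size M, RT = b0 + b1·V + b2·log2(M); a minimal sketch fitting it by least squares to synthetic data (all coefficient values are hypothetical):

```python
# Recover linear-in-V, logarithmic-in-M reaction-time coefficients.
import numpy as np

rng = np.random.default_rng(2)
V = np.repeat([2, 4, 8, 16], 4).astype(float)  # visual set sizes
M = np.tile([2, 4, 8, 16], 4).astype(float)    # memorized list lengths
rt = 500 + 30 * V + 120 * np.log2(M) + rng.normal(0, 10, V.size)

X = np.c_[np.ones_like(V), V, np.log2(M)]      # design: [1, V, log2(M)]
coef, *_ = np.linalg.lstsq(X, rt, rcond=None)
print(coef.round(1))  # approximately recovers [500, 30, 120]
```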

  15. Object-based auditory facilitation of visual search for pictures and words with frequent and rare targets.

    PubMed

    Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru

    2011-06-01

    Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., "meow"), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., "meow") of the named object. If auditory-visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., "meow") should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential association predictions. We also recently showed that the underlying object-based auditory-visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory-visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory-visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. Thus, object-based auditory-visual…

  16. Visual-search observers for SPECT simulations with clinical backgrounds

    NASA Astrophysics Data System (ADS)

    Gifford, Howard C.

    2016-03-01

    The purpose of this work was to test the ability of visual-search (VS) model observers to predict the lesion-detection performance of human observers with hybrid SPECT images. These images consist of clinical backgrounds with simulated abnormalities. The application of existing scanning model observers to hybrid images is complicated by the need for extensive statistical information, whereas VS models based on separate search and analysis processes may operate with reduced knowledge. A localization ROC (LROC) study involved the detection and localization of solitary pulmonary nodules in Tc-99m lung images. The study was aimed at optimizing the number of iterations and the postfiltering of four rescaled block-iterative reconstruction strategies. These strategies implemented different combinations of attenuation correction, scatter correction, and detector resolution correction. For a VS observer in this study, the search and analysis processes were guided by a single set of base morphological features derived from knowledge of the lesion profile. One base set used difference-of-Gaussian channels while a second base set implemented spatial derivatives in combination with the Burgess eye filter. A feature-adaptive VS observer selected features of interest for a given image set on the basis of training-set performance. A comparison of the feature-adaptive observer results against previously acquired human-observer data is presented.
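
    A difference-of-Gaussian channel set of the kind mentioned above can be sketched as follows; the scales, toy patch, and inner-product readout are illustrative assumptions, not the study's actual observer model:

```python
# Each DoG channel is the difference of two Gaussian blurs at successive
# scales; channel responses are inner products with the image patch.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_features(patch, sigmas=(1, 2, 4, 8)):
    feats = []
    for s1, s2 in zip(sigmas[:-1], sigmas[1:]):
        dog = gaussian_filter(patch, s1) - gaussian_filter(patch, s2)
        feats.append(float((dog * patch).sum()))
    return feats

patch = np.zeros((32, 32))
patch[12:20, 12:20] = 1.0  # toy "lesion" blob
print([round(f, 3) for f in dog_features(patch)])
```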

  17. Patterns in the sky: Natural visualization of aircraft flow fields

    NASA Technical Reports Server (NTRS)

    Campbell, James F.; Chambers, Joseph R.

    1994-01-01

    The objective of the current publication is to present the collection of flight photographs, to illustrate the types of flow patterns that were visualized, and to provide qualitative correlations with computational and wind tunnel results. Initially, in section 2, the condensation process is discussed, including a review of relative humidity, vapor pressure, and the factors that determine the presence of visible condensate. Next, outputs from computer code calculations are postprocessed using water-vapor relationships to determine whether computed values of relative humidity in the local flow field correlate with the qualitative features of the in-flight condensation patterns. The photographs are then presented in section 3 by flow type and subsequently in section 4 by aircraft type to demonstrate the variety of condensed flow fields visualized for a wide range of aircraft and flight maneuvers.

  18. Memory under pressure: secondary-task effects on contextual cueing of visual search.

    PubMed

    Annac, Efsun; Manginelli, Angela A; Pollmann, Stefan; Shi, Zhuanghua; Müller, Hermann J; Geyer, Thomas

    2013-01-01

    Repeated display configurations improve visual search. Recently, the question has arisen whether this contextual cueing effect (Chun & Jiang, 1998) is itself mediated by attention, both in terms of selectivity and processing resources deployed. While it is accepted that selective attention modulates contextual cueing (Jiang & Leung, 2005), there is an ongoing debate whether the cueing effect is affected by a secondary working memory (WM) task, specifically at which stage WM influences the cueing effect: the acquisition of configural associations (e.g., Travis, Mattingley, & Dux, 2013) versus the expression of learned associations (e.g., Manginelli, Langer, Klose, & Pollmann, 2013). The present study re-investigated this issue. Observers performed a visual search in combination with a spatial WM task. The latter was applied on either early or late search trials, so as to examine whether WM load hampers the acquisition of or retrieval from contextual memory. Additionally, the WM and search tasks were performed either temporally in parallel or in succession, so as to permit the effects of spatial WM load to be dissociated from those of executive load. The secondary WM task was found to affect cueing in late, but not early, experimental trials, though only when the search and WM tasks were performed in parallel. This pattern suggests that contextual cueing involves a spatial WM resource, with spatial WM providing a workspace linking the current search array with configural long-term memory; as a result, occupying this workspace with a secondary WM task hampers the expression of learned configural associations. PMID:24190911

  19. Relationships among balance, visual search, and lacrosse-shot accuracy.

    PubMed

    Marsh, Darrin W; Richard, Leon A; Verre, Arlene B; Myers, Jay

    2010-06-01

    The purpose of this study was to examine variables that may contribute to shot accuracy in women's college lacrosse. A convenience sample of 15 healthy women's National Collegiate Athletic Association Division III college lacrosse players aged 18-23 years (mean ± SD, 20.27 ± 1.67) participated in the study. Four experimental variables were examined: balance, visual search, hand-grip strength, and shoulder joint position sense. Balance was measured by the Biodex Stability System (BSS), and visual search was measured by the Trail-Making Test Part A (TMTA) and Trail-Making Test Part B (TMTB). Hand-grip strength was measured by a standard hand dynamometer, and shoulder joint position sense was measured using a modified inclinometer. All measures were taken in an indoor setting. These experimental variables were then compared with lacrosse-shot error, which was measured indoors using a high-speed video camera recorder and a specialized L-shaped apparatus. A Stalker radar gun measured lacrosse-shot velocity. The mean lacrosse-shot error was 15.17 cm, with a mean lacrosse-shot velocity of 17.14 m·s⁻¹ (38.35 mph). Lower scores on the BSS level 8 eyes open (BSS L8 E/O) test and TMTB were positively related to less lacrosse-shot error (r=0.760, p=0.011) and (r=0.519, p=0.048), respectively. Relations were not significant between lacrosse-shot error and grip strength (r=0.191, p=0.496), lacrosse-shot error and BSS level 8 eyes closed (BSS L8 E/C) (r=0.501, p=0.102), lacrosse-shot error and BSS level 4 eyes open (BSS L4 E/O) (r=0.313, p=0.378), lacrosse-shot error and BSS level 4 eyes closed (BSS L4 E/C) (r=-0.029, p=0.936), lacrosse-shot error and shoulder joint position sense (r=-0.509, p=0.055), and lacrosse-shot error and TMTA (r=0.375, p=0.168). The results reveal that greater levels of shot accuracy may be related to greater levels of visual search and balance ability in women's college lacrosse athletes. PMID:20508452

  20. Characterization of Visual Scanning Patterns in Air Traffic Control

    PubMed Central

    McClung, Sarah N.; Kang, Ziho

    2016-01-01

    Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process. PMID:27239190
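
    One elementary form of such scanpath simplification, collapsing consecutive fixations on the same area of interest (AOI), can be sketched as follows; the AOI labels and the comparison count are illustrative assumptions, not the authors' exact filtering procedure:

```python
# Collapse runs of fixations on the same AOI to expose transitions.
from itertools import groupby

fixations = ["AC1", "AC1", "AC2", "AC2", "AC2", "AC1", "AC3", "AC3"]

def simplify(scanpath):
    """Keep one entry per run of consecutive identical AOIs."""
    return [aoi for aoi, _ in groupby(scanpath)]

def transitions(scanpath):
    """Number of moves between distinct AOIs (pairwise comparisons)."""
    return len(simplify(scanpath)) - 1

print(simplify(fixations))     # -> ['AC1', 'AC2', 'AC1', 'AC3']
print(transitions(fixations))  # -> 3
```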

  2. Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2012-01-01

    Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…

  3. Association and dissociation between detection and discrimination of objects of expertise: Evidence from visual search.

    PubMed

    Golan, Tal; Bentin, Shlomo; DeGutis, Joseph M; Robertson, Lynn C; Harel, Assaf

    2014-02-01

    Expertise in face recognition is characterized by high proficiency in distinguishing between individual faces. However, faces also enjoy an advantage at the early stage of basic-level detection, as demonstrated by efficient visual search for faces among nonface objects. In the present study, we asked (1) whether the face advantage in detection is a unique signature of face expertise, or whether it generalizes to other objects of expertise, and (2) whether expertise in face detection is intrinsically linked to expertise in face individuation. We compared how groups with varying degrees of object and face expertise (typical adults, developmental prosopagnosics [DP], and car experts) search for objects within and outside their domains of expertise (faces, cars, airplanes, and butterflies) among a variable set of object distractors. Across all three groups, search efficiency (indexed by reaction time slopes) was higher for faces and airplanes than for cars and butterflies. Notably, the search slope for car targets was considerably shallower in the car experts than in nonexperts. Although the mean face slope was slightly steeper among the DPs than in the other two groups, most of the DPs' search slopes were well within the normative range. This pattern of results suggests that expertise in object detection is indeed associated with expertise at the subordinate level, that it is not specific to faces, and that the two types of expertise are distinct facilities. We discuss the potential role of experience in bridging between low-level discriminative features and high-level naturalistic categories. PMID:24338355

  4. Pupil diameter reflects uncertainty in attentional selection during visual search

    PubMed Central

    Geng, Joy J.; Blumenfeld, Zachary; Tyson, Terence L.; Minzenberg, Michael J.

    2015-01-01

    Pupil diameter has long been used as a metric of cognitive processing. However, recent advances suggest that the cognitive sources of change in pupil size may reflect LC-NE function and the calculation of unexpected uncertainty in decision processes (Aston-Jones and Cohen, 2005; Yu and Dayan, 2005). In the current experiments, we explored the role of uncertainty in attentional selection on task-evoked changes in pupil diameter during visual search. We found that task-evoked changes in pupil diameter were related to uncertainty during attentional selection as measured by reaction time (RT) and performance accuracy (Experiments 1-2). Control analyses demonstrated that the results are unlikely to be due to error monitoring or response uncertainty. Our results suggest that pupil diameter can be used as an implicit metric of uncertainty in ongoing attentional selection requiring effortful control processes. PMID:26300759

  5. Enhanced Visual Search in Infancy Predicts Emerging Autism Symptoms

    PubMed Central

    Gliga, Teodora; Bedford, Rachael; Charman, Tony; Johnson, Mark H.; Baron-Cohen, Simon; Bolton, Patrick; Cheung, Celeste; Davies, Kim; Liew, Michelle; Fernandes, Janice; Gammer, Issy; Maris, Helen; Salomone, Erica; Pasco, Greg; Pickles, Andrew; Ribeiro, Helena; Tucker, Leslie

    2015-01-01

    In addition to core symptoms, i.e., social interaction and communication difficulties and restricted and repetitive behaviors, autism is also characterized by aspects of superior perception [1]. One well-replicated finding is that of superior performance in visual search tasks, in which participants have to indicate the presence of an odd-one-out element among a number of foils [2–5]. Whether these aspects of superior perception contribute to the emergence of core autism symptoms remains debated [4, 6]. Perceptual and social interaction atypicalities could reflect co-expressed but biologically independent pathologies, as suggested by a “fractionable” phenotype model of autism [7]. A developmental test of this hypothesis is now made possible by longitudinal cohorts of infants at high risk, such as those of younger siblings of children with autism spectrum disorder (ASD). Around 20% of younger siblings are diagnosed with autism themselves [8], and up to another 30% manifest elevated levels of autism symptoms [9]. We used eye tracking to measure spontaneous orienting to letter targets (O, S, V, and +) presented among distractors (the letter X; Figure 1). At 9 and 15 months, emerging autism symptoms were assessed using the Autism Observation Scale for Infants (AOSI; [10]), and at 2 years of age, they were assessed using the Autism Diagnostic Observation Schedule (ADOS; [11]). Enhanced visual search performance at 9 months predicted a higher level of autism symptoms at 15 months and at 2 years. Infant perceptual atypicalities are thus intrinsically linked to the emerging autism phenotype. PMID:26073135

  6. Visual Interactions Conform to Pattern Decorrelation in Multiple Cortical Areas

    PubMed Central

    Sharifian, Fariba; Nurminen, Lauri; Vanni, Simo

    2013-01-01

    Neural responses to visual stimuli are strongest in the classical receptive field, but they are also modulated by stimuli in a much wider region. In the primary visual cortex, physiological data and models suggest that such contextual modulation is mediated by recurrent interactions between cortical areas. Outside the primary visual cortex, imaging data have shown qualitatively similar interactions. However, whether the mechanisms underlying these effects are similar in different areas has remained unclear. Here, we found that the blood oxygenation level dependent (BOLD) signal spreads over considerable cortical distances in the primary visual cortex, further than the classical receptive field. This indicates that the synaptic activity induced by a given stimulus occurs in a surprisingly extensive network. Correspondingly, we found suppressive and facilitative interactions far from the maximum retinotopic response. Next, we characterized the relationship between contextual modulation and correlation between two spatial activation patterns. Regardless of the functional area or retinotopic eccentricity, higher correlation between the center and surround response patterns was associated with stronger suppressive interaction. In individual voxels, suppressive interaction was predominant when the center and surround stimuli produced BOLD signals with the same sign. Facilitative interaction dominated in the voxels with opposite BOLD signal signs. Our data were consistent with a recently published cortical decorrelation model and were validated against alternative models, separately in different eccentricities and functional areas. Our study provides evidence that spatial interactions among neural populations involve decorrelation of macroscopic neural activation patterns, and suggests that the basic design of the cerebral cortex houses a robust decorrelation mechanism for afferent synaptic input. PMID:23874491
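
    The voxel-level analysis in this abstract reduces to two simple computations: a correlation between the center and surround response patterns, and a per-voxel sign rule. A minimal Python sketch with made-up response values (the study itself used BOLD data; these numbers are illustrative only):

```python
# Sketch of the center-surround pattern analysis described above
# (hypothetical data, plain Python lists in place of BOLD responses).

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length response patterns."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def predicted_interaction(center, surround):
    """Per-voxel sign rule from the abstract: same-sign center/surround
    responses predict suppression, opposite signs predict facilitation."""
    return ["suppressive" if c * s > 0 else "facilitative"
            for c, s in zip(center, surround)]

center = [0.8, 0.5, -0.2, 0.9]
surround = [0.6, 0.4, 0.3, 0.7]
print(pearson_r(center, surround))          # high pattern correlation
print(predicted_interaction(center, surround))
```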

  7. Spatial and temporal dynamics of visual search tasks distinguish subtypes of unilateral spatial neglect: Comparison of two cases with viewer-centered and stimulus-centered neglect.

    PubMed

    Mizuno, Katsuhiro; Kato, Kenji; Tsuji, Tetsuya; Shindo, Keiichiro; Kobayashi, Yukiko; Liu, Meigen

    2016-08-01

    We developed a computerised test to evaluate unilateral spatial neglect (USN) using a touchscreen display, and estimated the spatial and temporal patterns of visual search in USN patients. The results of a viewer-centered USN patient and a stimulus-centered USN patient were compared. Two right-brain-damaged patients with USN, a patient without USN, and 16 healthy subjects performed a simple cancellation test, the circle test, a visuomotor search test, and a visual search test. According to the results of the circle test, one USN patient had stimulus-centered neglect and one had viewer-centered neglect. The spatial and temporal patterns of these two USN patients were compared. The spatial and temporal patterns of cancellation were different in the stimulus-centered USN patient and the viewer-centered USN patient. The viewer-centered USN patient completed the simple cancellation task, but paused when transferring from the right side to the left side of the display. Unexpectedly, this patient did not exhibit rightward attention bias on the visuomotor and visual search tests, but the stimulus-centered USN patient did. The computer-based assessment system provided information on the dynamic visual search strategy of patients with USN. The spatial and temporal patterns of cancellation and visual search were different across the two patients with different subtypes of neglect. PMID:26059555

  8. Role of computer-assisted visual search in mammographic interpretation

    NASA Astrophysics Data System (ADS)

    Nodine, Calvin F.; Kundel, Harold L.; Mello-Thoms, Claudia; Weinstein, Susan P.

    2001-06-01

    We used eye-position data to develop Computer-Assisted Visual Search (CAVS) as an aid to mammographic interpretation. CAVS feeds back regions of interest that receive prolonged visual dwell (greater than or equal to 1000 ms) by highlighting them on the mammogram. These regions are then reevaluated for possible missed breast cancers. Six radiology residents and fellows interpreted a test set of 40 mammograms twice, once with CAVS feedback (FB), and once without CAVS FB in a crossover, repeated-measures design. Eye position was monitored. LROC performance (area) was compared with and without CAVS FB. Detection and localization of malignant lesions improved 12% with CAVS FB. This was not significant. The test set contained subtle malignant lesions. 65% (176/272) of true lesions were fixated. Of those fixated, 49% (87/176) received prolonged attention resulting in CAVS FB, and 54% (47/87) of FBs resulted in TPs. Test-set difficulty and the lack of reading experience of the readers may have contributed to the relatively low overall performance, and may have also limited the effectiveness of CAVS FB which could only play a role in localizing potential lesions if the reader fixated and dwelled on them.
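
    The CAVS feedback rule described above (highlight regions receiving at least 1000 ms of cumulative dwell) can be sketched as follows. The fixation record format and the 50-pixel grouping radius are assumptions made for illustration, not the paper's calibration:

```python
# Minimal sketch of the CAVS dwell-feedback rule: aggregate dwell from a
# fixation stream and flag regions whose cumulative dwell reaches 1000 ms.

DWELL_THRESHOLD_MS = 1000
REGION_RADIUS_PX = 50  # assumed radius for grouping nearby fixations

def feedback_regions(fixations):
    """fixations: list of (x, y, duration_ms). Greedily group nearby
    fixations and return centers whose summed dwell meets the threshold."""
    regions = []  # each entry: [x, y, total_dwell_ms]
    for x, y, dur in fixations:
        for r in regions:
            if ((x - r[0]) ** 2 + (y - r[1]) ** 2) ** 0.5 <= REGION_RADIUS_PX:
                r[2] += dur  # accumulate dwell in an existing region
                break
        else:
            regions.append([x, y, dur])
    return [(r[0], r[1]) for r in regions if r[2] >= DWELL_THRESHOLD_MS]

fixations = [(100, 120, 400), (110, 125, 700), (400, 300, 300)]
print(feedback_regions(fixations))  # first cluster dwells 1100 ms in total
```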

  9. Expectations developed over multiple timescales facilitate visual search performance

    PubMed Central

    Gekas, Nikos; Seitz, Aaron R.; Seriès, Peggy

    2015-01-01

    Our perception of the world is strongly influenced by our expectations, and a question of key importance is how the visual system develops and updates its expectations through interaction with the environment. We used a visual search task to investigate how expectations of different timescales (from the last few trials to hours to long-term statistics of natural scenes) interact to alter perception. We presented human observers with low-contrast white dots at 12 possible locations equally spaced on a circle, and we asked them to simultaneously identify the presence and location of the dots while manipulating their expectations by presenting stimuli at some locations more frequently than others. Our findings suggest that there are strong acuity differences between absolute target locations (e.g., horizontal vs. vertical) and preexisting long-term biases influencing observers' detection and localization performance, respectively. On top of these, subjects quickly learned about the stimulus distribution, which improved their detection performance but caused increased false alarms at the most frequently presented stimulus locations. Recent exposure to a stimulus resulted in significantly improved detection performance and significantly more false alarms but only at locations at which it was more probable that a stimulus would be presented. Our results can be modeled and understood within a Bayesian framework in terms of a near-optimal integration of sensory evidence with rapidly learned statistical priors, which are skewed toward the very recent history of trials and may help understanding the time scale of developing expectations at the neural level. PMID:26200891
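
    The Bayesian account sketched in the abstract (sensory evidence integrated with a recency-weighted statistical prior) might look like this in outline. The 12-location layout matches the experiment, but the decay constant and all numeric values are illustrative assumptions:

```python
# Hedged sketch: detection combines a sensory likelihood with a prior
# learned from the stimulus distribution, skewed toward recent trials
# via exponential weighting.

N_LOCATIONS = 12
DECAY = 0.9  # assumed weight retained per trial of age

def recency_prior(history):
    """history: list of past stimulus locations, oldest first."""
    counts = [1.0] * N_LOCATIONS  # uniform pseudo-counts
    for age, loc in enumerate(reversed(history)):
        counts[loc] += DECAY ** age  # recent trials count more
    total = sum(counts)
    return [c / total for c in counts]

def posterior(likelihood, prior):
    post = [l * p for l, p in zip(likelihood, prior)]
    z = sum(post)
    return [p / z for p in post]

history = [3, 3, 3, 7]             # location 3 frequent, 7 most recent
prior = recency_prior(history)
likelihood = [0.05] * N_LOCATIONS
likelihood[3] = 0.2                # weak sensory evidence at location 3
post = posterior(likelihood, prior)
print(max(range(N_LOCATIONS), key=lambda i: post[i]))  # biased toward 3
```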

  10. GEON Developments for Searching, Accessing, and Visualizing Distributed Data

    NASA Astrophysics Data System (ADS)

    Meertens, C.; Seber, D.; Baru, C.; Wright, M.

    2005-12-01

    The NSF-funded GEON (Geosciences Network) Information Technology Research project is developing data sharing frameworks, a registry for distributed databases, concept-based search mechanisms, advanced visualization software, and grid-computing resources for earth science and education applications. The goal of this project is to enable new interdisciplinary research in the geosciences, while extending the access to data and complex modeling tools from the hands of a few researchers to a much broader set of scientific and educational users. To facilitate this, the GEON team of IT scientists, geoscientists, and educators and their collaborators are creating a capable Cyberinfrastructure that is based on grid/web services operating in a distributed environment. We are using a best practices approach that is designed to provide useful and usable capabilities and tools. With the realization of new large scale projects such as EarthScope that involve the collection, analysis, and modeling of vast quantities of diverse data, it is increasingly important to be able to effectively handle, model, and integrate a wide range of multi-dimensional, multi-parameter, and time dependent data in a timely fashion. GEON has been developing a process where the user can discover, access, retrieve and visualize data that is hosted either at GEON or at distributed servers. Whenever possible, GEON is using established protocols and formats for data and metadata exchange that are based on community efforts such as OPeNDAP, the Open GIS Consortium, Grid Computing, and digital libraries. This approach is essential to help overcome the challenges of dealing with heterogeneous distributed data and increases the possibility of data interoperability. We give an overview of resources that are now available to access and visualize a variety of geological and geophysical data, derived products and models including GPS data, GPS-derived velocity vectors and strain rates, earthquakes, three

  11. CiteRivers: Visual Analytics of Citation Patterns.

    PubMed

    Heimerl, Florian; Han, Qi; Koch, Steffen; Ertl, Thomas

    2016-01-01

    The exploration and analysis of scientific literature collections is an important task for effective knowledge management. Past interest in such document sets has spurred the development of numerous visualization approaches for their interactive analysis. They either focus on the textual content of publications, or on document metadata including authors and citations. Previously presented approaches for citation analysis aim primarily at the visualization of the structure of citation networks and their exploration. We extend the state-of-the-art by presenting an approach for the interactive visual analysis of the contents of scientific documents, and combine it with a new and flexible technique to analyze their citations. This technique facilitates user-steered aggregation of citations which are linked to the content of the citing publications using a highly interactive visualization approach. Through enriching the approach with additional interactive views of other important aspects of the data, we support the exploration of the dataset over time and enable users to analyze citation patterns, spot trends, and track long-term developments. We demonstrate the strengths of our approach through a use case and discuss it based on expert user feedback. PMID:26529699

  12. Overlapping multivoxel patterns for two levels of visual expectation

    PubMed Central

    de Gardelle, Vincent; Stokes, Mark; Johnen, Vanessa M.; Wyart, Valentin; Summerfield, Christopher

    2013-01-01

    According to predictive accounts of perception, visual cortical regions encode sensory expectations about the external world, and the violation of those expectations by inputs (surprise). Here, using multi-voxel pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data, we asked whether expectations and surprise activate the same pattern of voxels, in face-sensitive regions of the extra-striate visual cortex (the fusiform face area or FFA). Participants viewed pairs of repeating or alternating faces, with high or low probability of repetitions. As in previous studies, we found that repetition suppression (the attenuated BOLD response to repeated stimuli) in the FFA was more pronounced for probable repetitions, consistent with it reflecting reduced surprise to anticipated inputs. Secondly, we observed that repetition suppression and repetition enhancement responses were both consistent across scanner runs, suggesting that both have functional significance, with repetition enhancement possibly indicating the build up of sensory expectation. Critically, we also report that multi-voxels patterns associated with probability and repetition effects were significantly correlated within the left FFA. We argue that repetition enhancement responses and repetition probability effects can be seen as two types of expectation signals, occurring simultaneously, although at different processing levels (lower vs. higher), and different time scales (immediate vs. long term). PMID:23630488

  13. Preemption Effects in Visual Search: Evidence for Low-Level Grouping.

    ERIC Educational Resources Information Center

    Rensink, Ronald A.; Enns, James T.

    1995-01-01

    Eight experiments, each with 10 observers in each condition, show that the visual search for Mueller-Lyer stimuli is based on complete configurations rather than component segments with preemption by low-level groups. Results support the view that rapid visual search can only access higher level, more ecologically relevant structures. (SLD)

  14. Toddlers with Autism Spectrum Disorder Are More Successful at Visual Search than Typically Developing Toddlers

    ERIC Educational Resources Information Center

    Kaldy, Zsuzsa; Kraper, Catherine; Carter, Alice S.; Blaser, Erik

    2011-01-01

    Plaisted, O'Riordan and colleagues (Plaisted, O'Riordan & Baron-Cohen, 1998; O'Riordan, 2004) showed that school-age children and adults with Autism Spectrum Disorder (ASD) are faster at finding targets in certain types of visual search tasks than typical controls. Currently though, there is very little known about the visual search skills of very…

  15. Is There a Limit to the Superiority of Individuals with ASD in Visual Search?

    ERIC Educational Resources Information Center

    Hessels, Roy S.; Hooge, Ignace T. C.; Snijders, Tineke M.; Kemner, Chantal

    2014-01-01

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an…

  16. Visual Search Revived: The Slopes Are Not That Slippery: A Reply to Kristjansson (2015)

    PubMed Central

    2016-01-01

    Kristjansson (2015) suggests that standard research methods in the study of visual search should be “reconsidered.” He reiterates a useful warning against treating reaction time × set size functions as simple metrics that can be used to label search tasks as “serial” or “parallel.” However, I argue that he goes too far with a broad attack on the use of slopes in the study of visual search. Used wisely, slopes do provide us with insight into the mechanisms of visual search. PMID:27433330
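
    The slopes defended here are simply the coefficients of a reaction time × set size regression. A minimal least-squares sketch with made-up response times in milliseconds:

```python
# Estimate a visual search slope (ms per item) from RT x set size data
# via ordinary least squares. Data below are invented for illustration.

def search_slope(set_sizes, rts):
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    slope = num / den                 # ms of RT added per display item
    intercept = my - slope * mx       # baseline (set-size-independent) RT
    return slope, intercept

set_sizes = [4, 8, 12, 16]
rts = [520, 610, 690, 780]            # roughly linear growth
slope, intercept = search_slope(set_sizes, rts)
print(round(slope, 1), round(intercept, 1))  # -> 21.5 435.0
```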

  17. The effect of search condition and advertising type on visual attention to Internet advertising.

    PubMed

    Kim, Gho; Lee, Jang-Han

    2011-05-01

    This research was conducted to examine the level of consumers' visual attention to Internet advertising. It was predicted that consumers' search type would influence visual attention to advertising. Specifically, it was predicted that more attention to advertising would be attracted in the exploratory search condition than in the goal-directed search condition. It was also predicted that there would be a difference in visual attention depending on the advertisement type (advertising type: text vs. pictorial advertising). An eye tracker was used for measurement. Results revealed that search condition and advertising type influenced advertising effectiveness. PMID:20973730

  18. Roughness determination by direct visual observation of the speckle pattern

    NASA Astrophysics Data System (ADS)

    Rebollo, M. A.; Landau, M. R.; Hogert, E. N.; Gaggioli, N. G.; Muramatsu, M.

    1995-12-01

    There are mechanical and optical methods of measuring the roughness of surfaces. Mechanical methods are of a destructive type, while optical methods, although they are non-destructive, involve relatively complex systems and calculations. In this work a simple method is introduced, which allows one—through the direct observation of the speckle pattern—to make a visual correlation, comparing the first pattern with others obtained when the beam incidence angle varies. With this method it is possible to obtain results with acceptable accuracy for many industrial uses.

  19. The role of object categories in hybrid visual and memory search

    PubMed Central

    Cunningham, Corbin A.; Wolfe, Jeremy M.

    2014-01-01

    In hybrid search, observers (Os) search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RT) in such tasks increased linearly with increases in the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g. this apple in this pose). Typical real world tasks involve more broadly defined sets of stimuli (e.g. any “apple” or, perhaps, “fruit”). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, Os searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli is drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PMID:24661054
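
    The RT pattern reported by Wolfe (2012) and extended here (linear in visual set size, logarithmic in memory set size) can be written as a simple additive model. The coefficients below are illustrative assumptions, not values fitted in the paper:

```python
# Sketch of the hybrid-search RT regularity: linear cost per display item,
# logarithmic cost in the number of memorized targets.

import math

def predicted_rt(visual_n, memory_n, base=400.0, per_item=40.0, per_log=80.0):
    """Illustrative RT (ms) for a display of visual_n items and a
    memory set of memory_n targets."""
    return base + per_item * visual_n + per_log * math.log2(memory_n)

# Doubling the memory set adds only a constant increment (one log2 step)...
print(predicted_rt(8, 16) - predicted_rt(8, 8))   # -> 80.0
# ...while doubling the visual set size adds a cost proportional to it.
print(predicted_rt(16, 8) - predicted_rt(8, 8))   # -> 320.0
```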

  20. Visual-auditory integration for visual search: a behavioral study in barn owls.

    PubMed

    Hazan, Yael; Kra, Yonatan; Yarin, Inna; Wagner, Hermann; Gutfreund, Yoram

    2015-01-01

    Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine if the integration of visual and auditory signals contributes to the process of guiding attention toward salient stimuli. We attached miniature wireless video cameras on barn owls' heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam's video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely toward the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search target. PMID

  1. Visualizing Neuronal Network Connectivity with Connectivity Pattern Tables

    PubMed Central

    Nordlie, Eilen; Plesser, Hans Ekkehard

    2009-01-01

    Complex ideas are best conveyed through well-designed illustrations. Up to now, computational neuroscientists have mostly relied on box-and-arrow diagrams of even complex neuronal networks, often using ad hoc notations with conflicting use of symbols from paper to paper. This significantly impedes the communication of ideas in neuronal network modeling. We present here Connectivity Pattern Tables (CPTs) as a clutter-free visualization of connectivity in large neuronal networks containing two-dimensional populations of neurons. CPTs can be generated automatically from the same script code used to create the actual network in the NEST simulator. Through aggregation, CPTs can be viewed at different levels, providing either full detail or summary information. We also provide the open source ConnPlotter tool as a means to create connectivity pattern tables. PMID:20140265

  2. Visual Working Memory Supports the Inhibition of Previously Processed Information: Evidence from Preview Search

    ERIC Educational Resources Information Center

    Al-Aidroos, Naseem; Emrich, Stephen M.; Ferber, Susanne; Pratt, Jay

    2012-01-01

    In four experiments we assessed whether visual working memory (VWM) maintains a record of previously processed visual information, allowing old information to be inhibited, and new information to be prioritized. Specifically, we evaluated whether VWM contributes to the inhibition (i.e., visual marking) of previewed distractors in a preview search.…

  3. Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.

    PubMed

    Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min

    2013-12-01

    Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidature search results to support rapid interaction for active learning while minimizing watching videos, and visualizing aggregated information of the search results. We demonstrate the system for searching spatiotemporal attributes from sports video to identify key instances of the team and player performance. PMID:24051777

  4. Polygon cluster pattern recognition based on new visual distance

    NASA Astrophysics Data System (ADS)

    Shuai, Yun; Shuai, Haiyan; Ni, Lin

    2007-06-01

    Pattern recognition in polygon clusters is one of the most prominent problems in spatial data mining. This paper studies the problem by combining principles of spatial cognition and the Gestalt principles of visual recognition with spatial clustering methods, and makes two contributions. First, it substantially refines the concept of "visual distance": the definition accounts not only for Euclidean distance, orientation difference, and size discrepancy, but also, crucially, for the degree of shape similarity between objects, and the distance is computed over a Delaunay triangulation. Second, the study adopts spatial clustering based on a minimum spanning tree (MST); the pruning algorithm introduces an automatic data-layering mechanism and applies simulated annealing for optimization. The work also suggests a broader research direction for GIS: as an inherently interdisciplinary field, GIS should remain open to diverse methods, and mature techniques from related disciplines can be adopted, provided they are adapted to the principles of GIS as a science of spatial cognition. Only then can GIS advance to a higher level.
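
    A composite "visual distance" of the kind the abstract describes might be sketched as a weighted mix of the four factors it names. The weights, feature encodings, and normalizations below are assumptions for illustration, not the paper's calibrated model:

```python
# Illustrative composite visual distance: Euclidean distance between
# centroids, orientation difference, size discrepancy, and shape
# dissimilarity, combined with assumed weights.

def visual_distance(a, b, weights=(0.4, 0.2, 0.2, 0.2)):
    """a, b: dicts with centroid (x, y), orientation in degrees, area,
    and a shape-descriptor vector of equal length."""
    d_euclid = ((a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2) ** 0.5
    d_orient = abs(a["orient"] - b["orient"]) / 180.0   # normalized to [0, 1]
    d_size = abs(a["area"] - b["area"]) / max(a["area"], b["area"])
    d_shape = sum(abs(p - q) for p, q in zip(a["shape"], b["shape"]))
    w1, w2, w3, w4 = weights
    return w1 * d_euclid + w2 * d_orient + w3 * d_size + w4 * d_shape

p = {"x": 0, "y": 0, "orient": 0, "area": 10, "shape": [0.2, 0.8]}
q = {"x": 3, "y": 4, "orient": 90, "area": 5, "shape": [0.3, 0.7]}
print(visual_distance(p, q))
```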

  5. Pattern Visual Evoked Potentials in Dyslexic versus Normal Children

    PubMed Central

    Heravian, Javad; Sobhani-Rad, Davood; Lari, Samaneh; Khoshsima, Mohamadjavad; Azimi, Abbas; Ostadimoghaddam, Hadi; Yekta, Abbasali; Hoseini-Yazdi, Seyed Hosein

    2015-01-01

    Purpose: The presence of neurophysiological abnormalities in dyslexia has been a controversial issue. This study was performed to evaluate the role of sensory visual deficits in the pathogenesis of dyslexia. Methods: Pattern visual evoked potentials (PVEP) were recorded in 72 children including 36 children with dyslexia and 36 children without dyslexia (controls) who were matched for age, sex and intelligence. Two check sizes of 15 and 60 min of arc were used with temporal frequencies of 1.5 Hz for transient and 6 Hz for steady-state methods. Results: Mean latency and amplitude values for 15 min arc and 60 min arc check sizes using steady state and transient methods showed no significant difference between the two study groups (P values: 0.139/0.481/0.356/0.062). Furthermore, no significant difference was observed between the two PVEP methods in dyslexic and normal children using 60 min arc with high contrast (P values: 0.116, 0.402, 0.343 and 0.106). Conclusion: PVEP is a sensitive and valid method for detecting visual deficits in children with dyslexia; however, no significant difference was found between dyslexic and normal children using high-contrast stimuli. PMID:26730313

  6. Bicycle accidents and drivers' visual search at left and right turns.

    PubMed

    Summala, H; Pasanen, E; Räsänen, M; Sievänen, J

    1996-03-01

    The accident database of the City of Helsinki shows that when drivers cross a cycle path as they enter a non-signalized intersection, the clearly dominant type of car-cycle crashes is that in which a cyclist comes from the right and the driver is turning right, in marked contrast to the cases with drivers turning left (Pasanen 1992; City of Helsinki, Traffic Planning Department, Report L4). This study first tested an explanation that drivers turning right simply focus their attention on the cars coming from the left (those coming from the right posing no threat to them) and fail to see the cyclist from the right early enough. Drivers' scanning behavior was studied at two T-intersections. Two well-hidden video cameras were used, one to measure the head movements of the approaching drivers and the other one to measure speed and distance from the cycle crossroad. The results supported the hypothesis: the drivers turning right scanned the right leg of the T-intersection less frequently and later than those turning left. Thus, it appears that drivers develop a visual scanning strategy which concentrates on detection of more frequent and major dangers but ignores and may even mask visual information on less frequent dangers. The second part of the study evaluated different countermeasures, including speed humps, in terms of drivers' visual search behavior. The results suggested that speed-reducing countermeasures changed drivers' visual search patterns in favor of the cyclists coming from the right, presumably at least in part due to the fact that drivers were simply provided with more time to focus on each direction. PMID:8703272

  7. High or Low Target Prevalence Increases the Dual-Target Cost in Visual Search

    ERIC Educational Resources Information Center

    Menneer, Tamaryn; Donnelly, Nick; Godwin, Hayward J.; Cave, Kyle R.

    2010-01-01

    Previous studies have demonstrated a dual-target cost in visual search. In the current study, the relationship between search for one and search for two targets was investigated to examine the effects of target prevalence and practice. Color-shape conjunction stimuli were used with response time, accuracy and signal detection measures. Performance…

  8. Searching for Signs, Symbols, and Icons: Effects of Time of Day, Visual Complexity, and Grouping

    ERIC Educational Resources Information Center

    McDougall, Sine; Tyrer, Victoria; Folkard, Simon

    2006-01-01

    Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual…

  9. Visual Search Is Postponed during the Attentional Blink until the System Is Suitably Reconfigured

    ERIC Educational Resources Information Center

    Ghorashi, S. M. Shahab; Smilek, Daniel; Di Lollo, Vincent

    2007-01-01

    J. S. Joseph, M. M. Chun, and K. Nakayama (1997) found that pop-out visual search was impaired as a function of intertarget lag in an attentional blink (AB) paradigm in which the 1st target was a letter and the 2nd target was a search display. In 4 experiments, the present authors tested the implication that search efficiency should be similarly…

  10. Animating streamlines with repeated asymmetric patterns for steady flow visualization

    NASA Astrophysics Data System (ADS)

    Yeh, Chih-Kuo; Liu, Zhanping; Lee, Tong-Yee

    2012-01-01

    Animation provides intuitive cueing for revealing essential spatial-temporal features of data in scientific visualization. This paper explores the design of Repeated Asymmetric Patterns (RAPs) in animating evenly-spaced color-mapped streamlines for dense accurate visualization of complex steady flows. We present a smooth cyclic variable-speed RAP animation model that performs velocity (magnitude) integral luminance transition on streamlines. This model is extended with inter-streamline synchronization in luminance varying along the tangential direction to emulate orthogonal advancing waves from a geometry-based flow representation, and then with evenly-spaced hue differing in the orthogonal direction to construct tangential flow streaks. To weave these two mutually dual sets of patterns, we propose an energy-decreasing strategy that adopts an iterative yet efficient procedure for determining the luminance phase and hue of each streamline in HSL color space. We also employ adaptive luminance interleaving in the direction perpendicular to the flow to increase the contrast between streamlines.
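
    The cyclic variable-speed luminance model can be outlined as a phase that accumulates the velocity-magnitude integral along the streamline and advances with time. The period, waveform, and sampling scheme below are assumptions, not the paper's exact formulation:

```python
# Loose sketch of cyclic luminance transition along a streamline: the phase
# grows with the integral of velocity magnitude, so faster flow produces a
# faster-moving animated pattern.

import math

def luminance(arc_speeds, ds, t, period=1.0):
    """arc_speeds: velocity magnitudes sampled along the streamline at
    spacing ds. Returns one luminance value in [0, 1] per sample at time t."""
    lums = []
    phase = 0.0
    for v in arc_speeds:
        phase += v * ds                 # velocity-magnitude integral
        cyc = (phase - t) % period      # cyclic, advances as t grows
        lums.append(0.5 + 0.5 * math.sin(2 * math.pi * cyc / period))
    return lums

print(luminance([1.0, 2.0, 1.0], ds=0.1, t=0.0))
```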

  11. Dynamic Analysis and Pattern Visualization of Forest Fires

    PubMed Central

    Lopes, António M.; Tenreiro Machado, J. A.

    2014-01-01

    This paper analyses forest fires from the perspective of dynamical systems. Forest fires exhibit complex correlations in size, space and time, revealing features often present in complex systems, such as the absence of a characteristic length-scale, or the emergence of long range correlations and persistent memory. This study addresses a public domain forest fires catalogue, containing information of events for Portugal, during the period from 1980 up to 2012. The data are analysed on an annual basis, modelling the occurrences as sequences of Dirac impulses with amplitude proportional to the burnt area. First, we consider mutual information to correlate annual patterns. We use visualization trees, generated by hierarchical clustering algorithms, in order to compare and to extract relationships among the data. Second, we adopt the Multidimensional Scaling (MDS) visualization tool. MDS generates maps where each object corresponds to a point. Objects that are perceived to be similar to each other are placed on the map forming clusters. The results are analysed in order to extract relationships among the data and to identify forest fire patterns. PMID:25137393
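
    The first analysis step, correlating annual patterns via mutual information, can be sketched by discretizing two amplitude sequences into bins. This is a plain-histogram illustration, not the paper's exact estimator over Dirac-impulse sequences:

```python
# Mutual information between two annual burnt-area sequences after binning
# the amplitudes. Data and bin count are invented for illustration.

import math

def mutual_information(xs, ys, bins=3):
    lo, hi = min(xs + ys), max(xs + ys)
    width = (hi - lo) / bins or 1.0
    bx = [min(int((x - lo) / width), bins - 1) for x in xs]
    by = [min(int((y - lo) / width), bins - 1) for y in ys]
    n = len(xs)
    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            pxy = sum(1 for a, b in zip(bx, by) if a == i and b == j) / n
            px = bx.count(i) / n
            py = by.count(j) / n
            if pxy > 0:
                mi += pxy * math.log2(pxy / (px * py))
    return mi

year_a = [0.1, 0.2, 0.9, 0.8, 0.1, 0.2]
year_b = [0.1, 0.1, 0.8, 0.9, 0.2, 0.1]  # similar pattern -> high MI
print(mutual_information(year_a, year_b))
```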

  12. Electrophysiological measurement of information flow during visual search

    PubMed Central

    Cosman, Joshua D.; Arita, Jason T.; Ianni, Julianna D.; Woodman, Geoffrey F.

    2016-01-01

    The temporal relationship between different stages of cognitive processing is long debated. This debate is ongoing primarily because it is often difficult to measure the time course of multiple cognitive processes simultaneously. We employed a manipulation that allowed us to isolate ERP components related to perceptual processing, working memory, and response preparation, and then examined the temporal relationship between these components while observers performed a visual search task. We found that when response speed and accuracy were equally stressed, our index of perceptual processing ended before both the transfer of information into working memory and response preparation began. However, when we stressed speed over accuracy, response preparation began before the completion of perceptual processing or the transfer of information into working memory on trials with the fastest reaction times. These findings show that individuals can control the flow of information transmission between stages, either waiting for perceptual processing to be completed before preparing a response or configuring these stages to overlap in time. PMID:26669285

  13. Exploiting visual search theory to infer social interactions

    NASA Astrophysics Data System (ADS)

    Rota, Paolo; Dang-Nguyen, Duc-Tien; Conci, Nicola; Sebe, Nicu

    2013-03-01

    In this paper we propose a new method to infer human social interactions using techniques typically adopted in the literature for visual search and information retrieval. The main piece of information we use to discriminate among different types of interactions is provided by proxemics cues acquired by a tracker, which are used to distinguish between intentional and casual interactions. The proxemics information is acquired through the analysis of two different metrics: on the one hand we observe the current distance between subjects, and on the other hand we measure the O-space synergy between subjects. The obtained values are taken at every time step over a temporal sliding window, and processed in the Discrete Fourier Transform (DFT) domain. The features are eventually merged into a unique array and clustered using the K-means algorithm. The clusters are reorganized using a second, larger temporal window into a Bag of Words framework, so as to build the feature vector that feeds the SVM classifier.
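
The front end of this pipeline, a distance series passed through a sliding window and into the DFT domain, can be sketched as follows. The window size, step, and the toy "steady interaction" series are illustrative assumptions, and the K-means/Bag-of-Words/SVM stages are omitted.

```python
import cmath

def dft_magnitudes(window):
    """Magnitude spectrum of one window of inter-subject distances."""
    n = len(window)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(window))) / n
            for k in range(n // 2 + 1)]

def windowed_features(distances, size=8, step=4):
    """Slide a window over the distance series; one DFT feature vector per step."""
    return [dft_magnitudes(distances[i:i + size])
            for i in range(0, len(distances) - size + 1, step)]

# A steady (intentional-looking) interaction keeps a near-constant distance,
# so spectral energy concentrates in the DC (k = 0) bin.
steady = [1.0] * 16
feats = windowed_features(steady)
print(len(feats), feats[0][0])
```

Each feature vector would then be clustered, and the cluster labels within a longer window histogrammed into the Bag-of-Words representation the abstract describes.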

  14. Immaturity of the Oculomotor Saccade and Vergence Interaction in Dyslexic Children: Evidence from a Reading and Visual Search Study

    PubMed Central

    Bucci, Maria Pia; Nassibi, Naziha; Gerard, Christophe-Loic; Bui-Quoc, Emmanuel; Seassau, Magali

    2012-01-01

    Studies comparing binocular eye movements during reading and visual search in dyslexic children are, to our knowledge, nonexistent. In the present study we examined ocular motor characteristics in dyslexic children versus two groups of non-dyslexic children matched for chronological or reading age. Binocular eye movements were recorded by an infrared system (mobileEBT®, e(ye)BRAIN) in twelve dyslexic children (mean age 11 years) and in groups of chronological age-matched (N = 9) and reading age-matched (N = 10) non-dyslexic children. Two visual tasks were used: text reading and visual search. Independently of the task, the ocular motor behavior of dyslexic children was similar to that reported in reading age-matched non-dyslexic children: more numerous and longer fixations, as well as poor binocular coordination during and after saccades. In contrast, chronological age-matched non-dyslexic children showed fewer and shorter fixations in the reading task than in the visual search task; furthermore, their saccades were well yoked in both tasks. The atypical eye movement patterns observed in dyslexic children suggest a deficiency in visual attentional processing as well as an immaturity of the interaction between the ocular motor saccade and vergence systems. PMID:22438934

  15. Evolutionary pattern search algorithms for unconstrained and linearly constrained optimization

    SciTech Connect

    HART,WILLIAM E.

    2000-06-01

    The authors describe a convergence theory for evolutionary pattern search algorithms (EPSAs) on a broad class of unconstrained and linearly constrained problems. EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. The analysis significantly extends the previous convergence theory for EPSAs: it applies to a broader class of EPSAs, and to problems that are nonsmooth, have unbounded objective functions, and are linearly constrained. Further, the authors describe a modest change to the algorithmic framework of EPSAs for which a non-probabilistic convergence theory applies. These analyses are also noteworthy because they are considerably simpler than previous analyses of EPSAs.
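
The step-size adaptation that EPSAs apply to the mutation operator can be illustrated with a minimal (1+1)-style sketch using a generic expand-on-success / contract-on-failure rule. This is an illustration of the mechanism only, not the authors' algorithm; the expansion/contraction factors and the sphere test function are assumptions.

```python
import random

def adaptive_step_search(f, x0, step=1.0, min_step=1e-6, max_iter=2000, seed=0):
    """Minimize f by random mutation with a step size that widens after
    successful steps and contracts after failures."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if step < min_step:
            break
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            step *= 1.5          # success: widen exploration
        else:
            step *= 0.7          # failure: contract toward convergence
    return x, fx

sphere = lambda v: sum(t * t for t in v)
x_best, f_best = adaptive_step_search(sphere, [3.0, -2.0])
print(round(f_best, 6))
```

The self-tuning behavior is visible in how the step length tracks the remaining distance to the optimum: it stays large while large moves keep succeeding and shrinks as the search closes in.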

  16. Visual pattern recognition network: its training algorithm and its optoelectronic architecture

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Liu, Liren

    1996-07-01

    A visual pattern recognition network and its training algorithm are proposed. The network is constructed from a one-layer morphology network and a two-layer modified Hamming net. This visual network can implement pattern recognition that is invariant with respect to image translation and size projection. After supervised learning takes place, the visual network extracts image features and classifies patterns much the same as living beings do. Moreover, we set up its optoelectronic architecture for real-time pattern recognition.

  17. Prediction of shot success for basketball free throws: visual search strategy.

    PubMed

    Uchida, Yusuke; Mizuguchi, Nobuaki; Honda, Masaaki; Kanosue, Kazuyuki

    2014-01-01

    In ball games, players have to pay close attention to visual information in order to predict the movements of both the opponents and the ball. Previous studies have indicated that players primarily utilise cues concerning the ball and opponents' body motion. The information acquired must be effective for observing players to select the subsequent action. The present study evaluated the effects of changes in the video replay speed on the spatial visual search strategy and ability to predict free throw success. We compared eye movements made while observing a basketball free throw by novices and experienced basketball players. Correct response rates were close to chance (50%) at all video speeds for the novices. The correct response rate of experienced players was significantly above chance (and significantly above that of the novices) at the normal speed, but was not different from chance at both slow and fast speeds. Experienced players gazed more on the lower part of the player's body when viewing a normal speed video than the novices. The players likely detected critical visual information to predict shot success by properly moving their gaze according to the shooter's movements. This pattern did not change when the video speed was decreased, but changed when it was increased. These findings suggest that temporal information is important for predicting action outcomes and that such outcomes are sensitive to video speed. PMID:24319995

  18. Similarity and heterogeneity effects in visual search are mediated by "segmentability".

    PubMed

    Utochkin, Igor S; Yurevich, Maria A

    2016-07-01

    The heterogeneity of our visual environment typically reduces the speed with which a singleton target can be found. Visual search theories explain this phenomenon via nontarget similarities and dissimilarities that affect grouping, perceptual noise, and so forth. In this study, we show that increasing the heterogeneity of a display can facilitate rather than inhibit visual search for size and orientation singletons when heterogeneous features smoothly fill the transition between highly distinguishable nontargets. We suggest that this smooth transition reduces the "segmentability" of dissimilar items into otherwise separate subsets, causing the visual system to treat them as a near-homogenous set standing apart from a singleton. PMID:26784002

  19. Active sensing in the categorization of visual patterns

    PubMed Central

    Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M

    2016-01-01

    Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546
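
The core of an optimal active sensor is choosing the next fixation location to maximize expected information gain about the category. A minimal Bernoulli-observation sketch follows; the two candidate locations, their feature probabilities, and the uniform prior are toy assumptions, not the paper's stimuli or model.

```python
import math

def entropy(p):
    """Binary entropy in bits of a probability p."""
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Each location predicts the probability of observing a "patchy" feature
# under category A vs. category B (hypothetical numbers).
pred = {
    "center": (0.9, 0.1),   # diagnostic: categories disagree strongly here
    "edge":   (0.5, 0.5),   # uninformative: both categories predict the same
}

def expected_info_gain(p_a, loc):
    """Expected reduction in category uncertainty from fixating loc,
    given current belief p_a = P(category A)."""
    pa_f, pb_f = pred[loc]
    p_f = p_a * pa_f + (1 - p_a) * pb_f              # predictive prob of the feature
    post_f = p_a * pa_f / p_f if p_f else 0.0        # posterior if feature seen
    p_nf = 1 - p_f
    post_nf = p_a * (1 - pa_f) / p_nf if p_nf else 0.0
    return entropy(p_a) - (p_f * entropy(post_f) + p_nf * entropy(post_nf))

gains = {loc: expected_info_gain(0.5, loc) for loc in pred}
best = max(gains, key=gains.get)
print(best, round(gains[best], 3))
```

An active sensor fixates the diagnostic location first, which is the intuition behind the improved categorization performance when locations were revealed by the Bayesian algorithm.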

  20. Dementia alters standing postural adaptation during a visual search task in older adult men

    PubMed Central

    Jordan, Azizah J.; McCarten, J. Riley; Rottunda, Susan; Stoffregen, Thomas A.; Manor, Brad; Wade, Michael G.

    2015-01-01

    This study investigated the effects of dementia on standing postural adaptation during performance of a visual search task. We recruited 16 older adults with dementia and 15 without dementia. Postural sway was assessed by recording medial-lateral (ML) and anterior-posterior (AP) center-of-pressure when standing with and without a visual search task; i.e., counting target letter frequency within a block of displayed randomized letters. ML sway variability was significantly higher in those with dementia during visual search as compared to those without dementia, and compared to both groups during the control condition. AP sway variability was significantly greater in those with dementia as compared to those without dementia, irrespective of task condition. In the ML direction, the absolute and percent change in sway variability between the control condition and visual search (i.e., postural adaptation) was greater in those with dementia as compared to those without. In contrast, postural adaptation to visual search was similar between groups in the AP direction. As compared to those without dementia, those with dementia identified fewer letters on the visual task. In the non-dementia group only, greater increases in postural adaptation in both the ML and AP directions correlated with lower performance on the visual task. The observed relationship between postural adaptation during the visual search task and visual search task performance (in the non-dementia group only) suggests a critical link between perception and action. Dementia reduces the capacity to perform a visual-based task while standing and thus appears to disrupt this perception-action synergy. PMID:25770830
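
The sway measures above reduce to simple statistics of the center-of-pressure (COP) series. Defining sway variability as the standard deviation of COP displacement is a common convention and an assumption here; the sample values are toy numbers.

```python
import statistics

def sway_variability(cop_ml, cop_ap):
    """Sway variability per direction: standard deviation of the COP series."""
    return statistics.stdev(cop_ml), statistics.stdev(cop_ap)

def adaptation(control, task):
    """Absolute and percent change in sway variability between conditions."""
    absolute = task - control
    return absolute, 100.0 * absolute / control

ml_var, ap_var = sway_variability([0.1, -0.2, 0.3, -0.1], [0.05, 0.0, -0.05, 0.1])
abs_change, pct_change = adaptation(control=0.2, task=0.3)
print(round(abs_change, 3), round(pct_change, 1))
```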

  1. Threat modulation of visual search efficiency in PTSD: A comparison of distinct stimulus categories.

    PubMed

    Olatunji, Bunmi O; Armstrong, Thomas; Bilsky, Sarah A; Zhao, Mimi

    2015-10-30

    Although an attentional bias for threat has been implicated in posttraumatic stress disorder (PTSD), the cues that best facilitate this bias are unclear. Some studies utilize images and others utilize facial expressions that communicate threat. However, the comparability of these two types of stimuli in PTSD is unclear. The present study contrasted the effects of images and expressions with the same valence on visual search among veterans with PTSD and controls. Overall, PTSD patients had slower visual search speed than controls. Images caused greater disruption in visual search than expressions, and emotional content modulated this effect, with larger differences between images and expressions arising for more negatively valenced stimuli. However, this effect was not observed with the maximum number of items in the search array. Differences in visual search speed by images and expressions varied significantly between PTSD patients and controls only for anger, and only at the moderate level of task difficulty. Specifically, visual search speed did not significantly differ between PTSD patients and controls when exposed to angry expressions. However, PTSD patients displayed significantly slower visual search than controls when exposed to anger images. The implications of these findings for better understanding emotion-modulated attention in PTSD are discussed. PMID:26254798

  2. Visual Iconic Patterns of Instant Messaging: Steps Towards Understanding Visual Conversations

    NASA Astrophysics Data System (ADS)

    Bays, Hillary

    An Instant Messaging (IM) conversation is a dynamic communication register made up of text, images, animation and sound played out on a screen, with potentially several parallel conversations and activities, all within a physical environment. This article first examines how best to capture this unique gestalt using in situ recording techniques (video, screen capture, XML logs) which highlight the micro-phenomenal level of the exchange and the macro-social level of the interaction. Of particular interest are smileys, first as cultural artifacts in CMC in general and then as linguistic markers. A brief taxonomy of these markers is proposed in an attempt to clarify the frequency and patterns of their use. Then, focus is placed on their importance as perceptual cues which facilitate communication, while also serving as emotive and emphatic functional markers. We try to demonstrate that the use of smileys and animation is not arbitrary, but an organized interactional and structured practice. Finally, we discuss how the study of visual markers in IM could inform the study of other visual conversation codes, such as sign languages, which also have co-produced physical behavior, suggesting the possibility of a visual phonology.

  3. Computer vision enhances mobile eye-tracking to expose expert cognition in natural-scene visual-search tasks

    NASA Astrophysics Data System (ADS)

    Keane, Tommy P.; Cahill, Nathan D.; Tarduno, John A.; Jacobs, Robert A.; Pelz, Jeff B.

    2014-02-01

    Mobile eye-tracking provides a fairly unique opportunity to record and elucidate cognition in action. In our research, we are searching for patterns in, and distinctions between, the visual-search performance of experts and novices in the geosciences. Traveling to regions shaped by various geological processes as part of an introductory field studies course in geology, we record the prima facie gaze patterns of experts and novices when they are asked to determine the modes of geological activity that have formed the scene-view presented to them. Recording eye video and scene video in natural settings generates complex imagery that requires advanced applications of computer vision research to generate registrations and mappings between the views of separate observers. By developing such mappings, we can place many observers into a single mathematical space in which we can spatio-temporally analyze inter- and intra-subject fixations, saccades, and head motions. While working towards perfecting these mappings, we developed an updated experiment setup that allowed us to statistically analyze intra-subject eye-movement events without the need for a common domain. Through such analyses we are finding statistical differences between novices and experts in these visual-search tasks. In the course of this research we have developed a unified, open-source software framework for processing, visualization, and interaction of mobile eye-tracking and high-resolution panoramic imagery.

  4. Visual search in scenes involves selective and non-selective pathways

    PubMed Central

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734

  5. Pattern visual evoked potentials in the assessment of objective visual acuity in amblyopic children.

    PubMed

    Gundogan, Fatih C; Mutlu, Fatih M; Altinsoy, H Ibrahim; Tas, Ahmet; Oz, Oguzhan; Sobaci, Gungor

    2010-08-01

    The aim of this study was to determine the value of pattern visual evoked potentials (PVEP) to five consecutive check size patterns in the assessment of visual acuity (VA) in children. One hundred unilateral amblyopic children (study group) and 90 healthy children with best-corrected visual acuity (BCVA) of 1.0 (control group) were planned to be included. PVEP responses to five consecutive check sizes (2°, 1°, 30', 15', and 7'), which are assumed to correspond to VAs of 0.1, 0.2, 0.4, 0.7 and 1.0 Snellen lines, were recorded in both groups. Eighty-five children in the study group (85.0%) and 74 children in the control group (82.2%) who cooperated well with PVEP testing were included. Normal values for latency, amplitude, and normalized interocular amplitude/latency difference for each check size were defined in the control group. PVEP-estimated VA (PVEP-VA) in the amblyopic eye was defined by the normal PVEP responses to the smallest check size associated with a normal interocular difference from the non-amblyopic eye, and was considered predictive if it was within +/-1 Snellen line (1 decimal) of the BCVA in that eye. Mean age was 9.7 +/- 1.9 and 9.9 +/- 2.2 years in the study and control groups, respectively. LogMAR (logarithm of the minimum angle of resolution) Snellen acuity was well correlated with logMAR PVEP-VA (r = 0.525, P < 0.001) in the study group. The Snellen line discrepancy between BCVA and PVEP-VA was within +/-1 Snellen line in 57.6% of the eyes. PVEP to five consecutive check sizes may predict objective VA in amblyopic children. PMID:20376691

  6. Strategies of the honeybee Apis mellifera during visual search for vertical targets presented at various heights: a role for spatial attention?

    PubMed Central

    Morawetz, Linde; Chittka, Lars; Spaethe, Johannes

    2014-01-01

    When honeybees are presented with a colour discrimination task, they tend to choose swiftly and accurately when objects are presented in the ventral part of their frontal visual field. In contrast, poor performance is observed when objects appear in the dorsal part. Here we investigate if this asymmetry is caused by fixed search patterns or if bees can use alternative search mechanisms such as spatial attention, which allows flexible focusing on different areas of the visual field. We asked individual honeybees to choose an orange rewarded target among blue distractors. Target and distractors were presented in the ventral visual field, the dorsal field or both. Bees presented with targets in the ventral visual field consistently had the highest search efficiency, with rapid decisions, high accuracy and direct flight paths. In contrast, search performance for dorsally located targets was inaccurate and slow at the beginning of the test phase, but bees increased their search performance significantly after a few learning trials: they found the target faster, made fewer errors and flew in a straight line towards the target. However, bees needed thrice as long to improve the search for a dorsally located target when the target’s position changed randomly between the ventral and the dorsal visual field. We propose that honeybees form expectations of the location of the target’s appearance and adapt their search strategy accordingly. Different possible mechanisms of this behavioural adaptation are discussed. PMID:25254109

  7. Plans, Patterns, and Move Categories Guiding a Highly Selective Search

    NASA Astrophysics Data System (ADS)

    Trippen, Gerhard

    In this paper we present our ideas for an Arimaa-playing program (also called a bot) that uses plans and pattern matching to guide a highly selective search. We restrict move generation to moves in certain move categories to reduce the number of moves considered by the bot significantly. Arimaa is a modern board game that can be played with a standard Chess set. However, the rules of the game are not at all like those of Chess. Furthermore, Arimaa was designed to be as simple and intuitive as possible for humans, yet challenging for computers. While all established Arimaa bots use alpha-beta search with a variety of pruning techniques and other heuristics ending in an extensive positional leaf node evaluation, our new bot, Rat, starts with a positional evaluation of the current position. Based on features found in the current position - supported by pattern matching using a directed position graph - our bot Rat decides which of a given set of plans to follow. The plan then dictates what types of moves can be chosen. This is another major difference from bots that generate "all" possible moves for a particular position. Rat is only allowed to generate moves that belong to certain categories. Leaf nodes are evaluated only by a straightforward material evaluation to help avoid moves that lose material. This highly selective search looks, on average, at only 5 moves out of 5,000 to over 40,000 possible moves in a middle game position.

  8. Generalized Pattern Search Algorithm for Peptide Structure Prediction

    PubMed Central

    Nicosia, Giuseppe; Stracquadanio, Giovanni

    2008-01-01

    Finding the near-native structure of a protein is one of the most important open problems in structural biology and biological physics. The problem becomes dramatically more difficult when a given protein has no regular secondary structure or it does not show a fold similar to structures already known. This situation occurs frequently when we need to predict the tertiary structure of small molecules, called peptides. In this research work, we propose a new ab initio algorithm, the generalized pattern search algorithm, based on the well-known class of Search-and-Poll algorithms. We performed an extensive set of simulations over a well-known set of 44 peptides to investigate the robustness and reliability of the proposed algorithm, and we compared the peptide conformation with a state-of-the-art algorithm for peptide structure prediction known as PEPstr. In particular, we tested the algorithm on the instances proposed by the originators of PEPstr, to validate the proposed algorithm; the experimental results confirm that the generalized pattern search algorithm outperforms PEPstr by 21.17% in terms of average root mean-square deviation, RMSD Cα. PMID:18487293
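
The RMSD Cα measure used in the comparison is straightforward once two structures are superimposed. A minimal sketch follows; the coordinates are toy values, and the superposition/alignment step is assumed to have been done already.

```python
import math

def rmsd(coords_a, coords_b):
    """Root mean-square deviation between two equal-length lists of (x, y, z)
    positions, e.g. the C-alpha atoms of a predicted vs. reference peptide.
    Assumes the structures are already optimally superimposed."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Toy 3-residue backbone: the model is the native shifted 1 Angstrom in z.
native = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0)]
model = [(0.0, 0.0, 1.0), (3.8, 0.0, 1.0), (7.6, 0.0, 1.0)]
print(rmsd(native, model))  # every C-alpha displaced by 1 A -> RMSD 1.0
```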

  9. Individual differences in visual search: relationship to autistic traits, discrimination thresholds, and speed of processing.

    PubMed

    Brock, Jon; Xu, Jing Y; Brooks, Kevin R

    2011-01-01

    Enhanced visual search is widely reported in autism. Here we note a similar advantage for university students self-reporting higher levels of autism-like traits. Contrary to prevailing theories of autism, performance was not associated with perceptual-discrimination thresholds for the same stimuli, but was associated with inspection-time threshold--a measure of speed of perceptual processing. Enhanced visual search in autism may, therefore, at least partially be explained by faster speed of processing. PMID:21936301

  10. Development of a flow visualization apparatus. [to study convection flow patterns

    NASA Technical Reports Server (NTRS)

    Spradley, L. W.

    1975-01-01

    The use of an optical flow visualization device for studying convection flow patterns was investigated. The investigation considered the use of shadowgraph, schlieren, and other means of visualizing the flow. A laboratory model was set up to provide data on the proper optics and photography procedures to best visualize the flow. A preliminary design of a flow visualization system is provided as a result of the study. Recommendations are given for a flight test program utilizing the flow visualization apparatus.

  11. Locally-adaptive and memetic evolutionary pattern search algorithms.

    PubMed

    Hart, William E

    2003-01-01

    Recent convergence analyses of evolutionary pattern search algorithms (EPSAs) have shown that these methods have a weak stationary point convergence theory for a broad class of unconstrained and linearly constrained problems. This paper describes how the convergence theory for EPSAs can be adapted to allow each individual in a population to have its own mutation step length (similar to the design of evolutionary programming and evolution strategies algorithms). These are called locally-adaptive EPSAs (LA-EPSAs) since each individual's mutation step length is independently adapted in different local neighborhoods. The paper also describes a variety of standard formulations of evolutionary algorithms that can be used for LA-EPSAs. Further, it is shown how this convergence theory can be applied to memetic EPSAs, which use local search to refine points within each iteration. PMID:12804096

  12. Generalized pattern search algorithms with adaptive precision function evaluations

    SciTech Connect

    Polak, Elijah; Wetter, Michael

    2003-05-14

    In the literature on generalized pattern search algorithms, convergence to a stationary point of a once continuously differentiable cost function is established under the assumption that the cost function can be evaluated exactly. However, there is a large class of engineering problems where the numerical evaluation of the cost function involves the solution of systems of differential algebraic equations. Since the termination criteria of the numerical solvers often depend on the design parameters, computer code for solving these systems usually defines a numerical approximation to the cost function that is discontinuous with respect to the design parameters. Standard generalized pattern search algorithms have been applied heuristically to such problems, but no convergence properties have been stated. In this paper we extend a class of generalized pattern search algorithms to a form that uses adaptive precision approximations to the cost function. These numerical approximations need not define a continuous function. Our algorithms can be used for solving linearly constrained problems with cost functions that are at least locally Lipschitz continuous. Assuming that the cost function is smooth, we prove that our algorithms converge to a stationary point. Under the weaker assumption that the cost function is only locally Lipschitz continuous, we show that our algorithms converge to points at which the Clarke generalized directional derivatives are nonnegative in predefined directions. An important feature of our adaptive precision scheme is the use of coarse approximations in the early iterations, with the approximation precision controlled by a test. Such an approach leads to substantial time savings in minimizing computationally expensive functions.
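
The coarse-to-fine idea, tightening both the mesh and the approximation precision when a poll fails, can be sketched with a compass search on a hypothetical approximate cost function. Everything here is an illustrative assumption (the stand-in `noisy_f`, its bounded-bias error model, and the halving schedule), not the authors' scheme.

```python
def noisy_f(x, eps):
    """Hypothetical stand-in for an expensive simulation: the true cost plus a
    solver error bounded by the requested precision eps."""
    true = (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
    return true + eps * 0.5   # deterministic bias within the eps tolerance

def pattern_search_adaptive(x0, step=1.0, eps=0.5, tol=1e-4):
    """Compass search that starts with coarse function approximations and
    tightens precision (eps) together with the mesh after failed polls."""
    x = list(x0)
    fx = noisy_f(x, eps)
    while step > tol:
        improved = False
        for d in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
            cand = [x[0] + d[0], x[1] + d[1]]
            fc = noisy_f(cand, eps)
            if fc < fx:
                x, fx = cand, fc
                improved = True
                break
        if not improved:
            step *= 0.5               # refine the mesh...
            eps *= 0.5                # ...and request a more precise evaluation
            fx = noisy_f(x, eps)      # re-evaluate the incumbent at new precision
    return x

sol = pattern_search_adaptive([4.0, 4.0])
print(round(sol[0], 2), round(sol[1], 2))  # converges to the minimizer (1, -2)
```

The key point matches the abstract: early iterations compare cheap, coarse evaluations, and precision is only increased when the coarse model can no longer distinguish candidates.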

  13. Person perception informs understanding of cognition during visual search.

    PubMed

    Brennan, Allison A; Watson, Marcus R; Kingstone, Alan; Enns, James T

    2011-08-01

    Does person perception--the impressions we form from watching others--hold clues to the mental states of people engaged in cognitive tasks? We investigated this with a two-phase method: In Phase 1, participants searched on a computer screen (Experiment 1) or in an office (Experiment 2); in Phase 2, other participants rated the searchers' video-recorded behavior. The results showed that blind raters are sensitive to individual differences in search proficiency and search strategy, as well as to environmental factors affecting search difficulty. Also, different behaviors were linked to search success in each setting: Eye movement frequency predicted successful search on a computer screen; head movement frequency predicted search success in an office. In both settings, an active search strategy and positive emotional expressions were linked to search success. These data indicate that person perception informs cognition beyond the scope of performance measures, offering the potential for new measurements of cognition that are both rich and unobtrusive. PMID:21626239

  14. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  15. Hand Movement Deviations in a Visual Search Task with Cross Modal Cuing

    ERIC Educational Resources Information Center

    Aslan, Asli; Aslan, Hurol

    2007-01-01

    The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants'…

  16. The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search

    ERIC Educational Resources Information Center

    Becker, Stefanie I.

    2010-01-01

    Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…

  17. Learning by Selection: Visual Search and Object Perception in Young Infants

    ERIC Educational Resources Information Center

    Amso, Dima; Johnson, Scott P.

    2006-01-01

    The authors examined how visual selection mechanisms may relate to developing cognitive functions in infancy. Twenty-two 3-month-old infants were tested in 2 tasks on the same day: perceptual completion and visual search. In the perceptual completion task, infants were habituated to a partly occluded moving rod and subsequently presented with …

  18. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    ERIC Educational Resources Information Center

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  19. The preview benefit in single-feature and conjunction search: Constraints of visual marking.

    PubMed

    Meinhardt, Günter; Persike, Malte

    2015-01-01

    Previewing distracters enhances the efficiency of visual search. Watson and Humphreys (1997) proposed that the preview benefit rests on visual marking, a mechanism which actively encodes distracter locations at preview and inhibits them afterwards at search. As Watson and Humphreys did, we used a letter-color search task to study constraints of visual marking in conjunction search and near-efficient single-feature search with single-colored and homogeneous distracter letters. Search performance was measured for fixed target and distracter features (block design) and for randomly changed features across trials (random design). In single-feature search there was a full preview benefit for both block and random designs. In conjunction search a full preview benefit was obtained only for the block design; randomly changing target and distracter features disrupted the preview benefit. However, the preview benefit was restored when the distracters were organized in spatially coherent blocks. These findings imply that the temporal segregation of old and new items is sufficient for visual marking in near-efficient single-feature search, while in conjunction search it is not. We propose a supplanting grouping principle for the preview benefit: When the new items add a new color, conjunction search is initialized and attentional resources are withdrawn from the marking mechanism. Visual marking can be restored by a second grouping principle that joins with temporal asynchrony. This principle can be either spatial or feature based. In the case of the latter, repetition priming is necessary to establish joint grouping by color and temporal asynchrony. PMID:26382004

  20. Long-Term Memory Search across the Visual Brain

    PubMed Central

    Fedurco, Milan

    2012-01-01

    Signal transmission from the human retina to visual cortex and connectivity of visual brain areas are relatively well understood. How specific visual perceptions transform into corresponding long-term memories remains unknown. Here, I will review recent Blood Oxygenation Level-Dependent functional Magnetic Resonance Imaging (BOLD fMRI) studies in humans together with molecular biology studies (animal models) aiming to understand how the retinal image gets transformed into so-called visual (retinotopic) maps. The broken object paradigm has been chosen in order to illustrate the complexity of multisensory perception of simple objects subject to visual (rather than semantic) memory encoding. The author explores how amygdala projections to the visual cortex affect memory formation and proposes the choice of experimental techniques needed to explain our massive visual memory capacity. Maintenance of the visual long-term memories is suggested to require recycling of GluR2-containing α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors (AMPAR) and β2-adrenoreceptors at the postsynaptic membrane, which critically depends on the catalytic activity of the N-ethylmaleimide-sensitive factor (NSF) and protein kinase PKMζ. PMID:22900206

  1. Contextual Cueing in Multiconjunction Visual Search Is Dependent on Color- and Configuration-Based Intertrial Contingencies

    ERIC Educational Resources Information Center

    Geyer, Thomas; Shi, Zhuanghua; Muller, Hermann J.

    2010-01-01

    Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…

  2. Serial and Parallel Attentive Visual Searches: Evidence from Cumulative Distribution Functions of Response Times

    ERIC Educational Resources Information Center

    Sung, Kyongje

    2008-01-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the…
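    The serial-versus-parallel logic in the abstract above can be illustrated with a toy simulation (a hedged sketch, not the study's model; the function name, timing parameters, and the self-terminating assumption are all invented for illustration): a serial self-terminating search predicts a mean response time that grows linearly with set size, while an unlimited-capacity parallel search predicts a flat function.

```python
import random

def simulate_rt(set_size, mode, per_item=0.05, base=0.3, trials=2000, seed=0):
    """Toy mean-RT simulation (illustrative assumption, not the paper's model).

    Serial self-terminating search: the target is found after inspecting a
    random number of items, so mean RT grows linearly with set size.
    Parallel (unlimited-capacity) search: all items are processed at once,
    so mean RT is flat across set sizes.
    """
    rng = random.Random(seed)
    rts = []
    for _ in range(trials):
        if mode == "serial":
            inspected = rng.randint(1, set_size)  # items checked before the hit
            rt = base + per_item * inspected
        else:  # parallel
            rt = base + per_item
        rts.append(rt + rng.gauss(0, 0.02))  # additive decision/motor noise
    return sum(rts) / len(rts)
```

    With these toy parameters, mean RT rises by roughly `per_item / 2` per added display item in the serial case and stays flat in the parallel case; examining full cumulative distribution functions, as the study does, refines this mean-based signature.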

  3. Cortical Dynamics of Contextually Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    ERIC Educational Resources Information Center

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-01-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient…

  4. Brief Report: Eye Movements during Visual Search Tasks Indicate Enhanced Stimulus Discriminability in Subjects with PDD

    ERIC Educational Resources Information Center

    Kemner, Chantal; van Ewijk, Lizet; van Engeland, Herman; Hooge, Ignace

    2008-01-01

    Subjects with PDD excel on certain visuo-spatial tasks, amongst which visual search tasks, and this has been attributed to enhanced perceptual discrimination. However, an alternative explanation is that subjects with PDD show a different, more effective search strategy. The present study aimed to test both hypotheses, by measuring eye movements…

  5. Central and Peripheral Vision Loss Differentially Affects Contextual Cueing in Visual Search

    ERIC Educational Resources Information Center

    Geringswald, Franziska; Pollmann, Stefan

    2015-01-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental…

  6. Visual Search and Line Bisection in Hemianopia: Computational Modelling of Cortical Compensatory Mechanisms and Comparison with Hemineglect

    PubMed Central

    Lanyon, Linda J.; Barton, Jason J. S.

    2013-01-01

    Hemianopia patients have lost vision from the contralateral hemifield, but make behavioural adjustments to compensate for this field loss. As a result, their visual performance and behaviour contrast with those of hemineglect patients who fail to attend to objects contralateral to their lesion. These conditions differ in their ocular fixations and perceptual judgments. During visual search, hemianopic patients make more fixations in contralesional space while hemineglect patients make fewer. During line bisection, hemianopic patients fixate the contralesional line segment more and make a small contralesional bisection error, while hemineglect patients make few contralesional fixations and a larger ipsilesional bisection error. Hence, there is an attentional failure for contralesional space in hemineglect but a compensatory adaptation to attend more to the blind side in hemianopia. A challenge for models of visual attentional processes is to show how compensation is achieved in hemianopia, and why such processes are hindered or inaccessible in hemineglect. We used a neurophysiology-derived computational model to examine possible cortical compensatory processes in simulated hemianopia from a V1 lesion and compared results with those obtained with the same processes under conditions of simulated hemineglect from a parietal lesion. A spatial compensatory bias to increase attention contralesionally replicated hemianopic scanning patterns during visual search but not during line bisection. To reproduce the latter required a second process, an extrastriate lateral connectivity facilitating form completion into the blind field: this allowed accurate placement of fixations on contralesional stimuli and reproduced fixation patterns and the contralesional bisection error of hemianopia. Neither of these two cortical compensatory processes was effective in ameliorating the ipsilesional bias in the hemineglect model. Our results replicate normal and pathological patterns of…

  7. Performance of visual search tasks from various types of contour information.

    PubMed

    Itan, Liron; Yitzhaky, Yitzhak

    2013-03-01

    A recently proposed visual aid for patients with a restricted visual field (tunnel vision) combines a see-through head-mounted display and a simultaneous minified contour view of the wide-field image of the environment. Such a widening of the effective visual field is helpful for tasks such as visual search, mobility, and orientation. The sufficiency of image contours for performing everyday visual tasks is of major importance for this application, as well as for other applications, and for basic understanding of human vision. This research aims to examine and compare the use of different types of automatically created contours, and contour representations, for practical everyday visual operations using commonly observed images. The visual operations include searching for items such as cutlery, housewares, etc. Considering different recognition levels, identification of an object is distinguished from mere detection (when the object is not necessarily identified). Some nonconventional visual-based contour representations were developed for this purpose. Experiments were performed with normal-vision subjects by superposing contours of the wide field of the scene over a narrow-field (see-through) background. From the results, it appears that about 85% success is obtained for identification of searched-for objects when the best contour versions are employed. Pilot experiments with video simulations are reported at the end of the paper. PMID:23456115
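    The automatically created contours described above can be approximated with a minimal gradient-based edge operator (an illustrative sketch only; the paper's actual contour algorithms are not specified here, and the function name and threshold are hypothetical):

```python
import numpy as np

def contour_map(img, thresh=0.25):
    """Minimal contour extraction via central-difference gradients.

    A simple stand-in for automatically created contours; `img` is a
    2-D float array in [0, 1], and the return value is a binary mask.
    """
    gy = np.zeros_like(img)
    gx = np.zeros_like(img)
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical gradient
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal gradient
    return np.hypot(gx, gy) > thresh         # threshold gradient magnitude
```

    A mask like this could then be minified and superposed over the see-through background, in the spirit of the aid described above.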

  8. Visual height intolerance and acrophobia: clinical characteristics and comorbidity patterns.

    PubMed

    Kapfhammer, Hans-Peter; Huppert, Doreen; Grill, Eva; Fitz, Werner; Brandt, Thomas

    2015-08-01

    The purpose of this study was to estimate the general population lifetime and point prevalence of visual height intolerance (vHI) and acrophobia, to define their clinical characteristics, and to determine their anxious and depressive comorbidities. A case-control study was conducted within a German population-based cross-sectional telephone survey. A representative sample of 2,012 individuals aged 14 and above was selected. Defined neurological conditions (migraine, Menière's disease, motion sickness), symptom pattern, age of first manifestation, precipitating height stimuli, course of illness, psychosocial impairment, and comorbidity patterns (anxiety conditions, depressive disorders according to DSM-IV-TR) for vHI and acrophobia were assessed. The lifetime prevalence of vHI was 28.5% (women 32.4%, men 24.5%). Initial attacks occurred predominantly (36%) in the second decade. A rapid generalization to other height stimuli and a chronic course of illness with at least moderate impairment were observed. A total of 22.5% of individuals with vHI experienced attacks of panic intensity. The lifetime prevalence of acrophobia was 6.4% (women 8.6%, men 4.1%), and point prevalence was 2.0% (women 2.8%; men 1.1%). vHI and, even more so, acrophobia were associated with high rates of comorbid anxious and depressive conditions. Migraine was both a significant predictor of later acrophobia and a significant consequence of previous acrophobia. vHI affects nearly a third of the general population; in more than 20% of these persons, vHI occasionally develops into panic attacks, and in 6.4% it escalates to acrophobia. Symptoms and degree of social impairment form a continuum of mild to seriously distressing conditions in susceptible subjects. PMID:25262317

  9. Pigeons show efficient visual search by category: effects of typicality and practice.

    PubMed

    Ohkita, Midori; Jitsumori, Masako

    2012-11-01

    Three experiments investigated category search in pigeons, using an artificial category created by morphing of human faces. Four pigeons were trained to search for category members among nonmembers, with each target item consisting of an item-specific component and a common component diagnostic of the category. Experiment 1 found that search was more efficient with homogeneous than heterogeneous distractors. In Experiment 2, the pigeons successfully searched for target exemplars having novel item-specific components. Practice including these items enabled the pigeons to efficiently search for the highly familiar members. The efficient search transferred immediately to more typical novel exemplars in Experiment 3. With further practice, the pigeons eventually developed efficient search for individual less typical exemplars. Results are discussed in the context of visual search theories and automatic processing of individual exemplars. PMID:23022550

  10. Mouse Visual Neocortex Supports Multiple Stereotyped Patterns of Microcircuit Activity

    PubMed Central

    Sadovsky, Alexander J.

    2014-01-01

    Spiking correlations between neocortical neurons provide insight into the underlying synaptic connectivity that defines cortical microcircuitry. Here, using two-photon calcium fluorescence imaging, we observed the simultaneous dynamics of hundreds of neurons in slices of mouse primary visual cortex (V1). Consistent with a balance of excitation and inhibition, V1 dynamics were characterized by a linear scaling between firing rate and circuit size. Using lagged firing correlations between neurons, we generated functional wiring diagrams to evaluate the topological features of V1 microcircuitry. We found that circuit connectivity exhibited both cyclic graph motifs, indicating recurrent wiring, and acyclic graph motifs, indicating feedforward wiring. After overlaying the functional wiring diagrams onto the imaged field of view, we found properties consistent with Rentian scaling: wiring diagrams were topologically efficient because they minimized wiring with a modular architecture. Within single imaged fields of view, V1 contained multiple discrete circuits that were overlapping and highly interdigitated but were still distinct from one another. The majority of neurons that were shared between circuits displayed peri-event spiking activity whose timing was specific to the active circuit, whereas spike times for a smaller percentage of neurons were invariant to circuit identity. These data provide evidence that V1 microcircuitry exhibits balanced dynamics, is efficiently arranged in anatomical space, and is capable of supporting a diversity of multineuron spike firing patterns from overlapping sets of neurons. PMID:24899701